Search Results: "iwj"

21 March 2024

Ian Jackson: How to use Rust on Debian (and Ubuntu, etc.)

tl;dr: Don't just apt install rustc cargo. Either do that and make sure to use only Rust libraries from your distro (with the tiresome config runes below); or, just use rustup.

Don't do the obvious thing; it's never what you want

Debian ships a Rust compiler, and a large number of Rust libraries. But if you just do things the obvious default way, with apt install rustc cargo, you will end up using Debian's compiler but upstream libraries, directly and uncurated from crates.io. This is not what you want. There are about two reasonable things to do, depending on your preferences.

Q. Download and run whatever code from the internet?

The key question is this: are you comfortable downloading code, directly from hundreds of upstream Rust package maintainers, and running it? That's what cargo does. It's one of the main things it's for. Debian's cargo behaves, in this respect, just like upstream's. Let me say that again: Debian's cargo promiscuously downloads code from crates.io just like upstream cargo. So if you use Debian's cargo in the most obvious way, you are still downloading and running all those random libraries. The only thing you're avoiding downloading is the Rust compiler itself, which is precisely the part that is most carefully maintained, and of least concern. Debian's cargo can even download from crates.io when you're building official Debian source packages written in Rust: if you run dpkg-buildpackage, the downloading is suppressed; but a plain cargo build will try to obtain and use dependencies from the upstream ecosystem. (Happily, if you do this, it's quite likely to bail out early due to version mismatches, before actually downloading anything.)

Option 1: WTF, no, I don't want curl | bash

OK, but then you must limit yourself to libraries available within Debian. Each Debian release provides a curated set. It may or may not be sufficient for your needs. Many capable programs can be written using the packages in Debian. But any upstream Rust project that you encounter is likely to be a pain to get working, unless its maintainers specifically intend to support this. (This is fairly rare, and the Rust tooling doesn't make it easy.) To go with this plan, apt install rustc cargo and put this in your configuration, in $HOME/.cargo/config.toml:
[source.debian-packages]
directory = "/usr/share/cargo/registry"
[source.crates-io]
replace-with = "debian-packages"
This causes cargo to look in /usr/share for dependencies, rather than downloading them from crates.io. You must then install the librust-FOO-dev packages for each of your dependencies, with apt. This will allow you to write your own program in Rust, and build it using cargo build.

Option 2: Biting the curl | bash bullet

If you want to build software that isn't specifically targeted at Debian's Rust, you will probably need to use packages from crates.io, not from Debian. If you're going to do that, there is little point not using rustup to get the latest compiler. rustup's install rune is alarming, but cargo will be doing exactly the same kind of thing, only worse (because it trusts many more people) and more hidden. So in this case: do run the curl | bash install rune. Hopefully the Rust project you are trying to build has shipped a Cargo.lock; that contains hashes of all the dependencies that they last used and tested. If you run cargo build --locked, cargo will only use those versions, which are hopefully OK. And you can run cargo audit to see if there are any reported vulnerabilities or problems. But you'll have to bootstrap this with cargo install --locked cargo-audit; cargo-audit is from the RUSTSEC folks, who do care about this kind of thing, so hopefully running their code (and their dependencies) is fine. Note the --locked, which is needed because cargo's default behaviour is wrong.

Privilege separation

This approach is rather alarming. For my personal use, I wrote a privsep tool which allows me to run all this upstream Rust code as a separate user. That tool is nailing-cargo. It's not particularly well productised, or tested, but it does work for at least one person besides me. You may wish to try it out, or consider alternative arrangements. Bug reports and patches welcome.

OMG, what a mess

Indeed. There are a large number of technical and social factors at play. cargo itself is deeply troubling, both in principle, and in detail. I often find myself severely disappointed with its maintainers' decisions. In mitigation, much of the wider Rust upstream community does take this kind of thing very seriously, and often makes good choices. RUSTSEC is one of the results. Debian's technical arrangements for Rust packaging are quite dysfunctional, too: IMO the scheme is based on fundamentally wrong design principles. But the Debian Rust packaging team is dynamic, constantly working the update treadmills; and the team is generally welcoming and helpful. Sadly, last time I explored the possibility, the Debian Rust Team didn't have the appetite for more fundamental changes to the workflow (including, for example, changes to dependency version handling). Significant improvements to upstream cargo's approach seem unlikely, too; we can only hope that eventually someone might manage to supplant it.
edited 2024-03-21 21:49 to add a cut tag



26 November 2023

Ian Jackson: Hacking my filter coffee machine

I hacked my coffee machine to let me turn it on from upstairs in bed :-). Read on for explanation, circuit diagrams, 3D models, firmware source code, and pictures.

Background: the Morphy Richards filter coffee machine

I have a Morphy Richards filter coffee machine. It makes very good coffee. But the display and firmware are quite annoying. Also, I'm lazy and wanted to be able to cause coffee to exist from upstairs in bed, without having to make a special trip down just to turn the machine on.

Planning

My original feeling was "I can't be bothered dealing with the coffee machine innards", so I thought I would make a mechanical contraption to physically press the coffee machine's on button. I could have my contraption press the button to turn the machine on (timed, or triggered remotely), and then periodically in pairs to reset the 25-minute keep-warm timer. But a friend pointed me at a blog post by Andy Bradford, where Andy recounts modifying his coffee machine, adding an ESP8266 and connecting it to his MQTT-based Home Assistant setup. I looked at the pictures and they looked very similar to my machine. I decided to take a look inside.

Inside the Morphy Richards filter coffee machine

My coffee machine seemed to be very similar to Andy's. His disassembly report was very helpful. Inside I found the high-voltage parts with the heating elements, and the front panel with the display and buttons. I spent a while poking about, measuring things, and so on.

Unexpected electrical hazard

At one point I wanted to use my storage oscilloscope to capture the duration and amplitude of the beep signal. I needed to connect the scope ground to the UI board's ground plane, but then when I switched the coffee machine on at the wall socket, it tripped the house's RCD. It turns out that the low-voltage UI board is coupled to the mains. In my setting, there's an offset of about 8V between the UI board ground plane, and true earth. (In my house the neutral is about 2-3V away from true earth.) This alarmed me rather. To me, this means that my modifications needed to still properly electrically isolate everything connected to the UI board from anything external to the coffee machine's housing. In Andy's design, I think the internal UI board ground plane is directly brought out to an external USB-A connector. This means that if there were a neutral fault, the USB-A connector would be at live potential, possibly creating an electrocution or fire hazard. I made a comment on Andy Bradford's blog, reporting this issue, but it doesn't seem to have appeared. This is all quite alarming. I hope Andy is OK!

Design approach

I don't have an MQTT setup at home, or an installation of Home Assistant. I didn't feel like adding a lot of complicated software to my life, if I could avoid it. Nor did I feel like writing a web UI myself. I've done that before, but I'm lazy and in this case my requirements were quite modest. Also, the need for electrical isolation would further complicate any attempt to do something sophisticated (that could, for example, sense the state of the coffee machine). I already had a Tasmota-based cloud-free smart plug, which controls the fairy lights on our gazebo. We just operate that through its web UI. So, I decided I would add a small and stupid microcontroller. The microcontroller would be powered via a smart plug and an off-the-shelf USB power supply. The microcontroller would have no inputs. It would simply simulate an "on" button press once at startup, and thereafter two presses every 24 minutes.
After the 4th double press, the microcontroller would stop, leaving the coffee machine to time out by itself, after a total period of about 2h. (A sketch of this schedule appears at the end of this post.)

Implementation - hardware

I used a DigiSpark board with an ATTiny85. One of the GPIOs is connected to an optoisolator, whose output transistor is wired across the UI board's "on" button. Circuit diagram; board layout diagram; (click for diagram scans as PDFs). The DigiSpark has just a USB tongue, which is very wobbly in a normal USB socket. I designed a 3D-printed case which also had an approximation of the rest of the USB-A plug. The plug is out of spec; our printer won't go fine enough - and anyway, the shield is supposed to be metal, not fragile plastic. But it fit in the USB PSU I was using, satisfactorily if a bit stiffly, and also into the connector for programming via my laptop. Inside the coffee machine, there's the boundary between the original, coupled-to-mains, UI board, and the isolated low voltage of the microcontroller. I used a reasonably substantial cable to bring out the low-voltage connection, past all the other hazardous innards, to make sure it stays isolated. I added a power-supply-drain resistor on another of the GPIOs. This is enabled, with a draw of about 30mA, when the microcontroller is soon going to off/on cycle the coffee machine. That reduces the risk that the user will turn off the smart plug, and turn off the machine, but then the microcontroller turns the coffee machine back on again using the remaining power from the USB PSU. Empirically, in my setup it reduces the time from turning the smart plug off to the microcontroller stopping from about 2-3s to more like 1s.

Optoisolator board (inside coffee machine) pictures

(Click through for full size images.) Optoisolator board, front; optoisolator board, rear; optoisolator board, fitted.

Microcontroller board (in USB-plug-ish housing) pictures

Microcontroller board, component side; microcontroller board, wiring side, part fitted; microcontroller in USB-plug-ish housing.

Implementation - software

I originally used the Arduino IDE, writing my program in C. I had a bad time with that and rewrote it in Rust. The firmware is in a repository on Debian's GitLab.

Results

I can now cause the coffee to start, from my phone. It can be programmed more than 12h in advance. And it stays warm until we've drunk it.

UI is worse

There's one aspect of the original Morphy Richards machine that I haven't improved: the user interface is still poor. Indeed, it's now even worse. To turn the machine on, you probably want to turn on the smart plug instead. Unhappily, the power button for that is invisible in its installed location. In particular, in the usual case, if you want to turn it off, you should ideally turn off both the smart plug (which can be done with the button on it) and the coffee machine itself. If you forget to turn off the smart plug, the machine can end up being turned on, very briefly, a handful of times, over the next hour or two.

Epilogue

We had used the new features a handful of times when one morning the coffee machine just wouldn't make coffee. The UI showed it turning on, but it wouldn't get hot, so no coffee. I thought "oh no, I've broken it!" But, on investigation, I found that the machine's heating element was open circuit (ie, completely broken). I didn't mess with that part. So, hooray! Not my fault. Probably, just being inverted a number of times and generally lightly jostled had precipitated a latent fault. The machine was a number of years old.
Happily, I found a replacement, identical machine online. I've transplanted my modification and now it all works well.

Bonus pictures

(Click through for full size images.) Probing the innards; machine base showing new cable route.
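For concreteness, here is roughly what the press schedule described under "Design approach" boils down to. This is a simplified, host-side sketch of my own, not the actual firmware (which is in the linked repository); on the real hardware the waits are genuine 24-minute delays and the "presses" drive the optoisolator GPIO.

// Simplified, host-side sketch of the press schedule; my own illustration,
// not the actual firmware. Times are compressed so the simulation runs instantly.
fn press_on_button(times: u32) {
    println!("pressing the on button {times} time(s)");
}

fn main() {
    // One press at startup to turn the machine on...
    press_on_button(1);
    // ...then a double press every "24 minutes" to reset the keep-warm timer,
    // stopping after the 4th double press (roughly 2h of keep-warm in total).
    for cycle in 1..=4 {
        // In the real firmware this would be a 24-minute delay.
        println!("(wait 24 minutes; cycle {cycle})");
        press_on_button(2);
    }
    println!("done; let the machine time out by itself");
}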
edited 2023-11-26 14:59 UTC in an attempt to fix TOC links



22 October 2023

Ian Jackson: DigiSpark (ATTiny85) - Arduino, C, Rust, build systems

Recently I completed a small project, including an embedded microcontroller. For me, using the popular Arduino IDE, and C, was a mistake. The experience with Rust was better, but still very exciting, and not in a good way. Here follows the rant.

Introduction

In a recent project (I'll write about the purpose, and the hardware, in another post) I chose to use a DigiSpark board. This is a small board with a USB-A tongue (but not a proper plug), and an ATTiny85 microcontroller. This chip has 8 pins and is quite small really, but it was plenty for my application. By choosing something popular, I hoped for convenient hardware, and an uncomplicated experience. Convenient hardware, I got.

Arduino IDE

The usual way to program these boards is via an IDE. I thought I'd go with the flow and try that. I knew these were closely related to actual Arduinos and saw that the IDE package arduino was in Debian. But it turns out that the Debian package's version doesn't support the DigiSpark. (AFAICT from the list it offered me, I'm not sure it supports any ATTiny85 board.) Also, disturbingly, its board manager seemed to be offering to install board support, suggesting it would download stuff from the internet and run it. That wouldn't be acceptable for my main laptop. I didn't expect to be doing much programming or debugging, and the project didn't have significant security requirements: the chip, in my circuit, has only a very narrow ability to do anything to the real world, and no network connection of any kind. So I thought it would be tolerable to do the project on my low-security "video laptop". That's the machine where I'm prepared to say yes to installing random software off the internet. So I went to the upstream Arduino site and downloaded a tarball containing the Arduino IDE. After unpacking that in /opt it ran and produced a pointy-clicky IDE, as expected. I had already found a 3rd-party tutorial saying I needed to add a magic URL (from the DigiSpark's vendor) in the preferences. That indeed allowed it to download a whole pile of stuff: compilers, bootloader clients, god knows what. However, my tiny test program didn't make it to the board. Half-buried in a too-small window was an error message about the board's bootloader ("Micronucleus") being too new. The boards I had came pre-flashed with micronucleus 2.2, which is hardly new. But even so, the official Arduino IDE (or maybe the DigiSpark's board package?) still contains an old version. So now we have all the downsides of curl | bash-ware, but we're lacking the "it's up to date" and "it just works" upsides. Further digging found some random forum posts which suggested simply downloading a newer micronucleus and manually stuffing it into the right place: one overwrites a specific file, in the middle of the heaps of stuff that the Arduino IDE's board support downloader squirrels away in your home directory. (In my case, the home directory of the untrusted shared user on the video laptop.) So, "whatever". I did that. And it worked! Having demo'd my ability to run code on the board, I set about writing my program.

Writing C again

The programming language offered via the Arduino IDE is C. It's been a little while since I started a new thing in C, after having spent so much of the last several years writing Rust. C's primitiveness quickly started to grate, and the program couldn't easily be as DRY as I wanted (Don't Repeat Yourself, see Wilson et al 2012, §4, p.6). But, I carried on; after all, this was going to be quite a small job.
Soon enough I had a program that looked right and compiled. Before testing it in circuit, I wanted to do some QA. So I wrote a simulator harness that #included my Arduino source file, and provided imitations of the few Arduino library calls my program used. As a side advantage, I could build and run the simulation on my main machine, in my normal development environment (Emacs, make, etc.). The simulator runs confirmed the correct behaviour. (Perhaps there would have been some more faithful simulation tool, but the Arduino IDE didn't seem to offer it, and I wasn't inclined to go further down that kind of path.) So I got the video laptop out, and used the Arduino IDE to flash the program. It didn't run properly. It hung almost immediately. Some very ad-hoc debugging via LED-blinking (like printf debugging, only much worse) convinced me that my problem was as follows: Arduino C has 16-bit ints. My test harness was on my 64-bit Linux machine. C was autoconverting things (when building for the microcontroller). The way the Arduino IDE ran the compiler didn't pass the warning options necessary to spot narrowing implicit conversions. Those warnings aren't the default in C in general because C compilers hate us all for compatibility reasons. I don't know why those warnings are not the default in the Arduino IDE, but my guess is that they didn't want to bother poor novice programmers with messages from the compiler explaining how their program is quite possibly wrong. After all, users don't like error messages, so we shouldn't report errors. And novice programmers are especially fazed by error messages, so it's better to just let them struggle by themselves with the arcane mysteries of undefined behaviour in C?

The Arduino IDE does offer a dropdown for "compiler warnings". The default is "None". Setting it to "All" didn't produce anything about my integer overflow bugs. And, the output was very hard to find anyway because the log window has a constant stream of strange messages from javax.jmdns, with hex DNS packet dumps. WTF.

Other things that were vexing about the Arduino IDE: it has fairly fixed notions (which don't seem to be documented) about how your files and directories ought to be laid out, and magical machinery for finding things you put near its "sketch" (as it calls them) and sticking them in its ear, causing lossage. It has a tendency to become confused if you edit files under its feet (e.g. with git checkout). It wasn't really very suited to a workflow where principal development occurs elsewhere. And, important settings such as the project's clock speed, or even the target board, or the compiler warning settings to use, weren't stored in the project directory along with the actual code. I didn't look too hard, but I presume they must be in a dotfile somewhere. This is madness. Apparently there is an Arduino CLI too. But I was already quite exasperated, and I didn't like the idea of going so far off the beaten path, when the whole point of using all this was to stay with popular tooling and share fate with others. (How do these others cope? I have no idea.)

As for the integer overflow bug: I didn't seriously consider trying to figure out how to control in detail the C compiler options passed by the Arduino IDE. (Perhaps this is possible, but not really documented?)
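Jumping ahead slightly to the Rust rewrite described below: the reason this class of bug could not survive it is that Rust makes integer widths explicit and refuses to mix them silently. A minimal illustration of my own (not the project's code):

// My own example, not the project's code: mixing integer widths without a
// written-out conversion is a compile error, rather than a silent narrowing.
fn next_deadline(now_ms: u32, interval_ms: u16) -> u32 {
    // `now_ms + interval_ms` would not compile; the widening must be explicit:
    now_ms + u32::from(interval_ms)
}

fn main() {
    // 70_000 does not even fit in a 16-bit int - the kind of value that
    // silently wrapped when the C code was built for the ATTiny85.
    assert_eq!(next_deadline(70_000, 1_440), 71_440);
    println!("ok");
}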
I did consider trying to run a cross-compiler myself from the command line, with appropriate warning options, but that would have involved providing (or stubbing, again) the Arduino/DigiSpark libraries (and bugs could easily lurk at that interface). Instead, I thought, "if only I had written the thing in Rust". But that wasn't possible, was it? Does Rust even support this board?

Rust on the DigiSpark

I did a cursory web search and found a very useful blog post by Dylan Garrett. This encouraged me to think it might be a workable strategy. I looked at the instructions there. It seemed like I could run them via the privsep arrangement I use to protect myself when developing using upstream cargo packages from crates.io. I got surprisingly far surprisingly quickly. It did, rather startlingly, cause my rustup to download a random recent Nightly Rust, but I have six of those already for other Reasons. Very quickly I got the "trinket" LED blink example, referenced by Dylan's blog post, to compile. Manually copying the file to the video laptop allowed me to run the previously-downloaded micronucleus executable and successfully run the blink example on my board! I thought a more principled approach to the bootloader client might allow a more convenient workflow. I found the upstream Micronucleus git releases and tags, and had a look over its source code, release dates, etc. It seemed plausible, so I compiled v2.6 from source. That was a success: now I could build and install a Rust program onto my board, from the command line, on my main machine. No more pratting about with the video laptop. I had got further, more quickly, with Rust, than with the Arduino IDE, and the outcome and workflow were superior.

So, basking in my success, I copied the directory containing the example into my own project, renamed it, and adjusted the path references. That didn't work. Now it didn't build. Even after I copied across .cargo/config.toml and rust-toolchain.toml it didn't build, producing a variety of exciting messages, depending what precisely I tried. I don't have detailed logs of my flailing: the instructions say to build it by cd'ing to the subdirectory, and, given that what I was trying to do was to not follow those instructions, it didn't seem sensible to try to prepare a proper repro so I could file a ticket. I wasn't optimistic about investigating it more deeply myself: I have some experience of fighting cargo, and it's not usually fun. Looking at some of the build control files, things seemed quite complicated. Additionally, not all of the crates are on crates.io. I have no idea why not. So, I would need to supply local copies of them anyway. I decided to just git subtree add the avr-hal git tree. (That seemed better than the approach taken by the avr-hal project's cargo template, since that template involves a cargo dependency on a foreign git repository. Perhaps it would be possible to turn them into path dependencies, but given that I had evidence of file-location-sensitive behaviour, which I didn't feel like spending time investigating, using the template seemed like it would possibly have invited more trouble. Also, I don't like package templates very much. They're a form of clone-and-hack: you end up stuck with whatever bugs or oddities exist in the version of the template which was current when you started.) Since I couldn't get things to build outside avr-hal, I edited the example, within avr-hal, to refer to my (one) program.rs file outside avr-hal, with a #[path] instruction.
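The #[path] arrangement looks roughly like this. This is a hedged sketch; the file names are illustrative, not the actual repository layout, and it assumes the referenced file exists and exposes a run() function:

// Illustrative only: pull in a source file that lives outside the current
// crate's src/ tree. Assumes ../my-project/src/program.rs exists and
// contains `pub fn run() { ... }`.
#[path = "../my-project/src/program.rs"]
mod program;

fn main() {
    program::run();
}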
That's not pretty, but it worked. I also had to write a nasty shell script to work around the lack of good support in my nailing-cargo privsep tool for builds where cargo must be invoked in a deep subdirectory, and/or Cargo.lock isn't where it expects, and/or the target directory containing build products is in a weird place. It also has to filter the output from cargo to adjust the pathnames in the error messages. Otherwise, running both cd A; cargo build and cd B; cargo build from a Makefile produces confusing sets of error messages, some of which contain filenames relative to A and some relative to B, making it impossible for my Emacs to reliably find the right file.

RIIR (Rewrite It In Rust)

Having got my build tooling sorted out, I could go back to my actual program. I translated the main program, and the simulator, from C to Rust, more or less line-by-line. I made the Rust version of the simulator produce the same output format as the C one. That let me check that the two programs had the same (simulated) behaviour. Which they did (after fixing a few glitches in the simulator log formatting). Emboldened, I flashed the Rust version of my program to the DigiSpark. It worked right away! RIIR had caused the bug to vanish. Of course, to rewrite the program in Rust, and get it to compile, it was necessary to be careful about the types of all the various integers, so that's not so surprising. Indeed, it was the point. I was then able to refactor the program to be a bit more natural and DRY, and improve some internal interfaces. Rust's greater power, compared to C, made those cleanups easier, so making them worthwhile.

However, when doing real-world testing I found a weird problem: my timings were off. Measured, the real program was too fast by a factor of slightly more than 2. A bit of searching (and searching my memory) revealed the cause: I was using a board template for an Adafruit Trinket. The Trinket has a clock speed of 8MHz. But the DigiSpark runs at 16.5MHz. (This is discussed in a ticket against one of the C/C++ libraries supporting the ATTiny85 chip.) The Arduino IDE had offered me a choice of clock speeds. I have no idea how that dropdown menu took effect; I suspect it was adding prelude code to adjust the clock prescaler. But my attempts to mess with the CPU clock prescaler register by hand at the start of my Rust program didn't bear fruit. So instead, I adopted a bodge: since my code has (for code structure reasons, amongst others) only one place where it deals with the underlying hardware's notion of time, I simply changed my delay function to adjust the passed-in delay values, compensating for the wrong clock speed. (A sketch of this kind of adjustment appears at the end of this post.) There was probably a more principled way. For example, I could have (re)based my work on either of the two unmerged open MRs which added proper support for the DigiSpark board, rather than abusing the Adafruit Trinket definition. But, having a nearly-working setup, and an explanation for the behaviour, I preferred the narrower fix to reopening any cans of worms.

An offer of help

As will be obvious from this posting, I'm not an expert in dev tools for embedded systems. Far from it. This area seems like quite a deep swamp, and I'm probably not the person to help drain it. (Frankly, much of the improvement work ought to be done, and paid for, by hardware vendors.) But, as a full Member of the Debian Project, I have considerable gatekeeping authority there. I also have much experience of software packaging, build systems, and release management.
If anyone wants to try to improve the situation with embedded tooling in Debian, and is willing to do the actual packaging work, I would be happy to advise, and to review and sponsor your contributions. An obvious candidate: it seems to me that micronucleus could easily be in Debian. Possibly a DigiSpark board definition could be provided to go with the arduino package. Unfortunately, IMO Debian's Rust packaging tooling and workflows are very poor, and the first of my suggestions for improvement wasn't well received. So if you need help with improving Rust packages in Debian, please talk to the Debian Rust Team yourself.

Conclusions

Embedded programming is still rather a mess and probably always will be. Embedded build systems can be bizarre. Documentation is scant. You're often expected to download board support packages full of mystery binaries, from the board vendor (or others). Dev tooling is maddening, especially if aimed at novice programmers. You want version control? Hermetic tracking of your project's build and install configuration? Actually to be told by the compiler when you write obvious bugs? You're way off the beaten track. As ever, Free Software is under-resourced and the maintainers are often busy, or (reasonably) have other things to do with their lives.

All is not lost

Rust can be a significantly better bet than C for embedded software: the Rust compiler will catch a good proportion of programming errors, and an experienced Rust programmer can arrange (by suitable internal architecture) to catch nearly all of them. When writing for a chip in the middle of some circuit, where debugging involves staring at an LED or a multimeter, that's precisely what you want. Rust embedded dev tooling was, in this case, considerably better. Still quite chaotic and strange, and less mature, perhaps. But: significantly fewer mystery downloads, and significantly less crazy deviations from the language's normal build system. Overall, less bad software supply chain integrity. The ATTiny85 chip, and the DigiSpark board, served my hardware needs very well. (More about the hardware aspects of this project in a future posting.)
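As promised above, here is a sketch of the kind of clock-speed compensation meant under "RIIR". The numbers are from the post (a 16.5 MHz part running against an 8 MHz board definition), but the function and its structure are my own illustration, not the actual firmware:

// My own illustration, not the actual firmware: stretch requested delays by
// 16.5 / 8 = 2.0625 to compensate for a board definition that assumes 8 MHz
// while the DigiSpark really runs at 16.5 MHz.
fn adjust_delay_ms(requested_ms: u32) -> u32 {
    // 2.0625 expressed as the integer ratio 33/16, to stay in integer maths.
    requested_ms * 33 / 16
}

fn main() {
    // Ask for 1 second; actually wait for ~2.06 "8 MHz seconds".
    assert_eq!(adjust_delay_ms(1_000), 2_062);
    println!("ok");
}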


3 October 2023

Bastian Blank: Introducing uploads to Debian by git tag

Several years ago, several people proposed a mechanism to upload packages to Debian just by doing "git tag" and "git push". Two not-too-long discussions on debian-devel (first and second) did not end with an agreement on how this mechanism could work.[1] The central problem was the ability to properly trace uploads back to the ones who authorised them. Now, several years later and after re-reading those e-mail threads, I again was stopped at the question: can we do this? Yes, it would not be just "git tag", but could we get close enough? I have some rudimentary code ready to actually do uploads from the CI. However, the list of caveats is currently pretty long. Yes, it works in principle. It is still disabled here, because in practice it does not yet work.

Problems with this setup

So what are the problems? It requires the git tags to include both signed files for a successful source upload. This is solved by a new tool that could be a git sub-command. It just creates the source package, signs it and adds the signed files describing the upload (the .dsc and .changes file) to the tag to be pushed. The CI then extracts the signed files from the tag message and does its work as normal. It requires a sufficiently reproducible build for source packages. Right now it is only known to work with the special 3.0 (gitarchive) source format, but even that requires the latest version of this format. No idea if it is possible to use others, like 3.0 (quilt), for this purpose. The shared GitLab runners provided by Salsa do not allow FTP access to the outside. But Debian still uses FTP to do uploads - at least if you don't want to share your SSH key, which can't be restricted to uploads only; but SSH would not work either. And as the current host for those builds, the Google Cloud Platform, does not provide connection tracking support for FTP, there is no easy way to allow that without just allowing everything. So we have no way to currently actually perform uploads from this platform.

Further work

As this is code running in a CI under the control of the developer, we can easily do other workflows. Some teams do workflows that do tags after acceptance into the Debian archive. Or they don't use tags at all. With some other markers, like variables or branch names, this support can be expanded easily. Unrelated to this task here, we might want to think about tying the .changes files for uploads to the target archive. As this code makes all of them readily available in the form of tag messages, replaying them into other archives might be more of a concern now.

Conclusion

So to come back to the question: yes, we can. We can prepare uploads using our CI in a way that they would be accepted into the Debian archive. It just needs some more work on infrastructure.

  1. Here I have to admit that, after reading it again, I'm not really proud of my own behaviour and have to apologise.

20 December 2022

Ian Jackson: Rust for the Polyglot Programmer, December 2022 edition

I have reviewed, updated and revised my short book about the Rust programming language, Rust for the Polyglot Programmer. It now covers some language improvements from the past year (noting which versions of Rust they're available in), and has been updated for changes in the Rust library ecosystem. With (further) assistance from Mark Wooding, there is also a new table of recommendations for numerical conversion.

Recap about Rust for the Polyglot Programmer

There are many introductory materials about Rust. This one is rather different. Compared to much other information about Rust, Rust for the Polyglot Programmer is: After reading Rust for the Polyglot Programmer, you won't know everything you need to know to use Rust for any project, but should know where to find it. Comments are welcome of course, via the Dreamwidth comments or Salsa issue or MR. (If you're making a contribution, please indicate your agreement with the Developer Certificate of Origin.)
edited 2022-12-20 01:48 to fix a typo



8 August 2022

Ian Jackson: dkim-rotate - rotation and revocation of DKIM signing keys

Background

Internet email is becoming more reliant on DKIM, a scheme for having mail servers cryptographically sign emails. The Big Email providers have started silently spambinning messages that lack either DKIM signatures, or SPF. DKIM is arguably less broken than SPF, so I wanted to deploy it. But it has a problem: if done in a naive way, it makes all your emails non-repudiable, forever. This is not really a desirable property - at least, not desirable for you, although it can be nice for someone who (for example) gets hold of leaked messages obtained by hacking mailboxes. This problem was described at some length in Matthew Green's article "Ok Google: please publish your DKIM secret keys". Following links from that article does get you to a short script to achieve key rotation, but it had a number of problems, and wasn't useable in my context.

dkim-rotate

So I have written my own software for rotating and revoking DKIM keys: dkim-rotate. I think it is a good solution to this problem, and it ought to be deployable in many contexts (and readily adaptable to those it doesn't already support). Here's the feature list taken from the README:

Complications

It seems like it should be a simple problem. Keep N keys, and every day (or whatever), generate and start using a new key, and deliberately leak the oldest private key. But things are more complicated than that. Considerably more complicated, as it turns out. I didn't want the DKIM key rotation software to have to edit the actual DNS zones for each relevant mail domain. That would tightly entangle the mail server administration with the DNS administration, and there are many contexts (including many of mine) where these roles are separated. The solution is to use DNS aliases (CNAME). But now we need a fixed, relatively small, set of CNAME records for each mail domain. That means a fixed, relatively small set of key identifiers ("selectors" in DKIM terminology), which must be used in rotation. We don't want the private keys to be published via the DNS because that makes an ever-growing DNS zone, which isn't great for performance; and because we want to place barriers in the way of processes which might enumerate the set of keys we use (and the set of keys we have leaked) and keep records of what status each key had when. So we need a separate publication channel - for which a webserver was the obvious answer. We want the private keys to be readily noticeable and findable by someone who is verifying an alleged leaked email dump, but to be hard to enumerate. (One part of the strategy for this is to leave a note about it, with the prospective private key URL, in the email headers.) The key rotation operations are more complicated than first appears, too. The short summary, above, neglects to consider the fact that DNS updates have a nonzero propagation time: if you change the DNS, not everyone on the Internet will experience the change immediately. So as well as a timeout for how long it might take an email to be delivered (ie, how long the DKIM signature remains valid), there is also a timeout for how long to wait after updating the DNS, before relying on everyone having got the memo. (This same timeout applies both before starting to sign emails with a new key, and before deliberately compromising a key which has been withdrawn and deadvertised.) Updating the DNS, and the MTA configuration, are fallible operations. So we need to cope with out-of-course situations, where a previous DNS or MTA update failed.
In that case, we need to retry the failed update, and not proceed with key rotation. We mustn't start the timer for the key rotation until the update has been implemented. The rotation script will usually be run by cron, but it might be run by hand, and when it is run by hand it ought not to jump the gun and do anything too early (ie, before the relevant timeout has expired). cron jobs don't always run, and don't always run at precisely the right time. (And there's daylight saving time to consider, too.) So overall, it's not sufficient to drive the system via cron and have it proceed by one unit of rotation on each run. And, hardest of all, I wanted to support post-deployment configuration changes, while continuing to keep the whole system operational. Otherwise, you have to bake in all the timing parameters right at the beginning and can't change anything ever. So, for example, I wanted to be able to change the email and DNS propagation delays, and even the number of selectors to use, without adversely affecting the delivery of already-sent emails, and without having to shut anything down. I think I have solved these problems. The resulting system is one which keeps track of the timing constraints, and the next event which might occur, on a per-key basis. It calculates, on each run, which key(s) can be advanced to the next stage of their lifecycle, and performs the appropriate operations. The regular key update schedule is then an emergent property of the config parameters and cron job schedule. (I provide some example config.)

Exim

Integrating dkim-rotate itself with Exim was fairly easy. The lsearch lookup function can be used to fish information out of a suitable data file maintained by dkim-rotate. But a final awkwardness was getting Exim to make the right DKIM signatures, at the right time. When making a DKIM signature, one must choose a signing authority domain name: who should we claim to be? (This is the SDID in DKIM terms.) A mailserver that handles many different mail domains will be able to make good signatures on behalf of many of them. It seems to me that this domain should be the mail domain in the From: header of the email. (The RFC doesn't seem to be clear on what is expected.) Exim doesn't seem to have anything builtin to do that. And, you only want to DKIM-sign emails that are originated locally or from trustworthy sources. You don't want to DKIM-sign messages that you received from the global Internet, and are sending out again (eg because of an email alias or mailing list). In theory, if you verify DKIM on all incoming emails, you could avoid being fooled into signing bad emails, but rejecting all non-DKIM-verified email would be a very strong policy decision. Again, Exim doesn't seem to have any cooked machinery for this. The resulting Exim configuration parameters run to 22 lines, and because they're parameters to an existing config item (the smtp transport) they can't even easily be deployed as a drop-in file via Debian's split-config Exim configuration scheme. (I don't know if the file written for Exim's use by dkim-rotate would be suitable for other MTAs, but this part of dkim-rotate could easily be extended.)

Conclusion

I have today released dkim-rotate 0.4, which is the first public release for general use. I have it deployed and working, but it's new, so there may well be bugs to work out. If you would like to try it out, you can get it via git from Debian Salsa. (Debian folks can also find it freshly in Debian unstable.)
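To make the per-key lifecycle idea concrete, here is a hedged sketch of my own devising - not dkim-rotate's actual code or data model. Each key remembers when it last changed stage, and a run only advances a key once the relevant timeout (DNS propagation, or maximum mail-delivery time) has expired:

// Hedged, illustrative sketch only; dkim-rotate's real state machine and
// timeouts are more involved than this.
use std::time::{Duration, SystemTime};

#[derive(Clone, Copy, Debug, PartialEq)]
enum Stage {
    Advertised, // published in DNS, waiting for propagation
    Signing,    // in use for new signatures
    Withdrawn,  // no longer signing, waiting for old mail to be delivered
    Revealed,   // private key deliberately published
}

struct Key {
    stage: Stage,
    since: SystemTime,
}

fn maybe_advance(key: &mut Key, dns_delay: Duration, delivery_delay: Duration, now: SystemTime) {
    let waited = now.duration_since(key.since).unwrap_or_default();
    let next = match key.stage {
        Stage::Advertised if waited >= dns_delay => Some(Stage::Signing),
        Stage::Withdrawn if waited >= dns_delay + delivery_delay => Some(Stage::Revealed),
        _ => None, // not yet, or nothing to do (the rotation schedule decides when to withdraw)
    };
    if let Some(stage) = next {
        key.stage = stage;
        key.since = now;
    }
}

fn main() {
    let start = SystemTime::now();
    let mut key = Key { stage: Stage::Advertised, since: start };
    // Two hours later, with a one-hour DNS propagation delay, the key may sign.
    maybe_advance(&mut key, Duration::from_secs(3_600), Duration::from_secs(86_400),
                  start + Duration::from_secs(7_200));
    assert_eq!(key.stage, Stage::Signing);
    println!("advanced to {:?}", key.stage);
}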


29 September 2021

Ian Jackson: Rust for the Polyglot Programmer

Rust is definitely in the news. I'm definitely on the bandwagon. (To me it feels like I've been wanting something like Rust for many years.) There're a huge number of intro tutorials, and of course there's the Rust Book. A friend observed to me, though, that while there's a lot of "write your first simple Rust program" there's a dearth of material aimed at the programmer who already knows a dozen diverse languages, and is familiar with computer architecture, basic type theory, and so on. Or indeed, for the impatient and confident reader more generally. I thought I would have a go. Rust for the Polyglot Programmer is the result. Compared to much other information about Rust, Rust for the Polyglot Programmer is: After reading Rust for the Polyglot Programmer, you won't know everything you need to know to use Rust for any project, but should know where to find it. Thanks are due to Simon Tatham, Mark Wooding, Daniel Silverstone, and others, for encouragement, and helpful reviews including important corrections. Particular thanks to Mark Wooding for wrestling pandoc and LaTeX into producing a pretty good-looking PDF. Remaining errors are, of course, mine. Comments are welcome of course, via the Dreamwidth comments or Salsa issue or MR. (If you're making a contribution, please indicate your agreement with the Developer Certificate of Origin.)
edited 2021-09-29 16:58 UTC to fix Salsa link target, and at 17:01 and 17:21 for minor grammar fixes



2 September 2021

Ian Jackson: partial-borrow: references to restricted views of a Rust struct

tl;dr:
With these two crazy proc-macros you can hand out multiple (perhaps mutable) references to suitable subsets/views of the same struct.

Why

In Otter I have adopted a style where I try to avoid giving code mutable access that doesn't need it, and try to make mutable access come with some code structures to prevent "oh I forgot a thing" type mistakes. For example, mutable access to a game state is only available in contexts that have to return a value for the updates to send to the players. This makes it harder to forget to send the update. But there is a downside. The game state is inside another struct, an Instance, and much code needs (immutable) access to it. I can't pass both &Instance and &mut GameState because one is inside the other. My workaround involves passing separate references to the other fields of Instance, leading to some functions taking far too many arguments. 14 in one case. (They're all different types, so argument ordering mistakes just result in compiler errors talking about arguments 9 and 11 having wrong types, rather than actual bugs.) I felt this problem was purely a restriction arising from limitations of the borrow checker. I thought it might be possible to improve on it. Weeks passed and the question gradually wormed its way into my consciousness. Eventually, I tried some experiments. Encouraged, I persisted.

What and how

partial-borrow is a Rust library which solves this problem. You sprinkle #[derive(PartialBorrow)] and partial!(...) and then you can pass a reference which grants mutable access to only some of the fields. You can also pass a reference through which some fields are inaccessible. You can even split a single mut reference into multiple compatible references, for example granting mut access to mutually non-overlapping subsets. The core type is Struct__Partial (for some Struct). It is a zero-sized type, but we prevent anyone from constructing one. Instead we magic up references to it, always ensuring that they have the same address as some Struct. The fields of Struct__Partial are also ZSTs that exist only as references, and they Deref to the actual field (subject to compile-time borrow compatibility checking).

Soundness and testing

partial-borrow is primarily a nontrivial procedural macro which autogenerates reams of unsafe. Of course I think it's sound, but I thought that the last two times before I added a test which demonstrated otherwise. So it might be fairer to say that I have tried to make it sound and that I don't know of any problems... Reasoning about the correctness of macro-generated code is not so easy. One problem is that there is nowhere good to put the kind of soundness arguments you would normally add near uses of unsafe. I decided to solve this by annotating an instance of the macro output. There's a not very complicated script using diff3 to help fold in changes if the macro output changes - merge conflicts there mean a possible re-review of the argument text. Of course I also have test cases that run with miri, and test cases for expected compiler errors for uses that need to be forbidden for soundness. But this is quite hairy and I'm worried that it might be rather "my first insane unsafe contraption". Also the pointer/reference trickery is definitely subtle, and depends heavily on knowing what Rust's aliasing and pointer provenance rules really are. Stacked Borrows is not entirely trivial to reason about in fiddly corner cases. So for now I have only called it 0.1.0 and left a note in the docs.
I haven't actually made Otter use it yet, but that's the rather more boring software integration part, not the fun "can I do this mad thing" part, so I will probably leave that for a rainy day. Possibly a rainy day after someone other than me has looked at partial-borrow (preferably someone who understands Stacked Borrows...).

Fun!

This was great fun. I even enjoyed writing the docs. The proc-macro programming environment is not entirely straightforward and there are a number of things to watch out for. For my first non-adhoc proc-macro this was, perhaps, ambitious. But you don't learn anything without trying...
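As an aside, for readers who want to see the shape of the original problem in isolation, here is a minimal, self-contained example of my own (not Otter's code) showing the usual workaround of splitting the borrow by hand - the thing that leads to those 14-argument functions:

// My own illustration of the underlying problem: you cannot hold &Instance and
// &mut GameState at the same time when the latter lives inside the former, so
// the other fields end up being passed separately.
struct GameState { moves: u32 }
struct Instance { name: String, gs: GameState }

// Each field becomes its own parameter; with many fields this gets unwieldy.
fn record_move(name: &str, gs: &mut GameState) {
    gs.moves += 1;
    println!("{name}: {} moves so far", gs.moves);
}

fn main() {
    let mut i = Instance { name: "demo".into(), gs: GameState { moves: 0 } };
    // Splitting the borrow at the call site keeps the borrow checker happy:
    record_move(&i.name, &mut i.gs);
}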
edited 2021-09-02 16:28 UTC to fix a typo



17 August 2021

Ian Jackson: Releasing nailing-cargo 1.0.0

Summary

I have just tagged nailing-cargo/1.0.0. nailing-cargo is a wrapper around the Rust build tool cargo. nailing-cargo can:

Background and history

It's not really possible to make a nontrivial Rust project without using cargo. But the build process automatically downloads and executes code from crates.io, which is a minimally-curated repository. I didn't want to expose my main account to that. And, at the time, I was working on a project for which I was also writing a library as a dependency, and I found that cargo couldn't cope with this unless I were to commit (to my git repository) the path (on my local laptop) of my dependency. I filed some bugs, including about the unpublished crate problem. But also, I was stubborn enough to try to find a workaround that didn't involve committing junk to my git history. The result was a short but horrific shell script. I wrote about this at the time (March 2019). Over the last few years the difficulties I have with cargo have remained unresolved. I found my interactions with upstream rather discouraging. It didn't seem like I would get anywhere by trying to help improve cargo to better support my needs. So instead I have gradually improved nailing-cargo. It is now a Perl script. It is rather less horrific, and has proper documentation (sorry, JS needed, because GitLab).

Why Perl?

Rust would have been my language of choice. But I wanted to avoid a chicken-and-egg situation. When you're doing privsep, nailing-cargo has to run in your more privileged environment. I wanted something easy to get going with. nailing-cargo has to contain a TOML parser; and I found a small one, TOML-Tiny, which was good enough as a starting point, and small enough that I could bundle it as a git subtree. Perl is nicely fast to start up (nailing-cargo --- true runs in about 170ms on my laptop), and it is easy to write a Perl script that will work on pretty much any Perl installation.

Still unsolved: embedding cargo in another build system

A number of my projects contain a mixture of Rust code with other languages. Unfortunately, nailing-cargo doesn't help with the problems which arise trying to integrate cargo into another build system. I generally resort to find runes for finding Rust source files that might influence cargo, and stamp files for seeing if I have run it recently enough; and I simply live with the fact that cargo sometimes builds more stuff than I needed it to.

Future

There are a number of ways nailing-cargo could be improved. Notably, the need to overwrite your actual Cargo.toml is very annoying, even if nailing-cargo puts it back afterwards. A big problem with this is that it means that nailing-cargo has to take a lock while your cargo rune runs. This effectively prevents using nailing-cargo with long-running processes - notably, editor integrations like rls and racer. I could perhaps solve this with more linkfarm-juggling, but that wouldn't help in-tree builds and it's hard to keep things up to date. I am considering using LD_PRELOAD trickery or maybe bwrap(1) to "implement" the alternative Cargo.toml feature which was rejected by cargo upstream in 2019 (and again in April when someone else asked). Currently there is no support for using sudo for out-of-tree privsep. This should be easy to add, but it needs someone who uses sudo to want it (and to test it!). The documentation has some other discussion of limitations, some of which aren't too hard to improve. Patches welcome!


26 May 2021

Ian Jackson: Disconnecting from Freenode

I have just disconnected from irc.freenode.net for the last time. You should do the same. The awful new de facto operators are using user numbers as a public justification for their behaviour. Specifically, I recommend that you: Note that mentioning libera in the channel topic of your old channels on freenode is likely to get your channel forcibly taken over by the new de facto operators of freenode. They won't tolerate you officially directing people to the competition. I did an investigation and writeup of this situation for the Xen Project. It's a little out of date - it doesn't have the latest horrible behaviours from the new regime - but I think it is worth pasting it here:
Message-ID: <24741.12566.639691.461134@mariner.uk.xensource.com>
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
CC: community.manager@xenproject.org
Subject: IRC networks
Date: Wed, 19 May 2021 16:39:02 +0100
Summary:
We have for many years used the Freenode IRC network for real-time
chat about Xen.  Unfortunately, Freenode is undergoing a crisis.
There is a dispute between, on the one hand, Andrew Lee, and on the
other hand, all (or almost all) Freenode volunteer staff.  We must
make a decision.
I have read all the publicly available materials and asked around with
my contacts.  My conclusions:
 * We do not want to continue to use irc.freenode.*.
 * We might want to use libera.chat, but:
 * Our best option is probably to move to OFTC https://www.oftc.net/
Discussion:
Firstly, my starting point.
I have been on IRC since at least 1993.  Currently my main public
networks are OFTC and Freenode.
I do not have any personal involvement with public IRC networks.  Of
the principals in the current Freenode dispute, I have only heard of
one, who is a person I have experience of in a Debian context but have
not worked closely with.
George asked me informally to use my knowledge and contacts to shed
light on the situation.  I decided that, having done my research, I
would report more formally and publicly here rather than just
informally to George.
Historical background:
 * Freenode has had drama before.  In about 2001 OFTC split off from
   Freenode after an argument over governance.  IIRC there was drama
   again in 2006.  Significant proportion of the Free Software world,
   including Debian, now use OFTC.  Debian switched in 2006.
Facts that I'm (now) pretty sure of:
 * Freenode's actual servers run on donated services; that is,
   the hardware is owned by those donating the services, and the
   systems are managed by Freenode volunteers, known as "staff".
 * The freenode domain names are currently registered to a limited
   liability company owned by Andrew Lee (rasengan).
 * At least 10 Freenode staff have quit in protest, writing similar
   resignation letters protesting about Andrew Lee's actions [1].  It
   does not appear that Andrew Lee has the public support of any
   Freenode staff.
 * Andrew Lee claims that he "owns" Freenode.[2]
 * A large number of channel owners for particular Free Software
   projects who previously used Freenode have said they will switch
   away from Freenode.
Discussion and findings on Freenode:
There is, as might be expected, some murk about who said what to whom
when, what promises were made and/or broken, and so on.  The matter
was also complicated by the leaking earlier this week of draft(s) of
(at least one of) the Freenode staffers' resignation letters.
Andrew Lee has put forward a position statement [2].  A large part of
the thrust of that statement is allegations that the current head of
Freenode staff, tomaw, "forced out" the previous head, christel.  This
allegation is strongly disputed by all those current (resigning)
Freenode staff I have seen comment.  In any case it does not seem to
be particularly germane; in none of my reading did tomaw seem to be
playing any kind of leading role.  tomaw is not mentioned in the
resignation letters.
Some of the links led to me to logs of discussions on #freenode.  I
read some of these in particular[3].  NB I haven't been able to verify
that these logs have not been tampered with.  Having said that and
taking the logs at face value, I found the rasengan writing there to
be disingenuous and obtuse.
Andrew Lee has been heavily involved in Bitcoin.  Bitcoin is a hive of
scum and villainy, a pyramid scheme, and an environmental disaster,
all rolled into one.  This does not make me think well of Lee.
Additionally, it seems that Andrew Lee has been involved in previous
governance drama involving a different IRC network, Snoonet.
I have come to the very firm conclusion that we should have nothing to
do with Andrew Lee, and avoid using services that he has some
effective control over.
Alternatives:
The departing Freenode staff are setting up a replacement,
"libera.chat".  This is operational but still suffering from teething
problems and of course has a significant load as it deals with an
influx of users on a new setup.
On the staff and trust question: As I say, I haven't heard of any of
the Freenode staff, with one exception.  Unfortunately the one
exception does not inspire confidence in me[4] - although NB that is
only one data point.
On the other hand, Debian has had many many years of drama-free
involvement with OFTC.  OFTC has a formal governance arrangement and
it is associated with Software in the Public Interest.  I notice that
the last few of OFTC's annual officer elections have been run partly by
Steve McIntyre.  Steve is a friend of mine (and he is a former Debian
Project Leader) and I take his involvement as a good sign.
I recommend that we switch to using OFTC as soon as possible.
Ian.
References:
Starting point for the resigning Freenode staff's side [1]:
  https://gist.github.com/joepie91/df80d8d36cd9d1bde46ba018af497409
Andrew Lee's side [2]:
  https://gist.github.com/realrasengan/88549ec34ee32d01629354e4075d2d48
[3]
https://paste.sr.ht/~ircwright/7e751d2162e4eb27cba25f6f8893c1f38930f7c4
[4] I won't give the name since I don't want to be shitposting.



24 April 2021

Antoine Beaupré: A dead game clock

Time flies. Back in 2008, I wrote a game clock. Since then, what was first called "chess clock" was renamed to pychessclock and then Gameclock (2008). It shipped with Debian 6 squeeze (2011), 7 wheezy (4.0, 2013, with a new UI), 8 jessie (5.0, 2015, with a code cleanup, translation, go timers), 9 stretch (2017), and 10 buster (2019), phew! Eight years in Debian over five releases, not bad! But alas, Debian 11 bullseye (2021) won't ship with Gameclock because both Python 2 and GTK 2 were removed from Debian. I lack the time, interest, and energy to port this program. Even if I could find the time, everyone is on their phone nowadays. So finding the right toolkit would require some serious thinking about how to make a portable program that can run on Linux and Android. I care less about Mac, iOS, and Windows, but, interestingly, it feels like it wouldn't be much harder to cover those as well if I hit both Linux and Android (which is already hard enough, paradoxically). (And before you ask, no, Java is not an option for me, thanks. If I switch to anything other than Python, it would be Golang or Rust. And I did look at some toolkit options a few years ago, and was excited by none.) So there you have it: that is how software dies, I guess. Alternatives include: PS: Monkeysign also suffered the same fate, for what that's worth. Alternatives include caff, GNOME Keysign, and pius. Note that this does not affect the larger Monkeysphere project, which will ship with Debian bullseye.

19 April 2021

Ian Jackson: Otter - a game server for arbitrary board games

One of the things that I found most vexing about lockdown was that I was unable to play some of my favourite board games. There are online systems for many games, but not all. And online systems cannot support games like Mao where the players make up the rules as we go along. I had an idea for how to solve this problem, and set about implementing it. The result is Otter (the Online Table Top Environment Renderer). We have played a number of fun games of Penultima with it, and have recently branched out into Mao. The Otter is now ready to be released! More about Otter (cribbed shamelessly from the README) Otter, the Online Table Top Environment Renderer, is an online game system. But it is not like most online game systems. It does not know (nor does it need to know) the rules of the game you are playing. Instead, it lets you and your friends play with common tabletop/boardgame elements such as hands of cards, boards, and so on. So it's something like a tabletop simulator (but it does not have any 3D, or a physics engine, or anything like that). This means that with Otter: Installation and usage Otter is fully functional, but the installation and account management arrangements are rather unsophisticated and un-webby. And there is not currently any publicly available instance you can use to try it out. Users on chiark will find an instance there. Other people who are interested in hosting games (of Penultima or Mao, or other games we might support) will have to find a Unix host or VM to install Otter on, and will probably want help from a Unix sysadmin. Otter is distributed via git, and is available on Salsa, Debian's gitlab instance. There is documentation online. Future plans I have a number of ideas for improvement, which go off in many different directions. Quite high up on my priority list is making it possible for players to upload and share game materials (cards, boards, pieces, and so on), rather than just using the ones which are bundled with Otter itself (or dumping files ad-hoc on the server). This will make it much easier to play new games. One big reason I wrote Otter is that I wanted to liberate boardgame players from the need to implement their game rules as computer code. The game management and account management is currently done with a command line tool. It would be lovely to improve that, but making a fully-featured management web ui would be a lot of work. Screenshots! (Click for the full size images.)


21 October 2014

Lucas Nussbaum: Tentative summary of the amendments of the init system coupling GR

This is an update of my previous attempt at summarizing this discussion. As I proposed one of the amendments, you should not blindly trust me, of course. :-) First, let's address two FAQ: What is the impact on jessie?
On the technical level, none. The current state of jessie already matches what is expected by all proposals. It's a different story on the social level. Why are we voting now, then?
Ian Jackson, who submitted the original proposal, explained his motivation in this mail. We now have four different proposals: (summaries are mine) [plessy] is the simplest, and does not discuss the questions that the other proposals are answering, given it considers that the normal Debian decision-making processes have not been exhausted. In order to understand the three other proposals, it's useful to break them down into several questions. Q1: support for the default init system on Linux
A1.1: packages MUST work with the default init system on Linux as PID 1.
(That is the case in both [iwj] and [lucas]) A1.2: packages SHOULD work with the default init system on Linux as PID 1.
With [dktrkranz], it would no longer be required to support the default init system, as maintainers could choose to require an init system other than the default, if they consider this a prerequisite for its proper operation and no patches or other derived works exist in order to support other init systems. That would not be a policy violation. (see this mail and its reply for details). Theoretically, it could also create fragmentation among Debian packages requiring different init systems: you would not be able to run pkgA and pkgB at the same time, because they would require different init systems. Q2: support for alternative init systems as PID 1
A2.1: packages MUST work with one alternative init system (in [iwj])
(Initially, I thought that "one" here should be understood as "sysvinit", but in this mail, Ian detailed why he chose to be unspecific about the target init system. However, in that mail, he later clarified that a package requiring systemd or uselessd would be fine as well, given that in practice there aren't going to be many packages that would want to couple specifically to systemd _or_ uselessd, but where support for other init systems is hard to provide.)
To the user, that brings the freedom to switch init systems (assuming that the package will not just support two init systems with specific interfaces, but rather a generic interface common to many init systems).
However, it might require the maintainer to do the required work to support additional init systems, possibly without upstream cooperation.
Lack of support is a policy violation (severity >= serious, RC).
Bugs about degraded operation on some init systems follow the normal bug severity rules. A2.2: packages SHOULD work with alternative init systems as PID 1. (in [lucas])
This is a recommendation. Lack of support is not a policy violation (bug severity < serious, not RC). A2.3: nothing is said about alternative init systems (in [dktrkranz]). Lack of support would likely be a wishlist bug. Q3: special rule for sysvinit to ease wheezy->jessie upgrades
(this question is implicitly dealt with in [iwj], assuming that one of the supported init systems is sysvinit)
A3.1: continue support for sysvinit (in [lucas])
For the jessie release, all software available in Debian wheezy that supports being run under sysvinit should continue to support sysvinit unless there is no technically feasible way to do so. A3.2: no requirement to support sysvinit (in [dktrkranz])
Theoretically, this could require two-step upgrades: first reboot with systemd, then upgrade other packages. Q4: non-binding recommendation to maintainers
A4.1: recommend that maintainers accept patches that add or improve
support for alternative init systems. (in both [iwj] and [lucas], with a different wording) A4.2: say nothing (in [dktrkranz]) Q5: support for init systems which are the default on non-Linux ports
A5.1: non-binding recommendation to add/improve support with a high priority (in [lucas]) A5.2: say nothing (in [iwj] and [dktrkranz]) Comments are closed: please discuss by replying to that mail.

31 August 2013

Chris Lamb: Decrypting the Caesar cipher using shell

In cryptography, a Caesar cipher, also known as Caesar's cipher, the shift cipher, Caesar's code or Caesar shift, is one of the simplest and most widely known encryption techniques. It is a type of substitution cipher in which each letter in the plaintext is replaced by a letter some fixed number of positions down the alphabet. [...]
#!/bin/sh
IN="MJHVIZN ZPIO YJHPN"
for I in $(seq 25); do
    echo $I $IN | tr $(printf %${I}s | tr ' ' '.')\A-Z A-ZA-Z
done
This outputs:
1 NKIWJAO AQJP ZKIQO
2 OLJXKBP BRKQ ALJRP
3 PMKYLCQ CSLR BMKSQ
4 QNLZMDR DTMS CNLTR
5 ROMANES EUNT DOMUS
6 SPNBOFT FVOU EPNVT
7 TQOCPGU GWPV FQOWU
8 URPDQHV HXQW GRPXV
9 VSQERIW IYRX HSQYW
10 WTRFSJX JZSY ITRZX
11 XUSGTKY KATZ JUSAY
12 YVTHULZ LBUA KVTBZ
13 ZWUIVMA MCVB LWUCA
14 AXVJWNB NDWC MXVDB
15 BYWKXOC OEXD NYWEC
16 CZXLYPD PFYE OZXFD
17 DAYMZQE QGZF PAYGE
18 EBZNARF RHAG QBZHF
19 FCAOBSG SIBH RCAIG
20 GDBPCTH TJCI SDBJH
21 HECQDUI UKDJ TECKI
22 IFDREVJ VLEK UFDLJ
23 JGESFWK WMFL VGEMK
24 KHFTGXL XNGM WHFNL
25 LIGUHYM YOHN XIGOM
The only line which makes sense is 5 ROMANES EUNT DOMUS, giving us our solution.
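The tr one-liner is terse: for each shift I, the command substitution builds a translation set of I dots followed by A-Z, which tr maps onto A-ZA-Z, so every letter is moved I positions forward in the alphabet (the dots only pad the offset and never occur in the input). For anyone who prefers to see the same brute force spelled out, here is a rough Python equivalent; this is my own sketch, not part of the original post:
#!/usr/bin/env python3
# Brute-force a Caesar cipher: print the ciphertext shifted forward by
# every possible amount (1..25), mirroring the shell loop above.
import string

ciphertext = "MJHVIZN ZPIO YJHPN"
alphabet = string.ascii_uppercase

for shift in range(1, 26):
    # Map each letter 'shift' positions forward, wrapping around at Z.
    table = str.maketrans(alphabet, alphabet[shift:] + alphabet[:shift])
    print(shift, ciphertext.translate(table))
As with the shell version, only the line for shift 5 comes out as readable Latin.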


13 February 2013

Jan Wagner: Searching painting: Strawberry and Man on Pumps with Sparkling wine

We have been searching for a motif for a painting, or a painting itself, for quite a while now. It should find its place in our living room. Unfortunately we haven't found one that matched both our tastes and/or was compatible with the rest of our living room. Yesterday we stumbled upon a motif which was quite nice, but it was too small, and it was neither possible to get it in a bigger size nor to find out who the original painter of the picture was. Now we are searching for the name of the picture and/or the painter. Any hints appreciated at 'blog - at - waja - dot - info'. A photo with higher resolution can be found here. Update: Okay ... an unknown person (many thanks) hinted to me that Google image search is the tool that could be very useful. Google revealed that the painter is Inna Panasenko. P.S. Is it noticeable that I'm in vacation mode? ;)

28 August 2011

Micha Lenk: Finally transitioning to a new GnuPG key

Finally I managed to write up a transition statement for my not so new, but stronger GnuPG key. See below:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1,SHA256
I am transitioning my GPG key from an old 1024-bit key to a new 4096-bit key.
The old key will continue to be valid for some time, but I prefer all new
correspondence to be encrypted for the new key, and will be making all
signatures going forward with the new key.
If you have signed my old key, I would appreciate signatures on my new key as
well, provided that your signing policy permits that without reauthenticating
me.
The old key, which I am transitioning away from, is:
pub   1024D/99E141B4 2004-02-10
      Key fingerprint = 25FE 4741 4770 0558 949D  1DB1 58DD 3FE2 99E1 41B4
The new key, to which I am transitioning, is:
pub   4096R/51B85139 2009-06-18
      Key fingerprint = A3EB B41F C5AB D675 CEE4  1C45 EA6C A6B9 51B8 5139
Thanks in advance.
Cheers,
Micha Lenk
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
iEYEARECAAYFAk5aOnkACgkQWN0/4pnhQbTxPgCgzRhREZlQiKJyI9UIdJLLs3Zq
bH4AnA1myFxgDWM7aUMHXgvvsujLTjiWiQIcBAEBCAAGBQJOWjp5AAoJEOpsprlR
uFE5gCsP/0dtCUPl9aQHV1MbQl7+bMofpsC2ikkpdZmrzi68jTG16We49BuzY+PV
S8FhXqg17/YxhKYDnDNTowfztUOyjAOJxy5vrqm3X5xiLwTqN3js9mra+vb4s35k
EVbKMzLLDhj3i0FjeargWEmJmm9cVhaZWKvOvQUhDJAilqvEQ0/50P7B8I+1YvtV
UHoQKbweTljVSlK5R1YPPy9i6r2/oZBYxK4nrknWwS+qPQ5luqelJd+mZdgQ6tow
7HIvtmPgCblJ+hYZWFpoZK6vxs8RaBbuCQcKwYArNhZT/v4TeD/LAaUmIkbQMyKV
J2TKuEHya4+5GMbtg6BGKeiZpleEHPnAq1AfvGpz6opkxjxCLG3RO3X8D3EuM3RW
mkq60mWM8+Zwu1yKbb62iHplp6jpyiQgdjJlB6eHjX7SdY7CvHgYZxGDx4kclP0/
HAdig2U1T+nG6Nn5XflmmKwvNLuKlQlIwJ5NeXyCONRnYvdomQn2hgvkjwMLCdFh
ulIhxa4UvDY7/aQNPZeOrvDHb2XYpiV3TwA9hLgQXXWd0FPmUMVVPpKpRjilaWth
Mtq/QiMGP5Mq/YxgLInRZHGyajDtE67RD4+RgHYOP50cP3UGPoB4ncc2EEtM2kfe
BzJrXPHmdtyGEA6Korl0YwUTRnYsqkqkY1VqDsO0UkOLlV7RAzb9
=9vw1
-----END PGP SIGNATURE-----

23 April 2010

Jonathan McDowell: Out, damn'd PGP v3

Nearly a year ago people started worrying about the complexity of SHA-1 being reduced and the potential availability of viable attacks against things such as PGP keys that used SHA-1. Many people (myself included) generated a new key, or updated preferences on keys that were otherwise strong enough. There were worries about what this might mean for Debian. We were getting ahead of ourselves a bit though. Firstly there haven't been any public viable attacks that I'm aware of (though of course this doesn't mean we shouldn't continue to migrate away), but secondly there's a much easier method of attack. PGP v3 keys. To quote RFC4880:

V3 keys are deprecated. They contain three weaknesses. First, it is relatively easy to construct a V3 key that has the same Key ID as any other key because the Key ID is simply the low 64 bits of the public modulus. Secondly, because the fingerprint of a V3 key hashes the key material, but not its length, there is an increased opportunity for fingerprint collisions. Third, there are weaknesses in the MD5 hash algorithm that make developers prefer other algorithms. See below for a fuller discussion of Key IDs and fingerprints.
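To make the first weakness concrete: a V3 Key ID is nothing more than the low 64 bits of the RSA public modulus, so anyone who can arrange for a modulus to end in the right bits gets a colliding Key ID. A small illustrative sketch follows; the modulus and target ID in it are invented for demonstration and not taken from any real key:
#!/usr/bin/env python3
# A V3 (RSA) OpenPGP Key ID is simply the low 64 bits of the public
# modulus n, so forging a key with a chosen Key ID only requires finding
# primes p, q whose product ends in the desired 64 bits.

def v3_key_id(n: int) -> str:
    # Take the low 64 bits of the modulus and format them as hex.
    return format(n & 0xFFFFFFFFFFFFFFFF, "016X")

# Invented toy modulus, chosen only so its low 64 bits spell out a
# recognisable (fake) Key ID; real moduli are of course much larger.
n = 0xC0FFEE00DEADBEEF01234567
print("0x" + v3_key_id(n))   # prints 0xDEADBEEF01234567
V4 fingerprints avoid this by hashing the full key material together with its length and creation time.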
At the time of writing Debian has 21 remaining v3 keys. This is a significant improvement over a year ago, when we had 200, but it's still 21 more than I'd like. I've been chasing people since last May (starting with those who had v3 + v4 keys, all of whom now only have a v4 key) and we're down to the stragglers. So it's time to name and shame, in the hope of kicking them into action. The following keys are what's left (doesn't match the currently active keyring because we've had a few replacements since the last promote):

0x0D2156BD3D97C149 Michael Stone <mstone>
0x225FD911CD269B31 Carlos Barros <cbf>
0x31E73F14E298966D James R. Van Zandt <jrv>
0x366CD3FEEBC11B01 Chris Waters <xtifr>
0x37A73FE355E8BC4D Frederic Lepied <lepied>
0x3E973117DCC528E9 Ardo van Rangelrooij <ardo>
0x5C7A46637953F711 Rich Sahlender <rsahlen>
0x5D6560F85F30F005 Craig Brozefsky <craig>
0x6B0E322836129171 Jim Westveer <jwest>
0x723724B4A5B6DD31 Christian Meder <meder>
0x7629B22ED71DAABD Adrian Bridgett <bridgett>
0x8FFC405EFD5A67CD Adam Di Carlo <aph>
0xB0D269DE17F3D4D1 Matthew Vernon <matthew>
0xBC151FC8D2A913A1 Peter S Galbraith <psg>
0xC1A0A171C2DCD3B1 Jim Mintha <jmintha>
0xC3168EBA23F5ADDB Ian Jackson <iwj>
0xCE951B1160D74C7D Patrick Cole <ltd>
0xE82A8B0D57137FE5 Paul Seelig <pseelig>
0xF20E242CE77AC835 Brian White <bcwhite>
0xFBAA570C3087194D Alan Bain <afrb2>
0xFFD1B4AC7C19FD19 David Engel <david>

Of these keys only 2 voted in the recent DPL election. 8 have failed to make any response to my mails (3 since last August). Only 9 have uploaded a package since August 2008. And 10 were already known to the MIA database. Some of them have stated they'll sort out a new key, but not yet done so.

If you are one of these people, please either get a new key sorted and signed and reply to the mails I've sent you, or reply and say you no longer wish to be involved in Debian. And if you know any of these people, encourage them to get a new key sorted and offer to sign it for them.

1 March 2008

Anthony Towns: Been a while...

So, sometime over the past few weeks I clocked up ten years as a Debian developer:
From: Anthony Towns <aj@humbug.org.au>
Subject: Wannabe maintainer.
Date: Sun, 8 Feb 1998 18:35:28 +1000 (EST)
To: new-maintainer@debian.org
Hello world,
I'd like to become a debian maintainer.
I'd like an account on master, and for it to be subscribed to the
debian-private list.
My preferred login on master would have been aj, but as that's taken
ajt or atowns would be great.
I've run a debian system at home for half a year, and a system at work
for about two months. I've run Linux for two and a half years at home,
two years at work. I've been active in my local linux users' group for
just over a year. I've written a few programs, and am part way through
packaging the distributed.net personal proxy for Debian (pending
approval for non-free distribution from distributed.net).
I've read the Debian Social Contract.
My PGP public key is attached, and also available as
<http://azure.humbug.org.au/~aj/aj_key.asc>.
If there's anything more you need to know, please email me.
Thanks in advance.
Cheers,
aj
-- 
Anthony Towns <aj@humbug.org.au> <http://azure.humbug.org.au/~aj/>
I don't speak for anyone save myself. PGP encrypted mail preferred.
On Netscape GPLing their browser: ``How can you trust a browser that
ANYONE can hack? For the secure choice, choose Microsoft.''
        -- <oryx@pobox.com> in a comment on slashdot.org
Apparently that also means I’ve clocked up ten and a half years as a Debian user; I think my previous two years of Linux (mid-95 to mid-97) were split between Slackware and Red Hat, though I couldn’t say for sure at this point. There’s already been a few other grand ten-year reviews, such as Joey Hess’s twenty-part serial, or LWN’s week-by-week review, or ONLamp’s interview with Bruce Perens, Eric Raymond and Michael Tiemann on ten years of “open source”. I don’t think I’m going to try matching that sort of depth though, so here are some of my highlights (after the break).
Hrm, this is going on longer than I’d hoped. Oh well, to be continued!

20 February 2008

MJ Ray: SPI Meeting Wed 20 Feb 1900Z irc.oftc.net/#spi

The notice for the February meeting was posted on Monday. Late again - it's meant to be a week's notice, according to this. I'm disappointed that the current SPI board still hasn't got basic member communications working, even as the year end approaches. Is a default cron-job mail (like we use at TTLLP) so difficult to set up? Anyway, one of the few interesting items on the agenda is
"Decision on new board member"
but I've not applied for the post for three reasons. Firstly, on reflection, enough SPI members placed me as last choice (which I think is mostly based on old mud-slinging, but it still happened) in its last election that I don't think being appointed would give me a strong enough mandate for the changes I want (more transparency, consultation and work towards meeting the Better Business Bureau standards). Secondly, I'm finding that my Cooperatives-SW board post is taking more time this year. Finally, I hope to be appointed to a representative post in another organisation soon and it's nearly always better to work where I'm obviously wanted. I've asked if the applications of any candidates for the SPI board post will be posted for member review somewhere (because members are supposed to oversee the board, but how can that happen if we don't see basic info on them?) and for there to be a members Q+A during the meeting, but I don't know whether there's more than one candidate or whether the Q+A will happen. Members are a bit in the dark on this one. Again. I hope to see some other SPI members online about 1900Z tonight (20 Feb). I should actually be online during this one, for a change.
