Search Results: "mms"

31 October 2023

Russell Coker: Links October 2023

The Daily Kos has an interesting article about a new more effective method of desalination [1]. Here is a video of a crazy guy zapping things with 100 car batteries [2]. This is something you should avoid if you want to die of natural causes. Does dying while making a science video count for a Darwin Award? A Hacker News comment has an interesting explanation of Unix signals [3]. Interesting documentary on the rise of mega corporations [4]. We need to split up Google, Facebook, and Amazon ASAP. Also every phone platform should have competing app stores. Dave Taht gave an interesting LCA lecture about Internet congestion control [5]. He also referenced a web site about projects to alleviate the buffer bloat problem [6]. This tiny event based sensor is an interesting product [7]. It could lead to some interesting (but possibly invasive) technological developments in phones. Tara Barnett's Everything Open lecture "Swiss Army GLAM" had some interesting ideas for community software development [8]. Having lots of small programs communicating with APIs is an interesting way to get people into development. Actually Hardcore Overclocking has an interesting YouTube video about the differences between x8 and x16 DDR4 DIMMs [9]. Interesting YouTube video from someone who helped the Kurds defend against Turkey about how war tunnels work [10]. He makes a strong case that the Israeli invasion of the Gaza Strip won't be easy or pleasant.

30 June 2023

Russell Coker: Links June 2023

Tablet Magazine has an interesting article about Jewish men who fought in the military for Nazi Germany [1]. I'm surprised that they didn't frag their colleagues. Dropbox has an insightful interview with a lawyer about the future of machine learning in the legal profession [2]. This seems like it could give real benefits to society in giving legal assistance to more people and less uncertainty about the result of court cases. It could also find unclear laws for legislators who want to improve things. Some people have started a project to produce a free software version of Victoria 2 [3]. Hopefully OpenVic will become as successful as FreeCiv and FreeCraft! Hackster has an interesting article about work to create a machine that does a realistic impersonation of someone's handwriting [4]. The aim is to be good enough to fool people who want manually written assignments. Ars Technica has an interesting article about a side channel attack using the power LEDs of smart-card readers to extract cryptographic secret key data [5]. As usual for articles about side channels it turns out to be really hard to do, and their proof of concept involved recording a card being repeatedly scanned for an hour. This doesn't mean it's a non-issue; they should harden readers against this. Vice has an interesting article on the search for chemical remnants of ancient organisms in 1.6 billion year old fossils [6]. Bleeping Computer has an interesting article about pirate Windows 10 ISOs infecting systems with EFI malware [7]. That's a particularly nasty attack and shows yet another down-side to commercial software. For Linux the ISOs are always clean and the systems aren't contaminated. The Register has an interesting article about a robot being used for chilled RAM attacks to get access to boot time secrets [8]. They monitor EMF output to stop it at the same time in each boot, which I consider the most noteworthy part of this attack. The BBC has an interesting article about personalised medicine [9]. There are 400 million people in the world with rare diseases and an estimated 60 million of them will die before the age of 5. Personalised medicine can save many lives. Let's hope it is used outside the first world. Knuth's thoughts about ChatGPT are interesting [10]. Interesting article about Brown M&Ms and assessing the likely quality of work from a devops team [11]. The ABC has an interesting article about the use of AI and robot traps to catch feral cats [12].

12 February 2023

Russell Coker: T320 iDRAC Failure and new HP Z640

The Dell T320
Almost 2 years ago I made a Dell PowerEdge T320 my home server [1]. It was a decent upgrade from the PowerEdge T110 II that I had used previously. One benefit of that system was that I needed more RAM, and the PowerEdge T1xx series use unbuffered ECC RAM which is unreasonably expensive, as well as the DIMMs tending to be smaller (no Load Reduced DIMMs) and only having 4 slots. As I had bought two T320s I put all the RAM in a single server getting a total of 96G, and then put some cheap DIMMs in the other one and sold it with 48G. The T320 has all the server reliability features including hot-swap redundant PSUs and hot-swap hard drives. One thing it doesn't have redundancy on is the motherboard management system known as iDRAC. 3 days ago my suburb had a power outage, and when power came back on the T320 gave an error message about a failure to initialise the iDRAC and put all the fans on maximum speed, which is extremely loud. When a T320 is running in a room that's not particularly hot and it doesn't have SAS disks it's a very quiet server, one of the quietest I've ever owned. When it goes into emergency cooling mode due to iDRAC failure it's loud enough to be heard from the other end of the house with doors closed in between. Googling this failure gave a few possible answers. One was for some combination of booting with the iDRAC button held down, turning off for a while and booting with the iDRAC button held down again, etc (this didn't work). One was for putting an iDRAC firmware file on the SD card so the iDRAC could automatically load it (which I tested even though I didn't have the flashing LED which indicates that it is likely to work, but it didn't do anything). The last was to enable the serial console and configure the iDRAC to load new firmware via TFTP; I didn't get an iDRAC message from the serial console, just the regular BIOS stuff. So it looks like I'll have to sell the T320 for parts or find someone who wants to run it in its current form. Currently to boot it I have to press F1 a few times to bypass BIOS messages (someone on the Internet reported making a device to key-jam F1). Then when it boots it's unreasonably loud, but apparently if you are really keen you can buy fans that have temperature sensors to control their own speed and bypass the motherboard control. I'd appreciate any advice on how to get this going. At this stage I'm not going to go back to it, but if I can get it working properly I can sell it for a decent price.
The HP Z640
I've replaced the T320 with a HP Z640 workstation with 32G of RAM which I had recently bought to play with Stable Diffusion. There were hundreds of Z640 workstations with NVidia Quadro M6000 GPUs going on eBay for under $400 each; it looked like a company that did a lot of ML work had either gone bankrupt or upgraded all their employees' systems. The price for the systems was surprisingly cheap; at regular eBay prices it seems that the GPU and the RAM go for about the same price as the whole system. It turned out that Stable Diffusion didn't like the video card in my setup for unknown reasons, but also that the E5-1650v3 CPU could render an image in 15 minutes, which is fast enough to test it out but not fast enough for serious use. I had been planning to blog about that. When I bought the T320 server the DDR3 Registered ECC RAM it uses cost about $100 for 8*8G DIMMs, with 16G DIMMs being much more expensive. Now the DDR4 Registered ECC RAM used by my Z640 goes for about $120 for 2*16G DIMMs.
In the near future I'll upgrade that system to 64G of RAM. It's disappointing that the Z640 only has 4 DIMM sockets per CPU, so if you get a single-CPU version (as I did) and don't get the really expensive Load Reduced RAM then you are limited to 64G. So the supposed capacity benefit of going from DDR3 to DDR4 doesn't seem to apply to this upgrade. The Z640 I got has 4 bays for hot-swap SAS/SATA 2.5" SSD/HDDs and 2 internal bays for 3.5" hard drives. The T320 has 8*3.5" hot swap bays and I had 3 hard drives in them in a BTRFS RAID-10 configuration. Currently I've got one hard drive attached via USB but that's obviously not a long-term solution. The 3 hard drives are 4TB; they have been in service since 4TB was a good size. I have a spare 8TB disk so I could buy a second ($179 for a shingled HDD) to make an 8TB RAID-1 array. The other option is to pay $369 for a 4TB SSD (or $389 for a 4TB NVMe + $10 for the PCIe card) to keep the 3 device RAID-10. As tempting as 4TB SSDs are, I'll probably get a cheap 8TB disk which will take capacity from 6TB to 8TB, and I could use some extra 4TB disks for backups. I haven't played with the AMT/MEBX features on this system; I presume that they will work the same way as AMT/MEBX on the HP Z420 I've used previously [2].
Update: HP has free updates for the BIOS etc available here [3]. Unfortunately it seems to require loading a kernel module supplied by HP to do this. This is a bad thing; kernel code that isn't in the mainline kernel is either of poor quality or isn't licensed correctly. I had to change my monitoring system to alert on temperatures over 100% of the high range, while on the T320 I had it set at 95% of high and never got warnings. This is disappointing; enterprise class gear running in a reasonably cool environment (ambient temperature of about 22C) should be able to run all CPU cores at full performance without hitting 95% of the high temperature level.
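Going back to the storage options above: if the cheap 8TB disk wins, moving from the 3-disk array to a 2*8TB BTRFS RAID-1 could look roughly like this. A minimal sketch only, with device names and mount points assumed for illustration:
# create the new RAID-1 filesystem on the two 8TB disks (example device names)
mkfs.btrfs -m raid1 -d raid1 /dev/sdc /dev/sdd
mount /dev/sdc /mnt/new
# copy everything across, preserving hard links, ACLs, and xattrs
rsync -aHAX /mnt/old/ /mnt/new/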

6 February 2023

Reproducible Builds: Reproducible Builds in January 2023

Welcome to the first report for 2023 from the Reproducible Builds project! In these reports we try and outline the most important things that we have been up to over the past month, as well as the most important things in/around the community. As a quick recap, the motivation behind the reproducible builds effort is to ensure no malicious flaws can be deliberately introduced during compilation and distribution of the software that we run on our devices. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.


News
In a curious turn of events, GitHub first announced this month that the checksums of various Git archives may be subject to change, specifically because:
the default compression for Git archives has recently changed. As a result, archives downloaded from GitHub may have different checksums even though the contents are completely unchanged.
This change (which was brought up on our mailing list last October) would have had quite wide-ranging implications for anyone wishing to validate and verify downloaded archives using cryptographic signatures. However, GitHub reversed this decision, updating their original announcement with a message that "We are reverting this change for now. More details to follow." It appears that this was informed in part by an in-depth discussion in the GitHub Community issue tracker.
The Bundesamt für Sicherheit in der Informationstechnik (BSI) (trans: "The Federal Office for Information Security") is the agency in charge of managing computer and communication security for the German federal government. They recently produced a report that touches on attacks on software supply-chains (Supply-Chain-Angriff). (German PDF)
Contributor Seb35 updated our website to fix broken links to Tails' Git repository [ ][ ], and Holger updated a large number of pages around our recent summit in Venice [ ][ ][ ][ ].
Noak Jönsson has written an interesting paper entitled The State of Software Diversity in the Software Supply Chain of Ethereum Clients. As the paper outlines:
In this report, the software supply chains of the most popular Ethereum clients are cataloged and analyzed. The dependency graphs of Ethereum clients developed in Go, Rust, and Java, are studied. These clients are Geth, Prysm, OpenEthereum, Lighthouse, Besu, and Teku. To do so, their dependency graphs are transformed into a unified format. Quantitative metrics are used to depict the software supply chain of the blockchain. The results show a clear difference in the size of the software supply chain required for the execution layer and consensus layer of Ethereum.

Yongkui Han posted to our mailing list discussing making reproducible builds & GitBOM work together without gitBOM-ID embedding. GitBOM (now renamed to OmniBOR) is a project to enable automatic, verifiable artifact resolution across today's diverse software supply-chains [ ]. In addition, Fabian Keil wrote to us asking whether anyone in the community would be at Chemnitz Linux Days 2023, which is due to take place on 11th and 12th March (event info). Separate to this, Akihiro Suda posted to our mailing list just after the end of the month with a status report of bit-for-bit reproducible Docker/OCI images. As Akihiro mentions in their post, they will be giving a talk at FOSDEM in the Containers devroom titled "Bit-for-bit reproducible builds with Dockerfile", adding that "my talk will also mention how to pin the apt/dnf/apk/pacman packages with my repro-get tool".
The extremely popular Signal messenger app added upstream support for the SOURCE_DATE_EPOCH environment variable this month. This means that release tarballs of the Signal desktop client do not embed nondeterministic release information. [ ][ ]
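As a concrete illustration of how SOURCE_DATE_EPOCH is typically consumed, here is a minimal shell sketch for producing a deterministic release tarball (the directory and file names are placeholders, not Signal's actual build commands); the reproducible-builds.org documentation describes this pattern in more detail:
# use the timestamp of the last commit as the canonical build date
export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
# sort entries, clamp mtimes, and drop owner info so the output is stable;
# gzip -n omits its own embedded timestamp
tar --sort=name --mtime="@${SOURCE_DATE_EPOCH}" \
    --owner=0 --group=0 --numeric-owner \
    -cf - myproject/ | gzip -n > release.tar.gz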

Distribution work

F-Droid & Android
There was a very large number of changes in the F-Droid and wider Android ecosystem this month. On January 15th, a blog post entitled Towards a reproducible F-Droid was published on the F-Droid website, outlining the reasons why F-Droid signs published APKs with its own keys and how reproducible builds allow using upstream developers' keys instead. In particular:
In response to [ ] criticisms, we started encouraging new apps to enable reproducible builds. It turns out that reproducible builds are not so difficult to achieve for many apps. In the past few months we've gotten many more reproducible apps in F-Droid than before. Currently we can't highlight which apps are reproducible in the client, so maybe you haven't noticed that there are many new apps signed with upstream developers' keys.
(There was a discussion about this post on Hacker News.) In addition:
  • F-Droid added 13 apps published with reproducible builds this month. [ ]
  • FC Stegerman outlined a bug where baseline.profm files are nondeterministic, developed a workaround, and provided all the details required for a fix. As they note, this issue has now been fixed but the fix is not yet part of an official Android Gradle plugin release.
  • GitLab user Parwor discovered that the number of CPU cores can affect the reproducibility of .dex files. [ ]
  • FC Stegerman also announced the 0.2.0 and 0.2.1 releases of reproducible-apk-tools, a suite of tools to help make .apk files reproducible. Several new subcommands and scripts were added, and a number of bugs were fixed as well [ ][ ]. They also updated the F-Droid website to improve the reproducibility-related documentation. [ ][ ]
  • On the F-Droid issue tracker, FC Stegerman discussed reproducible builds with one of the developers of the Threema messenger app and reported that Android SDK build-tools 31.0.0 and 32.0.0 (unlike earlier and later versions) have a zipalign command that produces incorrect padding.
  • A number of bugs related to reproducibility were discovered in Android itself: firstly, the non-deterministic order of .zip entries in .apk files [ ], and secondly, newline differences between building on Windows versus Linux that can also make builds unreproducible. [ ] (Note that these links may require a Google account to view.)
  • And just before the end of the month, FC Stegerman started a thread on our mailing list on the topic of hiding data/code in APK embedded signatures, which has been made possible by the Android APK Signature Scheme v2/v3. As part of this, they made an Android app called sigblock-code-poc that reads the APK Signing block of its own APK and extracts a payload in order to alter its behaviour.

Debian
As mentioned in last month's report, Vagrant Cascadian has been organising a series of online sprints in order to clear the huge backlog of reproducible builds patches submitted, by performing NMUs (Non-Maintainer Uploads). During January, a sprint took place on the 10th, resulting in a number of uploads. During this sprint, Holger Levsen filed Debian bug #1028615 to request that the tracker.debian.org service display results of reproducible rebuilds, not just reproducible CI results. Elsewhere in Debian, strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, version 1.13.1-1 was uploaded to Debian unstable by Holger Levsen, including a fix by FC Stegerman (obfusk) to update a regular expression for the latest version of file(1) [ ]. (#1028892) Lastly, 65 reviews of Debian packages were added, 21 were updated and 35 were removed this month, adding to our knowledge about identified issues.
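For anyone unfamiliar with strip-nondeterminism, a typical invocation looks something like the following (the file name is a placeholder); it rewrites the artifact in place, normalising embedded timestamps and similar sources of nondeterminism:
# normalise a freshly-built artifact in place
strip-nondeterminism build/libfoo.jar
# timestamps are clamped to $SOURCE_DATE_EPOCH when it is set in the environment
SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct) strip-nondeterminism build/libfoo.jar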

Other distributions
In other distributions:

diffoscope
diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb made the following changes to diffoscope, including preparing and uploading versions 231, 232, 233 and 234 to Debian:
  • No need for the from __future__ import print_function import anymore. [ ]
  • Comment and tidy the extras_require.json handling. [ ]
  • Split inline Python code to generate test Recommends into a separate Python script. [ ]
  • Update debian/tests/control after merging PyPDF support. [ ]
  • Correctly catch segfaulting cd-iccdump binary. [ ]
  • Drop some old debugging code. [ ]
  • Allow ICC tests to (temporarily) fail. [ ]
In addition, FC Stegerman (obfusk) made a number of changes, including:
  • Updating the test_text_proper_indentation test to support the latest version(s) of file(1). [ ]
  • Use an extras_require.json file to store some build/release metadata, instead of accessing the internet. [ ]
  • Updating an APK-related file(1) regular expression. [ ]
  • On the diffoscope.org website, de-duplicate contributors by e-mail. [ ]
Lastly, Sam James added support for PyPDF version 3 [ ] and Vagrant Cascadian updated a handful of tool references for GNU Guix. [ ][ ]

Upstream patches
The Reproducible Builds project attempts to fix as many currently-unreproducible packages as possible. This month, we wrote a large number of such patches, including:

Testing framework
The Reproducible Builds project operates a comprehensive testing framework at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, the following changes were made by Holger Levsen:
  • Node changes:
  • Debian-related changes:
    • Only keep diffoscope's HTML output (i.e. no .json or .txt) for LTS suites and older in order to save disk space on the Jenkins host. [ ]
    • Re-create pbuilder base less frequently for the stretch, bookworm and experimental suites. [ ]
  • OpenWrt-related changes:
    • Add gcc-multilib to OPENWRT_HOST_PACKAGES and install it on the nodes that need it. [ ]
    • Detect more problems in the health check when failing to build OpenWrt. [ ]
  • Misc changes:
    • Update the chroot-run script to correctly manage /dev and /dev/pts. [ ][ ][ ]
    • Update the Jenkins shell monitor script to collect disk stats less frequently [ ] and to include various directory stats. [ ][ ]
    • Update the real year in the configuration in order to be able to detect whether a node is running in the future or not. [ ]
    • Bump copyright years in the default page footer. [ ]
In addition, Christian Marangi submitted a patch to build OpenWrt packages with the V=s flag to enable debugging. [ ]
If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can get in touch with us via:

2 February 2023

John Goerzen: Using Yggdrasil As an Automatic Mesh Fabric to Connect All Your Docker Containers, VMs, and Servers

Sometimes you might want to run Docker containers on more than one host. Maybe you want to run some at one hosting facility, some at another, and so forth. Maybe you'd like to run VMs at various places, and let them talk to Docker containers and bare metal servers wherever they are. And maybe you'd like to be able to easily migrate any of these from one provider to another. There are all sorts of very complicated ways to set all this stuff up. But there's also a simple one: Yggdrasil. My blog post Make the Internet Yours Again With an Instant Mesh Network explains some of the possibilities of Yggdrasil in general terms. Here I want to show you how to use Yggdrasil to solve some of these issues more specifically. Because Yggdrasil is always encrypted, some of the security lifting is done for us.

Background
Often in Docker, we connect multiple containers to a single network that runs on a given host. That much is easy. Once you start talking about containers on multiple hosts, then you start adding layers and layers of complexity. Once you start talking multiple providers, maybe multiple continents, then the complexity can increase. And, if you want to integrate everything from bare metal servers to VMs into this, well, there are ways, but they're not easy. I'm a believer in the KISS principle. Let's not make things complex when we don't have to.

Enter Yggdrasil
As I've explained before, Yggdrasil can automatically form a global mesh network. This is pretty cool! As most people use it, they join it to the main Yggdrasil network. But Yggdrasil can be run entirely privately as well. You can run your own private mesh, and that's what we'll talk about here. All we have to do is run Yggdrasil inside each container, VM, server, or whatever. We handle some basics of connectivity, and bam! Everything is host- and location-agnostic.

Setup in Docker
The installation of Yggdrasil on a regular system is pretty straightforward. Docker is a bit more complicated for several reasons:
  • It blocks IPv6 inside containers by default
  • The default set of permissions doesn't permit you to set up tunnels inside a container
  • It doesn't typically pass multicast (broadcast) packets
Normally, Yggdrasil could auto-discover peers on a LAN interface. However, aside from some esoteric Docker networking approaches, Docker doesn't permit that. So my approach is going to be setting up one or more Yggdrasil router containers on a given Docker host. All the other containers talk directly to the router container and it's all good.

Basic installation
In my Dockerfile, I have something like this:
FROM jgoerzen/debian-base-security:bullseye
RUN echo "deb http://deb.debian.org/debian bullseye-backports main" >> /etc/apt/sources.list && \
    apt-get --allow-releaseinfo-change update && \
    apt-get -y --no-install-recommends -t bullseye-backports install yggdrasil
...
COPY yggdrasil.conf /etc/yggdrasil/
RUN set -x; \
    chown root:yggdrasil /etc/yggdrasil/yggdrasil.conf && \
    chmod 0750 /etc/yggdrasil/yggdrasil.conf && \
    systemctl enable yggdrasil
The magic parameters to docker run to make Yggdrasil work are:
--cap-add=NET_ADMIN --sysctl net.ipv6.conf.all.disable_ipv6=0 --device=/dev/net/tun:/dev/net/tun
This example uses my docker-debian-base images, so if you use them as well, you'll also need to add their parameters. Note that it is NOT necessary to use --privileged. In fact, due to the network namespaces in use in Docker, this command does not let the container modify the host's networking (unless you use --net=host, which I do not recommend). The --sysctl parameter was the result of a lot of banging my head against the wall. Apparently Docker tries to disable IPv6 in the container by default. Annoying.
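Putting those parameters together, a full invocation might look something like this (the image, container, and network names are placeholders for whatever you use):
docker run -d --name yggdrasil-router \
    --cap-add=NET_ADMIN \
    --sysctl net.ipv6.conf.all.disable_ipv6=0 \
    --device=/dev/net/tun:/dev/net/tun \
    --network mynet -p 12345:12345 \
    my-yggdrasil-image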

Configuration of the router container(s)
The idea is that the router node (or more than one, if you want redundancy) will be the only one to have an open incoming port. Although the normal Yggdrasil case of directly detecting peers in a broadcast domain is more convenient and more robust, this can work pretty well too. You can, of course, generate a template yggdrasil.conf with yggdrasil -genconf like usual. Some things to note for this one (a config sketch follows the list):
  • You'll want to change Listen to something like Listen: ["tls://[::]:12345"] where 12345 is the port number you'll be listening on.
  • You'll want to disable the MulticastInterfaces entirely by just setting it to [] since it doesn't work anyway.
  • If you expose the port to the Internet, you'll certainly want to firewall it to only authorized peers. Setting AllowedPublicKeys is another useful step.
  • If you have more than one router container on a host, each of them will both Listen and act as a client to the others. See below.
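Here is what the relevant parts of a router node's yggdrasil.conf might look like under those notes; the port, key, and interface name are illustrative, and all other generated settings are left as-is:
{
  Listen: ["tls://[::]:12345"]
  MulticastInterfaces: []
  # placeholder key; list the public keys of your own machines here
  AllowedPublicKeys: ["0000000000000000000000000000000000000000000000000000000000000000"]
  IfName: ygg0
}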

Configuration of the non-router nodes
Again, you can start with a simple configuration. Some notes here (a matching sketch follows the list):
  • You'll want to set Peers to something like Peers: ["tls://routernode:12345"] where routernode is the Docker hostname of the router container, and 12345 is its port number as defined above. If you have more than one local router container, you can simply list them all here. Yggdrasil will then fail over nicely if any one of them goes down.
  • Listen should be empty.
  • As above, MulticastInterfaces should be empty.
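Correspondingly, a non-router node's yggdrasil.conf could contain something like the following (hostnames and port as above, all placeholders):
{
  Peers: ["tls://routernode1:12345", "tls://routernode2:12345"]
  Listen: []
  MulticastInterfaces: []
  IfName: ygg0
}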

Using the interfaces
At this point, you should be able to ping6 between your containers. If you have multiple hosts running Docker, you can simply set up the router nodes on each to connect to each other. Now you have direct, secure, container-to-container communication that is host-agnostic! You can also set up Yggdrasil on a bare metal server or VM using standard procedures and everything will just talk nicely!

Security notes
Yggdrasil's mesh is aggressively greedy. It will peer with any node it can find (unless told otherwise) and will find a route to anywhere it can. There are two main ways to make sure your internal comms stay private: by restricting who can talk to your mesh, and by firewalling the Yggdrasil interface. Both can be used, and they can be used simultaneously. By disabling multicast discovery, you eliminate the chance for random machines on the LAN to join the mesh. By making sure that you firewall off (outside of Yggdrasil) who can connect to a Yggdrasil node with a listening port, you can authorize only your own machines. And, by setting AllowedPublicKeys on the nodes with listening ports, you can authenticate the Yggdrasil peers. Note that part of the benefit of the Yggdrasil mesh is normally that you don't have to propagate a configuration change to every participatory node, which is a nice thing in general! You can also run a firewall inside your container (I like firehol for this purpose) and aggressively firewall the IPs that are allowed to connect via the Yggdrasil interface. I like to set a stable interface name like ygg0 in yggdrasil.conf, and then it becomes pretty easy to firewall the services. The Docker parameters that allow Yggdrasil to run are also sufficient to run firehol.
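As a rough illustration of that last point, a firehol.conf fragment restricting the ygg0 interface might look like this; treat it as a hedged sketch rather than a tested config, and the allowed address range is entirely made up:
version 6

# only a known Yggdrasil prefix may reach services over the mesh interface
interface ygg0 ygg src "200:1111:2222::/48"
    policy drop
    server ssh accept
    client all accept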

Naming Yggdrasil peers
You probably don't want to hard-code Yggdrasil IPs all over the place. There are a few solutions:
  • You could run an internal DNS service
  • You can do a bit of scripting around Docker's --add-host command to add things to /etc/hosts (see the sketch below)
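A minimal version of that scripting approach could look like this, with the name, image, and Yggdrasil address purely illustrative:
# publish a stable name for the router's Yggdrasil IPv6 address
# (Yggdrasil addresses live in 200::/7; this one is made up for the example)
docker run -d --add-host "routernode:200:1111:2222:3333:4444:5555:6666:7777" my-app-image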

Other hints & conclusion
Here are some other helpful use cases:
  • If you are migrating between hosts, you could leave your reverse proxy up at both hosts, both pointing to the target containers over Yggdrasil. The targets will be automatically found from both sides of the migration while you wait for DNS caches to update and such.
  • This can make services integrate with local networks a lot more painlessly than they might otherwise.
This is just an idea. The point of Yggdrasil is expanding our ideas of what we can do with a network, so here's one such expansion. Have fun!
Note: This post also has a permanent home on my website, where it may be periodically updated.

28 January 2023

Craig Small: Fixing iCalendar feeds

The local government here has all the schools use an iCalendar feed for things like when school terms start and stop and other school events occur. The department's website also has events like public holidays. The issue is that none of them are marked as all-day events; instead each one is an event that happens at midnight, or one minute past midnight. The events synchronise fine, though Google's calendar is known for synchronising when it feels like it, not at any particular time you would like it to.
Screenshot of Android Calendar showing a tiny bar at midnight which is the event.
Even though a public holiday is all day, they are sent as appointments for midnight. That means on my phone all the events are these tiny bars that appear right up the top of the screen and are easily missed, especially when the focus of the calendar is during the day. On the phone, you can see the tiny purple bar at midnight. This is how the events appear. It's not the calendar's fault; as far as it knows the school events are happening at midnight. You can also see Lunar New Year and Australia Day appear in the all-day part of the calendar and don't scroll away. That's where these events should be.
Why are all the events appearing at midnight? The reason is the feed is incorrectly set up and has the time. The events are sent in an iCalendar format and a typical event looks like this:
BEGIN:VEVENT
DTSTART;TZID=Australia/Sydney:20230206T000000
DTEND;TZID=Australia/Sydney:20230206T000000
SUMMARY:School Term starts
END:VEVENT
The event starting and stopping date and time are the DTSTART and DTEND lines. Both of them have the date of 2023/02/06 or 6th February 2023 and a time of 00:00:00 or midnight. So the calendar is doing the right thing; we need to fix the feed!
The Fix
I wrote a quick and dirty PHP script to download the feed from the real site, change the DTSTART and DTEND lines to all-day events and leave the rest of it alone.
<?php
$site = $_GET['s'];
if ($site == 'site1') {
    $REMOTE_URL='https://site1.example.net/ical_feed';
} elseif ($site == 'site2') {
    $REMOTE_URL='https://site2.example.net/ical_feed';
} else {
    http_response_code(400);
    die();
}

$fp = fopen($REMOTE_URL, "r");
if (!$fp) {
    die("fopen");
}

header('Content-Type: text/calendar');
while (($line = fgets($fp, 1024)) !== false) {
    $line = preg_replace(
        '/^(DTSTART|DTEND);[^:]+:([0-9]{8})T000[01]00/',
        '$1;VALUE=DATE:$2',
        $line);
    echo $line;
}
?>
It's pretty quick and nasty but gets the job done. So what is it doing? You need to save the script on your web server somewhere, possibly with an alias command. The whole point of this is to change the type from a date/time to a date-only event, and to only print the date part of it for the start and end of it. The resulting iCalendar event looks like this:
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230206
DTEND;VALUE=DATE:20230206
SUMMARY:School Term starts
END:VEVENT
The calendar then shows it properly as an all-day event. I would check the script works before doing the next step. You can use things like curl or wget to download it. If you use a normal browser, it will probably just download the translated file. If you're not seeing the right thing then it's probably the PCRE failing. You can check it online with a regex checker such as https://regex101.com. The site has saved my PCRE and match so you have something to start with.
Calendar settings
The last thing to do is to change the URL in your calendar settings. Each calendar system has a different way of doing it. For Google Calendar they provide instructions and you want to follow the section titled "Use a link to add a public calendar". The URL here is not the actual site's URL (which you would have put into the REMOTE_URL variable before) but the URL of your script plus the ?s=site1 part. So if you put your script aliased to /myical.php and the site ID was site1 and your website is www.example.com, the URL would be https://www.example.com/myical.php?s=site1. You should then see the events appear as all-day events on your calendar.

31 December 2022

Guido Günther: Phosh 2022 in retrospect

I wanted to look back at what changed in phosh in 2022 and figured I could share it with you. I'll be focusing on things very close to the mobile shell; for a broader overview see Evangelos' upcoming FOSDEM talk.
Some numbers
We're usually aiming for a phosh release at the end of each month. In 2022 we did 10 releases like that: 7 major releases (bumping the middle version number) and three betas. We skipped the April and November releases. We also did one bug fix release out of line (bumping the last bit of the version number). I hope we can keep that cadence in 2023 as it allows us to get changes to users in a timely fashion (thus closing usability gaps as early as possible) as well as giving distributions a way to plan ahead. Ideally we'd not skip any release but sometimes real life just interferes. Those releases contain code contributions from about 20 different people and translations from about 30 translators. These numbers are roughly the same as 2021, which is great. Thanks everyone! In phosh's git repository we had a bit over 730 non-merge commits (roughly 2 per day), which is about 10% less than in 2021. Looking closer, this is easily compensated by commits to phoc (which needed quite some work for the gestures) and phosh-mobile-settings, which didn't exist in 2021.
User visible features
Most notable new features are likely the swipe gestures for top and bottom bar, the possibility to use the quick settings on the lock screen, as well as the style refresh driven by Sam Hewitt that e.g. touched the modal dialogs (but also sliders, dialpads, etc):
[screenshots: style refresh; swipe up gesture]
We also added the possibility to have custom widgets via loadable plugins on the lock screen so the user can decide which information should be available. We currently ship a number of plugins; these are maintained within phosh's source tree, although out of tree plugins should be doable too. There's a settings application (the above mentioned phosh-mobile-settings) to enable these. It also allows those plugins to have individual preferences:
[screenshots: a plugin; plugin preferences]
Speaking of configurability: scale-to-fit settings (to work around applications that don't fit the screen) and haptic/LED feedback are now configurable without resorting to the command line:
[screenshots: scale to fit; feedbackd settings]
We can also have device specific settings, which helps to temporarily accumulate special workarounds without affecting other phones. Other user visible features include the ability to shuffle the digits on the lockscreen's keypad, a VPN quick setting, improved screenshot support and automatic high contrast theme switching when in bright sunlight (based on ambient sensor readings) as shown here. As mentioned above, Evangelos will talk at FOSDEM 2023 about the broader ecosystem improvements including GNOME, GTK, wlroots, phoc, feedbackd, ModemManager, mmsd, NetworkManager and many others without which phosh wouldn't be possible.
What else
As I wanted a T-shirt for Debconf 2022 in Prizren, I created a logo heavily inspired by those cute tiger images you often see in Southeast Asia. Based on that I also made a first batch of stickers, mostly distributed at FrOSCon 2022:
[photo: phosh stickers]
That's it for 2022. If you want to get involved in phosh testing, development or documentation then just drop by in the matrix room.

10 August 2022

Russell Coker: TSIG Error From SSSD

A common error when using the sssd daemon to authenticate via Active Directory on Linux seems to be:
sssd[$PID]: ; TSIG error with server: tsig verify failure
This is from sssd launching the command nsupdate -g to do dynamic DNS updates. It is possible to specify the DNS server in /etc/sssd/sssd.conf but that will only be used AFTER the default servers have been attempted, so it seems impossible to stop this error from happening. It doesn't appear to do any harm as the correct server is discovered and used eventually. The commands piped to the nsupdate command will be something like:
server $SERVERIP
realm $DOMAIN
update delete $HOSTNAME.$DOMAIN. in A
update add $HOSTNAME.$DOMAIN. 3600 in A $HOSTIP
send
update delete $HOSTNAME.$DOMAIN. in AAAA
send
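For reference, the setting mentioned above lives in the domain section of /etc/sssd/sssd.conf; a minimal sketch, with the domain and server names as placeholders (and, as noted, the default servers are still tried first):
[domain/example.com]
dyndns_update = true
# fallback server for dynamic DNS updates
dyndns_server = dc1.example.com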

31 July 2022

Russell Coker: Workstations With ECC RAM

The last new PC I bought was a Dell PowerEdge T110II in 2013. That model had been out for a while and I got it for under $2000. Since then the CPI has gone up by about 20% so it's probably about $2000 in today's money. Currently Dell has a special on the T150 tower server (the latest replacement for the T110II) which has a G6405T CPU that isn't even twice as fast as the i3-3220 (3746 vs 2219) in the T110II according to passmark.com (AKA cpubenchmark.net). The special price is $2600. I can't remember the details of my choices when purchasing the T110II but I recall that CPU speed wasn't a priority and I wanted a cheap reliable server for storage and for light desktop use. So it seems that the current entry model in the Dell T1xx server line is less than twice as fast as it was in 2013 while costing about 25% more! An option is to spend an extra $989 to get a Xeon E-2378 which delivers a reasonable 18,248 in that benchmark. The upside of a T150 is that it uses buffered DDR4 ECC RAM which is pretty cheap nowadays; you can get 32G for about $120. For systems sold as workstations (as opposed to T1xx servers that make great workstations but aren't described as such) Dell has the Precision line. The Precision 3260 Compact Workstation currently starts at $1740; it has a fast CPU but takes SO-DIMMs and doesn't come with ECC RAM. So to use it as a proper workstation you need to discard the RAM and buy DDR5 unbuffered/unregistered ECC SO-DIMMs which don't seem to be on sale yet. The Precision 3460 is slightly larger, slightly more expensive, and also takes SO-DIMMs. The Precision 3660 starts at $2550 and takes unbuffered DDR5 ECC RAM which is available and costs half as much as the SO-DIMM equivalent would cost (if you could even buy it), but the general trend in RAM prices is that unbuffered ECC RAM is more expensive than buffered ECC RAM. The upside to Precision workstations is that the range of CPUs available is significantly faster than for the T150. The HP web site doesn't offer prices on their Z workstations and is generally worse than the Dell web site in most ways. Overall I'm disappointed in the range of workstations available now. As an aside, if anyone knows of any other company selling workstations in Australia that support ECC RAM then please let me know.

23 February 2022

Jonathan McDowell: Upgrading my home internet; a story of yak shaving

[photo: RB5009]
This has ended up longer than I expected. I'll write up posts about some of the individual steps with some more details at some point, but this is an overview of the yak shaving I engaged in. The TL;DR is:

The desire for a faster connection
When I migrated my home connection to FTTP I kept the same 80M/20M profile I'd had on FTTC. I didn't have a pressing need for faster, and I saved money because I was no longer paying for the phone line portion. I wanted more, but at the time I think the only option was for a 160M/30M profile instead, and I didn't need it and it wasn't enough better to convince me. Time passed and BT rolled out their GigE (really 900M) download option. And again, I didn't need it, but I wanted it. My provider, Aquiss, initially didn't offer this (I think they had up to 330M download options available by this point). So I stayed on 80M/20M. And the only time I really wanted it to be faster was when pushing off-site backups to rsync.net. Of course, we've had the pandemic, and that's involved 2 adults working from home with plenty of video calls throughout the day. The 80M/20M connection has proved rock solid for this, so again, I didn't feel an upgrade was justified. We got a 4K capable TV last year and while the bandwidth usage for 4K streaming is noticeably higher, again the connection can handle it no problem. At some point last year I noticed Aquiss had added speed options all the way to 900M down. At the end of the year I accepted a new role, which is fully remote, so I had a bit of an acceptance about the fact that I wasn't going back into an office any time soon. The combination (and the desire for the increased upload speed) finally allowed me to justify the upgrade to myself.

Testing the current setup for bottlenecks
The first thing to do was see whether my internal network could cope with an upgrade. I'm mostly running Cat6 GigE so I wasn't worried about that side of things. However I'm using an RB3011 as my core router, and while it has some coprocessors for routing acceleration they're not supported under mainline Linux (and unlikely to be any time soon). So I had to benchmark what it was capable of routing. I run a handful of VLANs within my home network, with stateful firewalling between them, so I felt that would be a good approximation of the maximum speed to the outside world I might be able to get if I had the external connection upgraded. I went for the easy approach and fired up iPerf3 on 2 hosts, both connected via ethernet but on separate networks, so routed through the RB3011. That resulted in slightly more than a 300Mb/s throughput. Ok. I confirmed that I could get 900Mb/s+ on 2 hosts both on the same network, just to be sure there wasn't some other issue I was missing. Nope, so unsurprisingly the router was the bottleneck. So, to upgrade my internet speed I needed to upgrade my router. I could just buy something off the shelf, but I like being able to run Debian (or OpenWRT) on the router rather than some horrible vendor firmware. Luckily MikroTik launched the RB5009 towards the end of last year. RouterOS is probably more than capable, but what really interested me was the fact it's an ARM64 platform based on an Armada 7040, which is pretty well supported in mainline kernels already. There's a 10G connection from the internal switch to the CPU, as well as a 2.5Gb/s ethernet port and a 10G SFP+ cage. All good stuff. I ordered one just before the New Year. Thankfully the OpenWRT folk had done all of the hard work on getting a mainline kernel booting on the device; Sergey Sergeev and Robert Marko in particular fighting RouterBoot and producing a suitable device tree file to get everything up and running. I ended up soldering a serial console connection up to aid debugging, and lightly patching Rob's u-boot to fix the incorrect RAM size reported by RouterBoot. A few kernel tweaks were necessary to make the networking entirely happy, and at that point it was time to think about actually doing a replacement.
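The iPerf3 test described above is simple to reproduce; something like the following, with hostnames as placeholders:
# on host A (one VLAN)
iperf3 -s
# on host B (a different VLAN, so traffic crosses the router)
iperf3 -c hosta.internal -t 30
# repeat with -R to test the reverse direction without swapping roles
iperf3 -c hosta.internal -t 30 -R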

Upgrading to Debian 11 (bullseye)
My RB3011 is currently running Debian 10 (buster); an upgrade has been on my todo list, but with the impending replacement I decided I'd hold off and create a new Debian 11 (bullseye) image for the RB5009. Additionally, I don't actually run off the internal NAND in the RB3011; I have a USB flash drive for the rootfs and just the kernel booting off internal NAND. Originally this was for ease of testing, then a combination of needing to figure out a good read-only root solution and a small enough image to fit in the 120M available. For the upgrade I decided to finally look at these pieces. I've ended up with a script that will build me a squashfs image, and the initial rootfs takes care of mounting this and then a tmpfs as an overlay fs. That means I can easily see what pieces are being written to. The RB5009 has a total of 1G NAND so I'm not as space constrained, but the squashfs ends up under 50M. I've added some additional pieces to allow me to pre-populate the overlay fs with updates rather than always needing to rebuild the squashfs image. With that done I decided to try it out on the RB3011; I tweaked the build script to be able to build for armhf (the RB3011) or arm64 (the RB5009) and to deal with some slight differences in configuration between the two (e.g. interface naming). The idea here was to ensure I'd got all the appropriate configuration sorted for the RB5009, in the known-good existing environment. Everything is still on a USB stick at this stage, and the new device has an armhf busybox root, meaning it can be used on either device, and the init script detects the architecture to select the appropriate squashfs to mount.
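The squashfs-plus-tmpfs-overlay arrangement described here boils down to a few mounts in the initial rootfs; a minimal sketch with illustrative paths and device names, not the author's actual script:
# mount the read-only squashfs image and a tmpfs writable layer
mkdir -p /ro /rw /newroot
mount -t squashfs -o ro /dev/sda2 /ro
mount -t tmpfs tmpfs /rw
mkdir -p /rw/upper /rw/work
# combine them; anything written lands in /rw/upper and is easy to inspect
mount -t overlay overlay \
    -o lowerdir=/ro,upperdir=/rw/upper,workdir=/rw/work /newroot
exec switch_root /newroot /sbin/init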

A problem with ESP8266 home automation devices
Everything seemed to work fine - a few niggles with the watchdog, which is overly sensitive on the RB3011, but I got those sorted (and the build script updated) and the device came up and successfully did the PPPoE dance to bring up external connectivity. And then I noticed that my home automation devices were having problems connecting to the mosquitto MQTT server. It turned out it was only the ESP8266 based devices that were failing, and examining the serial debug output on one of my test devices revealed it was hitting an out of memory issue (displaying E:M 280) when establishing the TLS MQTT connection. I rolled back to the Debian 10 image and set about creating a test environment to look at the ESP8266 issues. My first action was to try and reduce my RAM footprint to try and ensure there was enough spare to establish the connection. I moved a few functions that were still sitting in IRAM into flash. I cleaned up a couple of buffers that are on the stack to be more correctly sized. I tried my new image, and I didn't get the memory issue. Instead I progressed a bit further and got a watchdog reset. Doh! It was obviously something related to the TLS connection, but I couldn't easily see what the difference was; the same x509 cert was in use, it looked like the initial handshake was the same (and trying with openssl s_client looked pretty similar too). I set about instrumenting the ancient Mbed TLS used in the Espressif SDK and discovered that whatever had changed between buster and bullseye meant the ESP8266 was now trying a TLS-DHE-RSA-WITH-AES-256-CBC-SHA256 handshake instead of a TLS-RSA-WITH-AES-256-CBC-SHA256 handshake, and that was causing enough extra CPU usage that it couldn't complete in time and the watchdog kicked in. So I commented out MBEDTLS_KEY_EXCHANGE_DHE_RSA_ENABLED in the config_esp.h for mbedtls and rebuilt things. Hacky, but I'll go back to trying to improve this generally at some point.

A detour into interrupt load
Now, my testing of the RB3011 image is generally done at weekends, when I have enough time to tear down and rebuild the connection rather than doing it in the evening and having limited time to get things working again in time for work in the morning. So at the point I had an image ready to go I pulled the trigger on the line upgrade. I went with the 500M/75M option rather than the full 900M - I suspect I'd have difficulty actually getting that most of the time and 75M of upload bandwidth seems fairly substantial for now. It only took a couple of days from the order to the point the line was regraded (which involved no real downtime - just a reconnection in the night). Of course this happened just after the weekend I'd discovered the ESP8266 issue.
[graph: collectd CPU usage for RB3011]
This provided an opportunity to see just what the RB3011 could actually manage. In the configuration I had it turned out to be not much more than the 80Mb/s speeds I had previously seen. The upload jumped from a solid 20Mb/s to 75Mb/s, so I knew the regrade had actually happened. Looking at CPU utilisation clearly showed the problem; softirqs were using almost 100% of a CPU core. Now, the way the hardware is set up on the RB3011 is that there are two separate 5 port switches, each connected back to the CPU via a separate GigE interface. For various reasons I had everything on a single switch, which meant that all traffic was boomeranging in and out of the same CPU interface. The IPQ8064 has dual cores, so I thought I'd try moving the external connection to the other switch. That puts it on its own GigE CPU interface, which then allows binding the interrupts to a different CPU core. That helps; throughput to the outside world hits 140Mb/s+. Still a long way from the expected max, but proof we just need more grunt.
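Binding a NIC's interrupts to a particular core is done through /proc; a rough sketch, where the interface name and IRQ number are placeholders for whatever /proc/interrupts shows on the actual box:
# find the IRQ line used by the external interface
grep eth1 /proc/interrupts
# pin it to CPU core 1 (bitmask 0x2) so it stops contending with core 0
echo 2 > /proc/irq/47/smp_affinity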

Success
[graph: collectd CPU usage for RB5009]
Which brings us to this past weekend, when, having worked out all the other bits, I tried the squashfs root image again on the RB3011. Success! The home automation bits connected to it, the link to the outside world came up, everything seemed happy. So I double checked my bootloader bits on the RB5009, brought it down to the comms room and plugged it in instead. And, modulo my failing to update the nftables config to allow it to do forwarding, it all came up ok. Some testing with iperf3 internally got a nice 912Mb/s sustained between subnets, and some less scientific testing with wget + speedtest-cli saw speeds of over 460Mb/s to the outside world. Time from ordering the router until it was in service? Just under 8 weeks.

26 January 2022

Timo Jyrinki: Unboxing Dell XPS 13 - openSUSE Tumbleweed alongside preinstalled Ubuntu

A look at the 2021 model of Dell XPS 13 - available with Linux pre-installed
I received a new laptop for work - a Dell XPS 13. Dell has long been famous for offering certain models with pre-installed Linux as a supported option, and opting for those is nice for moving some euros/dollars from a certain PC desktop OS monopoly towards Linux desktop engineering costs. Notably Lenovo also offers Ubuntu and Fedora options on many models these days (like the Carbon X1 and P15 Gen 2).
black box

opened box

accessories and a leaflet about Linux support

laptop lifted from the box, closed

laptop with lid open

Ubuntu running

openSUSE running
Obviously a smooth, ready-to-rock Ubuntu installation is nice for most people already, but I need openSUSE, so after checking everything was fine with Ubuntu, I continued to install openSUSE Tumbleweed as a dual boot option. As I'm a funny little tinkerer, I obviously went with some special things. I wanted:
  • Ubuntu to remain as the reference supported OS on a small(ish) partition, useful to compare to if trying out new development versions of software on openSUSE and finding oddities.
  • openSUSE as the OS consuming most of the space.
  • LUKS encryption for openSUSE without LVM.
  • ext4's new fancy fast_commit feature in use during filesystem creation (see the sketch after this list).
  • As a result of all that, I ended up juggling back and forth between installation screens a couple of times (even more than shown below, and also because I forgot I wanted to use encryption the first time around).
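For the LUKS-without-LVM plus fast_commit combination, the manual preparation would look roughly like this; device names are illustrative, and fast_commit needs e2fsprogs 1.46+ and a 5.10+ kernel:
# LUKS directly on the partition, no LVM in between
cryptsetup luksFormat /dev/nvme0n1p6
cryptsetup open /dev/nvme0n1p6 cr_root
# create ext4 with the fast_commit journal optimisation enabled
mkfs.ext4 -O fast_commit /dev/mapper/cr_root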
First boots to pre-installed Ubuntu and installation of openSUSE Tumbleweed as the dual-boot option:
(if the embedded video is not shown, use a direct link)
Some notes from the openSUSE installation:
  • openSUSE installer's partition editor apparently does not support resizing or automatically installing side-by-side another Linux distribution, so I did part of the setup completely on my own.
  • Installation package download hung a couple of times, and only passed when I entered a mirror manually. On my TW I've also noticed download problems recently; there might be a problem with some mirror I need to escalate.
  • The installer doesn't very clearly show the encryption status of the target installation - it took me a couple of attempts before I even noticed the small "encrypted" column and icon (well, very small, see below), which also did not spell out the device mapper name but only the main partition name. In the end it was going to do the right thing right away and use my pre-created encrypted target partition as I wanted, but it could be a better UX. Then again I was doing my very own tweaks anyway.
  • Let's not go into the details of why I'm so old-fashioned and use ext4 :)
  • openSUSE's installer does not work well with a HiDPI screen. Funnily the tty consoles seem to be fine, with a big font.
  • At the end of the video I install the two GNOME extensions I can't live without, Dash to Dock and Sound Input & Output Device Chooser.

22 July 2021

Junichi Uekawa: Added memory to ACER Chromebox CXI3 (fizz/sion).

Added memory to ACER Chromebox CXI3 (fizz/sion). Got 2 16GB SO-DIMMs and installed them. I could not find correct information on how to open this box on the internet. The guides seem to be explaining similar boxes from HP or ASUS, which seem to have a simpler opening procedure. I had to pry out the 4 rubber pieces at the bottom, and then open the 4 screws. Then I could pry open the front and back panel by applying force where the screws were. In the front panel there are two more shorter screws that need to be removed; after taking out the two screws (that's 4+2), I could open the box into two pieces. Be careful, they are connected; I think there's an audio cable. After opening you can access the memory chips. Pull the metal piece open on the left and right hand sides of the memory chip so that it raises. Make sure the metal pieces latch closed when you insert the new memory; that should signify the memory is in place. I didn't do that at the beginning and the machine didn't boot. So far so good. No longer using zram.

7 June 2021

Russell Coker: Dell PowerEdge T320 and Linux

I recently bought a couple of PowerEdge T320 servers, so now to learn about setting them up. They are a little newer than the R710 I recently set up (which had iDRAC version 6); they have iDRAC version 7.
RAM Speed
One system has a E5-2440 CPU with 2*16G DDR3 DIMMs and a Memtest86+ speed of 13,043MB/s; the other is essentially identical but with a E5-2430 CPU and 4*16G DDR3 DIMMs and a Memtest86+ speed of 8,270MB/s. I had expected that more DIMMs means better RAM performance but this isn't what happened. I firstly upgraded the BIOS; as I expected it didn't make a difference, but it's a good thing to try first. On the E5-2430 I tried removing a DIMM after it was pointed out on Facebook that the CPU has 3 memory channels (here's a link to a great site with information on that CPU and many others [1]). When I did that I was prompted to disable advanced ECC (which treats pairs of DIMMs as a single unit for ECC, allowing correction of more than 1 bit errors) and I had to move the 3 remaining DIMMs to different slots. That improved the performance to 13,497MB/s. I then put the spare DIMM into the E5-2440 system and the performance increased to 13,793MB/s; when I installed 4 DIMMs in the E5-2440 system the performance remained at 13,793MB/s and the E5-2430 went down to 12,643MB/s. This is a good result for me; I now have the most RAM and fastest RAM configuration in the system with the fastest CPU. I'll sell the other one to someone who doesn't need so much RAM or performance (it will be really good for a small office mail server and NAS).
Firmware Update: BIOS
The first issue is updating the BIOS; unfortunately the first link I found to the Dell web site didn't have a link to download the Linux installer. It offered a Windows binary, an EFI program, and a DOS binary. I'm not about to install Windows if there is any other option and EFI is somewhat annoying, so that leaves DOS. The first Google result for installing FreeDOS advised using "unetbootin"; that didn't work at all for me (it created a USB image that the Dell BIOS didn't recognise as bootable) and even if it did it wouldn't have been a good solution. I went to the FreeDOS download page [2] and got the Lite USB zip file. That contained FD12LITE.img which I could just dd to a USB stick. I then used fdisk to create a second 32MB partition, used mkfs.fat to format it, and then copied the BIOS image file to it. I booted the USB stick and then ran the BIOS update program from drive D:. After the BIOS update this became the first system I've seen get a totally green result from spectre-meltdown-checker! I found the link to the Linux installer for the new Dell BIOS afterwards, but it was still good to play with FreeDOS.
PERC Driver
I probably didn't really need to update the PERC (PowerEdge Raid Controller) firmware as I'm just going to run it in JBOD mode. But it was easy to do, a simple bash shell script to update it. Here are the perccli commands needed to access disks; it's all hot-plug so you can insert disks and do all this without a reboot:
# show overview
perccli show
# show controller 0 details
perccli /c0 show all
# show controller 0 info with less detail
perccli /c0 show
# clear all "foreign" RAID members
perccli /c0 /fall delete
# add a vd (RAID) of level RAID0 (r0) with the drive 32:0 (enclosure:slot from above command)
perccli /c0 add vd r0 drives=32:0
The perccli /c0 show command gives the following summary of disk ("PD" in perccli terminology) information amongst other information. The EID is the enclosure, Slt is the slot (IE the bay you plug the disk into) and the DID is the disk identifier (not sure what happens if you have multiple enclosures). The allocation of device names (sda, sdb, etc) will be in order of EID:Slt or DID at boot time, and any drives added at run time will get the next letters available.
----------------------------------------------------------------------------------
EID:Slt DID State DG       Size Intf Med SED PI SeSz Model                     Sp 
----------------------------------------------------------------------------------
32:0      0 Onln   0  465.25 GB SATA SSD Y   N  512B Samsung SSD 850 EVO 500GB U  
32:1      1 Onln   1  465.25 GB SATA SSD Y   N  512B Samsung SSD 850 EVO 500GB U  
32:3      3 Onln   2   3.637 TB SATA HDD N   N  512B ST4000DM000-1F2168        U  
32:4      4 Onln   3   3.637 TB SATA HDD N   N  512B WDC WD40EURX-64WRWY0      U  
32:5      5 Onln   5 278.875 GB SAS  HDD Y   N  512B ST300MM0026               U  
32:6      6 Onln   6 558.375 GB SAS  HDD N   N  512B AL13SXL600N               U  
32:7      7 Onln   4   3.637 TB SATA HDD N   N  512B ST4000DM000-1F2168        U  
----------------------------------------------------------------------------------
The PERC controller is a MegaRAID with possibly some minor changes; there are reports of Linux MegaRAID management utilities working on it for similar functionality to perccli, although the version of the MegaRAID utilities I tried didn't work on my PERC hardware. The smartctl utility works on those disks if you tell it you have a MegaRAID controller (so obviously there's enough similarity that some MegaRAID utilities will work). Here are example smartctl commands for the first and last disks on my system. Note that the disk device node doesn't matter, as all device nodes associated with the PERC/MegaRAID are equal for smartctl.
# get model number etc on DID 0 (Samsung SSD)
smartctl -d megaraid,0 -i /dev/sda
# get all the basic information on DID 0
smartctl -d megaraid,0 -a /dev/sda
# get model number etc on DID 7 (Seagate 4TB disk)
smartctl -d megaraid,7 -i /dev/sda
# exactly the same output as the previous command
smartctl -d megaraid,7 -i /dev/sdc
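That makes it easy to sweep the whole controller. A minimal sketch, assuming the DIDs from the table above:
# quick SMART health check for every DID on the controller
for did in 0 1 3 4 5 6 7; do
    echo "=== DID $did ==="
    smartctl -d megaraid,$did -H /dev/sda
done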
I have uploaded etbemon version 1.3.5-6 to Debian, which has support for monitoring the smartctl status of MegaRAID devices and NVMe devices.

iDRAC

To update the iDRAC on Linux there's a bash script with the firmware in the same file (binary stuff at the end of a shell script). To make things a little more exciting the script insists that rpm be available (running apt install rpm fixes that for a Debian system). It also creates and runs other shell scripts which start with #!/bin/sh but depend on bash syntax, so I had to make /bin/sh a symlink to /bin/bash (a sketch of that workaround follows the idracadm7 commands below). You know you need this if you see errors like "typeset: not found" and "[: -eq: unexpected operator" and then the system reboots. Dell people, please test your scripts on dash (the Debian /bin/sh) or just specify #!/bin/bash. If the iDRAC update works it will take about 8 minutes.

Lifecycle Controller

The Lifecycle Controller is apparently for installing OS and firmware updates. I use Linux tools to update Linux and I generally don't plan to update the firmware after deployment (although I could do so from Linux if needed), so it doesn't seem to offer anything useful to me.

Setting Up iDRAC

For extra excitement I decided to try to set up the iDRAC from the Linux command-line. To install the RAC setup tool you run apt install srvadmin-idracadm7 libargtable2-0 (because srvadmin-idracadm7 doesn't have the right dependencies).
# srvadmin-idracadm7 is missing a dependency
apt install srvadmin-idracadm7 libargtable2-0
# set the IP address, netmask, and gateway for the iDRAC
idracadm7 setniccfg -s 192.168.0.2 255.255.255.0 192.168.0.1
# put my name on the front panel LCD
idracadm7 set System.LCD.UserDefinedString "Russell Coker"
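Here's a sketch of the /bin/sh workaround mentioned above for the iDRAC firmware updater; the updater file name is a placeholder:
# the updater needs rpm, and Debian's default /bin/sh (dash) breaks it
apt install rpm
ln -sf bash /bin/sh
sh ./iDRAC-firmware-update.BIN   # placeholder name for Dell's updater script
# restore the default shell afterwards
ln -sf dash /bin/sh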
Conclusion

This is a very nice deskside workstation/server. It's extremely quiet with hardly any fan noise, and the case is strong enough to contain the noise of hard drives. When running with 3*3.5 inch SATA disks and 2*10k RPM 2.5 inch SAS disks on a wooden floor it wasn't annoyingly loud. Without the SAS disks it was as quiet as you can expect any PC to be, definitely not the volume you expect from a serious server! I bought the T320 systems loaded with SAS disks, which made them quite loud; I immediately put those disks on eBay and installed SATA SSDs and hard drives, which gives me more performance and more space than the SAS disks at less cost and with almost no noise. 8*3.5 inch drive bays gives room for expansion. I currently have 2*SATA SSDs and 3*SATA disks; the SSDs are for the root filesystem (including /home) and the disks are for a separate filesystem for large files.

11 January 2021

Bastian Venthur: Dear Apple,

In the light of WhatsApp's recent move to enforce new privacy agreements onto its users, alternative messenger services like Signal are currently gaining some more momentum. While this sounds good, it is hard to believe that this will be more than a dent in WhatsApp's user base. WhatsApp is way too ubiquitous, and the whole point of using such a service for most users is to use the one that everyone else is using. Unfortunately. Convincing WhatsApp users to additionally install Signal is hard: they already have SMS for the few people that are not using WhatsApp, and expecting them to install a third app for the same purpose seems ridiculous. Android mitigates this problem a lot by allowing other apps, like Signal, to become the default SMS/MMS app on the phone. Suddenly people are able to use Signal for SMS/MMS and Signal messages transparently: Signal is smart enough to figure out if the conversation partner is using Signal and enables encryption, video calls and other features; if not, it just falls back to plain old SMS. All in the same app, very convenient for the user! I don't really get why the same thing is not possible on iOS. Apple is well known for taking things like privacy and security for its users seriously, and this seems like a low-hanging fruit. So dear Apple, wouldn't now be a good time to team up with WhatsApp alternatives like Signal to help the users make the right choice?

12 December 2020

Russell Coker: Electromagnetic Space Launch

The G-Force Wikipedia page says that humans can survive 20G horizontally ("eyes in") for up to 10 seconds and 10G for 1 minute. An accelerator running at 14G for 10 seconds (well below the level that's unsafe) gives a speed of Mach 4 and an acceleration distance of just under 7km. Launching a 100 metric ton spacecraft that way would require about 14MN of force along the launch path, plus some extra for the weight of the part that contains the magnets, which would be retrieved by parachute. The power peaks at roughly 19GW at the end of the run, but the total energy per launch is only about 26MWh; transit networks already deal with brown-outs, so the energy for a launch could plausibly be accumulated from grid capacity of that scale between launches rather than drawn all at once. The Rocky Mountains in the US peak at 4.4km above sea level, so a magnetic launch that starts 2.6km below sea level and extends the height of the Rocky Mountains would do. A speed of Mach 4 vertically would give a height of 96km if we disregard drag; that's almost 1/4 of the orbital altitude of the ISS. This seems like a more practical way to launch humans into space than a space elevator. The Mass Driver page on Wikipedia documents some of the past research on launching satellites that way, with shorter launch hardware and significantly higher G forces.
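A quick check of those numbers (a back-of-the-envelope sketch, taking g = 9.8m/s^2 and Mach 1 = 343m/s):
\[
v = at = 14g \times 10\,\mathrm{s} \approx 1372\,\mathrm{m/s} \approx \text{Mach}~4,
\qquad
d = \tfrac{1}{2}at^{2} \approx 6.9\,\mathrm{km}
\]
\[
F = ma = 10^{5}\,\mathrm{kg} \times 137.2\,\mathrm{m/s^{2}} \approx 13.7\,\mathrm{MN},
\qquad
P_{\mathrm{peak}} = Fv \approx 18.8\,\mathrm{GW},
\qquad
E = \tfrac{1}{2}mv^{2} \approx 26\,\mathrm{MWh}
\]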

12 October 2020

Russell Coker: First Attempt at Gnocchi-Statsd

I've been investigating the options for tracking system statistics to diagnose performance problems. The idea is to track all sorts of data about the system (network use, disk IO, CPU, etc) and look for correlations at times of performance problems. DataDog is pretty good for this but expensive; it's apparently based on or inspired by the Etsy Statsd. It's claimed that gnocchi-statsd is the best implementation of the protocol used by the Etsy Statsd, so I decided to install that. I use Debian/Buster for this as that's what I'm using for the hardware that runs KVM VMs. Here is what I did:
# it depends on a local MySQL database
apt -y install mariadb-server mariadb-client
# install the basic packages for gnocchi
apt -y install gnocchi-common python3-gnocchiclient gnocchi-statsd uuid
In the Debconf prompts I told it to set up a database and not to manage keystone_authtoken with debconf (because I'm not doing a full OpenStack installation). This gave a non-working configuration, as it didn't configure the MySQL database for the [indexer] section and the sqlite database that was configured didn't work for unknown reasons. I filed Debian bug #971996 about this [1]. To get this working you need to edit /etc/gnocchi/gnocchi.conf and change the url line in the [indexer] section to something like the following (where the password is taken from the [database] section):
url = mysql+pymysql://gnocchi-common:PASS@localhost:3306/gnocchidb
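After changing the indexer URL the schema needs to be created in the new database. A minimal sketch, assuming the gnocchi-upgrade tool that ships with the Gnocchi packages:
# create or upgrade the indexer schema in the database configured above
gnocchi-upgrade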
To get the statsd interface going you have to install the gnocchi-statsd package and edit /etc/gnocchi/gnocchi.conf to put a UUID in the resource_id field (the Debian package uuid is good for this; a config sketch follows the commands below). I filed Debian bug #972092 requesting that the UUID be set by default on install [2]. Here's an official page about how to operate Gnocchi [3]. The main thing I got from this was that the following commands need to be run from the command-line (I ran them as root in a VM for test purposes but would do so with minimum privs for a real deployment):
gnocchi-api
gnocchi-metricd
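A minimal sketch of that resource_id setup; the [statsd] section and field placement are my assumption, so check the comments in gnocchi.conf:
# generate a UUID (the Debian "uuid" package provides this command)
uuid
# then in /etc/gnocchi/gnocchi.conf (assumed section/field names):
#   [statsd]
#   resource_id = <the UUID printed above>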
To communicate with Gnocchi you need the gnocchi-api program running, which uses the uwsgi program to provide the web interface by default. It seems that this was written for a version of uwsgi different from the one in Buster; I filed Debian bug #972087 with a patch to make it work with uwsgi [4]. Note that I didn't get to the stage of an end-to-end test, I just got it to basically run without error. After getting gnocchi-api running (in a terminal, not as a daemon, as Debian doesn't seem to have a service file for it), I ran the client program gnocchi and gave it the status command, which failed (presumably due to the metrics daemon not running) but at least indicated that the client and the API could communicate. Then I ran gnocchi-metricd and got the following error:
2020-10-12 14:59:30,491 [9037] ERROR    gnocchi.cli.metricd: Unexpected error during processing job
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/gnocchi/cli/metricd.py", line 87, in run
    self._run_job()
  File "/usr/lib/python3/dist-packages/gnocchi/cli/metricd.py", line 248, in _run_job
    self.coord.update_capabilities(self.GROUP_ID, self.store.statistics)
  File "/usr/lib/python3/dist-packages/tooz/coordination.py", line 592, in update_capabilities
    raise tooz.NotImplemented
tooz.NotImplemented
At this stage I've had enough of Gnocchi; I'll give the Etsy Statsd a go next.

Update: Thomas has responded to this post [5]. At this stage I'm not really interested in giving Gnocchi another go. There's still the issue of the indexer database, which should somehow be different from the main database, and sqlite (the config file default) doesn't work. I expect that if I were to persist with Gnocchi I would encounter more poorly described error messages from the code, which either don't have Google hits when I search for them or have Google hits to unanswered questions from 5+ years ago. The Gnocchi systemd config files are in different packages from the programs; this confused me and I thought that there weren't any systemd service files. I had expected that installing a package with a daemon binary would also get the matching systemd unit file. The cluster features of Gnocchi are probably really good if you need that sort of thing, but if you have a small instance (e.g. a single VM server) then they aren't needed. Also, one of the original design ideas of the Etsy Statsd was that UDP was used because data could just be dropped if there was a problem; I think for many situations the same concept could apply to the entire stats service. If the other statsd programs don't do what I need then I may give Gnocchi another go.

31 August 2020

Dirk Eddelbuettel: RcppCCTZ 0.2.9: API Header Added

A new minor release 0.2.9 of RcppCCTZ is now on CRAN. RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times), and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now at least three others do, using copies in their packages, which remains less than ideal. This version adds a header file for the recently-exported three functions.

Changes in version 0.2.9 (2020-08-30)
  • Provide a header RcppCCTZ_API.h for client packages.
  • Show a simple example of parsing a YYYYMMDD HHMMSS.FFFFFF date.

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository. If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

7 April 2020

Steve Kemp: A busy few days

Over the past few weeks things have been pretty hectic. Since I'm not working at the moment I'm mostly doing childcare instead. I need a break now and again, so I've been sending our child to päiväkoti (daycare) two days a week, with him home the rest of the time. I love taking care of the child, because he's seriously awesome, but it's a hell of a lot of work when most of our usual escapes are unavailable. For example we can't go to the (awesome) Helsinki Central Library as that is closed. Instead of doing things outdoors we've been baking bread together, painting, listening to music and similar. He's a big fan of any music with drums and shouting, so we've been listening to Rammstein, The Prodigy, and as much Queen as I can slip in without him complaining ("more bang bang!"). I've also signed up for some courses at the Helsinki open university, including Devops with Docker, so perhaps I have a future career working with computers? I'm hazy. Finally I saw a fun post the other day on reddit asking about the creation of a DSL for server-setup, and I wrote a reply which basically said two things. Anyway, I had an idea of just expressing things in a simple fashion, borrowing Puppet syntax (which I guess is just Ruby hash literals). So a module to do stuff with files would just look like this:
file { name   => "This is my rule",
       target => "/tmp/blah",
       ensure => "absent" }
The next thing to do is to allow that to notify another rule, when it results in a change. So you add in:
notify => "Name of rule"
# or
notify => [ "Name of rule", "Name of another rule" ]
You could also express dependencies the other way round:
shell { name     => "Do stuff",
        command  => "wc -l /etc/passwd > /tmp/foo",
        requires => [ "Rule 1", "Rule 2" ] }
Anyway the end result is a simple syntax which allows you to do things; I wrote a file to allow me to take a clean system and configure it to run a simple golang application in an hour or so. The downside? Well the obvious one is that there's no support for setting up cron jobs, setting up docker images, MySQL usernames/passwords, etc. Just a core set of primitives. Adding new things is easy, but also an endless job. So I added the ability to run external/binary plugins stored outside the project; supporting that is simple with the syntax we have. All good. People can write modules, if they like, and they can do that in any language they like. Fun times. We'll call it marionette since it's all puppet-inspired. And that concludes this irregular update.

31 October 2017

Paul Wise: FLOSS Activities October 2017

Changes

Issues

Review

Administration
  • Debian: respond to mail debug request, redirect hardware access seeker to guest account, redirect hardware donors to porters, redirect interview seeker to DPL, reboot system with dead service
  • Debian mentors: security updates, reboot
  • Debian wiki: upgrade search db format, remove incorrect bans, whitelist email addresses, disable accounts with bouncing email, update email for accounts with bouncing email
  • Debian website: remove need for a website rebuild
  • Openmoko: restart web server, set web server process limits, install monitoring tool

Sponsors

The talloc/cmocka uploads and the remmina issue were sponsored by my employer. All other work was done on a volunteer basis.

19 May 2017

Michael Prokop: Debian stretch: changes in util-linux #newinstretch

We're coming closer to the Debian/stretch stable release, and similar to what we had with #newinwheezy and #newinjessie it's time for #newinstretch! Hideki Yamane already started the game by blogging about GitHub's Icon font, fonts-octicons, and Arturo Borrero Gonzalez wrote a nice article about nftables in Debian/stretch. One package that isn't new but whose tools are used by many of us is util-linux, providing many essential system utilities. We have util-linux v2.25.2 in Debian/jessie and in Debian/stretch there will be util-linux >=v2.29.2. There are many new options available and we also have a few new tools available.

Tools that have been taken over from other packages

New tools

New features/options
blkdiscard (discard the content of sectors on a device):
-p, --step <num>    size of the discard iterations within the offset
-z, --zeroout       zero-fill rather than discard
chrt (show or change the real-time scheduling attributes of a process):
-d, --deadline            set policy to SCHED_DEADLINE
-T, --sched-runtime <ns>  runtime parameter for DEADLINE
-P, --sched-period <ns>   period parameter for DEADLINE
-D, --sched-deadline <ns> deadline parameter for DEADLINE
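For example, SCHED_DEADLINE can be requested like this (a sketch; the nanosecond values are illustrative and ./my_task is a placeholder):
# 10ms of runtime every 100ms, with the deadline equal to the period;
# the priority argument must be 0 for SCHED_DEADLINE
chrt --deadline --sched-runtime 10000000 --sched-period 100000000 \
     --sched-deadline 100000000 0 ./my_task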
fdformat (do a low-level formatting of a floppy disk):
-f, --from <N>    start at the track N (default 0)
-t, --to <N>      stop at the track N
-r, --repair <N>  try to repair tracks failed during the verification (max N retries)
fdisk (display or manipulate a disk partition table):
-B, --protect-boot            don't erase bootbits when creating a new label
-o, --output <list>           output columns
    --bytes                   print SIZE in bytes rather than in human readable format
-w, --wipe <mode>             wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>  wipe signatures from new partitions (auto, always or never)
New available columns (for -o):
 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags
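For example (a sketch; /dev/sda is a placeholder):
# list the partition table with selected columns, sizes in bytes
fdisk --list --output Device,Start,Sectors,Size,Type --bytes /dev/sda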
findmnt (find a (mounted) filesystem):
-J, --json             use JSON output format
-M, --mountpoint <dir> the mountpoint directory
-x, --verify           verify mount table content (default is fstab)
    --verbose          print more details
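For example (a sketch):
# check the fstab for errors, with extra detail
findmnt --verify --verbose
# show whatever is mounted at /home, in JSON format
findmnt --json --mountpoint /home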
flock (manage file locks from shell scripts):
-F, --no-fork            execute command without forking
    --verbose            increase verbosity
getty (open a terminal and set its mode):
--reload               reload prompts on running agetty instances
hwclock (query or set the hardware clock):
--get            read hardware clock and print drift corrected result
--update-drift   update drift factor in /etc/adjtime (requires --set or --systohc)
ldattach (attach a line discipline to a serial line):
-c, --intro-command <string>  intro sent before ldattach
-p, --pause <seconds>         pause between intro and ldattach
logger (enter messages into the system log):
-e, --skip-empty         do not log empty lines when processing files
    --no-act             do everything except write the log
    --octet-count        use rfc6587 octet counting
-S, --size <size>        maximum size for a single message
    --rfc3164            use the obsolete BSD syslog protocol
    --rfc5424[=<snip>]   use the syslog protocol (the default for remote);
                           <snip> can be notime, or notq, and/or nohost
    --sd-id <id>         rfc5424 structured data ID
    --sd-param <data>    rfc5424 structured data name=value
    --msgid <msgid>      set rfc5424 message id field
    --socket-errors[=<on|off|auto>] print connection errors when using Unix sockets
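Putting the RFC 5424 options together (a sketch based on the example in the logger manual; the IDs and values are illustrative):
# send a structured-data message using the RFC 5424 protocol
logger --rfc5424 --sd-id zoo@123 --sd-param 'tiger="hungry"' \
       --msgid MSGID01 "the tiger is hungry"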
losetup (set up and control loop devices):
-L, --nooverlap               avoid possible conflict between devices
    --direct-io[=<on|off>]    open backing file with O_DIRECT
-J, --json                    use JSON --list output format
New available --list column:
DIO  access backing file with direct-io
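For example (a sketch; disk.img is a placeholder):
# attach a file to the first free loop device, opened with O_DIRECT
losetup --direct-io=on -f disk.img
# list loop devices, including the new DIO column, as JSON
losetup --json --list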
lsblk (list information about block devices):
-J, --json           use JSON output format
New available columns (for --output):
HOTPLUG  removable or hotplug device (usb, pcmcia, ...)
SUBSYSTEMS  de-duplicated chain of subsystems
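For example (a sketch):
# machine-readable block device listing including the new columns
lsblk --json --output NAME,SIZE,HOTPLUG,SUBSYSTEMS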
lscpu (display information about the CPU architecture):
-y, --physical          print physical instead of logical IDs
New available column:
DRAWER  logical drawer number
lslocks (list local system locks):
-J, --json             use JSON output format
-i, --noinaccessible   ignore locks without read permissions
nsenter (run a program with namespaces of other processes):
-C, --cgroup[=<file>]      enter cgroup namespace
    --preserve-credentials do not touch uids or gids
-Z, --follow-context       set SELinux context according to --target PID
rtcwake (enter a system sleep state until a specified wakeup time):
--date <timestamp>   date time of timestamp to wake
--list-modes         list available modes
sfdisk (display or manipulate a disk partition table):
New Commands:
-J, --json <dev>                  dump partition table in JSON format
-F, --list-free [<dev> ...]       list unpartitioned free areas of each device
-r, --reorder <dev>               fix partitions order (by start offset)
    --delete <dev> [<part> ...]   delete all or specified partitions
--part-label <dev> <part> [<str>] print or change partition label
--part-type <dev> <part> [<type>] print or change partition type
--part-uuid <dev> <part> [<uuid>] print or change partition uuid
--part-attrs <dev> <part> [<str>] print or change partition attributes
New Options:
-a, --append                   append partitions to existing partition table
-b, --backup                   backup partition table sectors (see -O)
    --bytes                    print SIZE in bytes rather than in human readable format
    --move-data[=<typescript>] move partition data after relocation (requires -N)
    --color[=<when>]           colorize output (auto, always or never)
                               colors are enabled by default
-N, --partno <num>             specify partition number
-n, --no-act                   do everything except write to device
    --no-tell-kernel           do not tell kernel about changes
-O, --backup-file <path>       override default backup file name
-o, --output <list>            output columns
-w, --wipe <mode>              wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>   wipe signatures from new partitions (auto, always or never)
-X, --label <name>             specify label type (dos, gpt, ...)
-Y, --label-nested <name>      specify nested label type (dos, bsd)
Available columns (for -o):
 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start  End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags
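For example (a sketch; /dev/sda is a placeholder):
# dump the partition table in JSON format
sfdisk --json /dev/sda
# show unpartitioned free areas
sfdisk --list-free /dev/sda
# print the label of partition 1 (GPT)
sfdisk --part-label /dev/sda 1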
swapon (enable devices and files for paging and swapping):
-o, --options <list>     comma-separated list of swap options
New available columns (for --show):
UUID   swap uuid
LABEL  swap label
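For example (a sketch):
# show active swap areas with the new UUID and LABEL columns
swapon --show=NAME,SIZE,USED,UUID,LABEL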
unshare (run a program with some namespaces unshared from the parent):
-C, --cgroup[=<file>]                              unshare cgroup namespace
    --propagation slave|shared|private|unchanged   modify mount propagation in mount namespace
-s, --setgroups allow|deny                         control the setgroups syscall in user namespaces
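For example (a sketch):
# a shell in new cgroup and mount namespaces, with mount events
# kept private to the new namespaces
unshare --cgroup --mount --propagation private bash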
Deprecated / removed options

sfdisk (display or manipulate a disk partition table):
-c, --id                  change or print partition Id
    --change-id           change Id
    --print-id            print Id
-C, --cylinders <number>  set the number of cylinders to use
-H, --heads <number>      set the number of heads to use
-S, --sectors <number>    set the number of sectors to use
-G, --show-pt-geometry    deprecated, alias to --show-geometry
-L, --Linux               deprecated, only for backward compatibility
-u, --unit S              deprecated, only sector unit is supported
