Search Results: "etbe"

12 April 2021

Russell Coker: Riverdale

I've been watching the show Riverdale on Netflix recently. It's an interesting modern take on the Archie comics. Having watched Josie and the Pussycats in Outer Space when I was younger I was anticipating something aimed towards a similar audience. As solving mysteries and crimes was apparently a major theme of the show I anticipated something along similar lines to Scooby Doo: some suspense and some spooky things, but then a happy ending where criminals get arrested and no-one gets hurt or killed, while the vast majority of people are nice. Instead the first episode has a teen being murdered and Ms Grundy being obsessed with 15yo boys and sleeping with Archie (who's supposed to be 15 but played by a 20yo actor). Everyone in the show has some dark secret. The filming has a dark theme; the sky is usually overcast and it's generally gloomy. This is a significant contrast to Veronica Mars, which has some similarities in having a young cast, a sassy female sleuth, and some similar plot elements. Veronica Mars has a bright theme and a significant comedy element in spite of dealing with some dark issues (murder, rape, child sex abuse, and more). But Riverdale is just dark. Anyone who watches this with their kids expecting something like Scooby Doo is in for a big surprise.

There are lots of interesting stylistic elements in the show. Lots of clothing and uniform designs that seem to date from the 1940s. It seems like some alternate universe where kids have smartphones and laptops while dressing in the style of the 1940s. One thing that annoyed me was construction workers using tools like sledge-hammers instead of excavators. A society that has smartphones but no earth-moving equipment isn't plausible. On the upside there is a racial mix in the show that more accurately reflects American society than the original Archie comics, and homophobia is much less common than in most parts of our society.
For both race issues and gay/lesbian issues the show treats them in an accurate way (portraying some bigotry) while the main characters aren't racist or homophobic. I think it's generally an OK show and recommend it to people who want a dark show. It's a good show to watch while doing something on a laptop so you can check Wikipedia for the references to 1940s stuff (like when bikinis were invented). I'm halfway through season 3, which isn't as good as the first 2; I don't know if it will get better later in the season or whether I should have stopped after season 2. I don't usually review fiction, but the interesting aesthetics of the show made it deserve a review.

Russell Coker: Storage Trends 2021

The Viability of Small Disks

Less than a year ago I wrote a blog post about storage trends [1]. My main point in that post was that disks smaller than 2TB weren't viable then and that 2TB disks wouldn't be economically viable in the near future. Now MSY has 2TB disks for $72 and 2TB SSDs for $245, saving $173 if you get a hard drive (compared to saving $240 10 months ago). Given the difference in performance and noise, 2TB hard drives won't be worth using for most applications nowadays.

NVMe vs SSD

Last year NVMe prices were very comparable to SSD prices; I was hoping that trend would continue and SSDs would go away. Now for sizes 1TB and smaller NVMe and SSD prices are very similar, but for 2TB the NVMe prices are twice those of SSDs, presumably partly due to poor demand for 2TB NVMe. There are also no NVMe devices larger than 2TB on sale at MSY (a store which caters to home use, not special server equipment), but SSDs go up to 8TB. It seems that NVMe is only really suitable for workstation storage and for cache etc on a server. So SATA SSDs will be around for a while.

Small Servers

There are a range of low end servers which support a limited number of disks. Dell has 2-disk servers and 4-disk servers. If one of those had 8TB SSDs you could have 8TB of RAID-1 or 24TB of RAID-Z storage in a low end server. That covers the vast majority of servers (small business or workgroup servers tend to have less than 8TB of storage).

Larger Servers

Anandtech has an article on Seagate's roadmap to 120TB disks [2]. They currently sell 20TB disks using HAMR technology. Currently the biggest disks that MSY sells are 10TB for $395, which was also the biggest disk they were selling last year. Last year MSY only sold SSDs up to 2TB in size (larger ones were available from other companies at much higher prices); now they sell 8TB SSDs for $949 (a 4* capacity increase in less than a year).
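The usable-capacity arithmetic for the 4-disk small-server example can be sketched as follows (the 8TB drive size and disk counts come from the figures above; the formula is just single-parity RAID-Z arithmetic):

```shell
# Usable capacity for a 4-disk server with 8TB SSDs
disks=4
size_tb=8
# RAID-1 pair: capacity of a single disk
echo "RAID-1: ${size_tb}TB"
# RAID-Z (single parity): (disks - 1) * disk size
echo "RAID-Z: $(( (disks - 1) * size_tb ))TB"
```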
Seagate is planning 30TB disks for 2023; if SSDs continue to increase in capacity by 4* per year we could have 128TB SSDs in 2023. If you needed a server with 100TB of storage then having 2 or 3 SSDs in a RAID array would be much easier to manage and faster than 4*30TB disks in an array. When you have a server with many disks you can expect to have more disk failures due to vibration. One time I built a server with 18 disks and took disks from 2 smaller servers that had 4 and 5 disks. The 9 disks which had been working reliably for years started having problems within weeks of running in the bigger server. This is one of the many reasons for paying extra for SSD storage.

Seagate is apparently planning 50TB disks for 2026 and 100TB disks for 2030. If that's the best they can do then SSD vendors should be able to sell larger products sooner at prices that are competitive. Matching hard drive prices is not required; getting to less than 4* the price should be enough for most customers.

The Anandtech article is worth reading. It mentions some interesting features that Seagate are developing, such as having 2 actuators (which they call Mach.2) so the drive can access 2 different tracks at the same time. That can double the performance of a disk, but that doesn't change things much when SSDs are more than 100* faster. Presumably the Mach.2 disks will be SAS and incredibly expensive while providing significantly less performance than affordable SATA SSDs.

Computer Cases

In my last post I speculated on the appearance of smaller cases designed to not have DVD drives or 3.5" hard drives. Such cases still haven't appeared, apart from special purpose machines like the NUC that were available last year. It would be nice if we could get a new industry standard for smaller power supplies. Currently power supplies are expected to be almost 5 inches wide (due to the expectation of a 5.25" DVD drive mounted horizontally).
We need some industry standards for smaller PCs that aren't like the NUC; the NUC is very nice, but most people who build their own PC need more space than that. I still think that planning on USB DVD drives is the right way to go. I've got 4 PCs in my home that are regularly used, and CDs and DVDs are used so rarely that sharing a single DVD drive among all 4 wouldn't be a problem.

Conclusion

I'm tempted to get a couple of 4TB SSDs for my home server, which cost $487 each; it currently has 2*500G SSDs and 3*4TB disks. I would have to remove some unused files but that's probably not too hard to do as I have lots of old backups etc on there. Another possibility is to use 2*4TB SSDs for most stuff and 2*4TB disks for backups.

I'm recommending that all my clients only use SSDs for their storage. I only have one client with enough storage that disks are the only option (100TB of storage), but they moved all the functions of that server to AWS and use S3 for the storage. Now I don't have any clients doing anything with storage that can't be done in a better way on SSD for a price difference that's easy for them to afford.

Affordable SSDs also make RAID-1 in workstations more viable. 2 disks in a PC is noisy if you have an office full of them and produces enough waste heat to be a reliability issue (most people don't cool their offices adequately on weekends). 2 SSDs in a PC is no problem at all. As 500G SSDs are available for $73 it's not a significant cost to install 2 of them in every PC in the office (more cost for my time than hardware). I generally won't recommend that hard drives be replaced with SSDs in systems that are working well. But if a machine runs out of space then replacing its storage with SSDs in a RAID-1 is a good choice.

Moore's law might cover SSDs, but it definitely doesn't cover hard drives. Hard drives have fallen way behind the developments of most other parts of computers over the last 30 years; hopefully they will go away soon.

1 April 2021

Russell Coker: Censoring Images

A client asked me to develop a system for censoring images from an automatic camera. The situation is that we have a camera taking regular photos from a fixed location which includes part of someone else's property. So my client made a JPEG with some black rectangles over the sections that need to be covered.

The first thing I needed to do was convert the JPEG to a PNG with transparency for the sections that aren't to be covered. To convert it I loaded the JPEG in the GIMP and went to the Layer->Transparency->Add Alpha Channel menu to enable the Alpha channel. Then I selected the Bucket Fill tool, used Mode "Erase" and Fill by "Composite", and clicked on the background (the part of the JPEG that was white) to make it transparent. Then I exported it to PNG. If anyone knows of an easier way to convert the file then please let me know. It would be nice if there was a command-line program I could run to convert a specified color (default white) to transparent. I say this because I can imagine my client going through a dozen iterations of an overlay file that doesn't quite fit.

To censor the image I ran the composite command from ImageMagick. The command I used was "composite -gravity center overlay.png in.jpg out.jpg". If anyone knows a better way of doing this then please let me know. The platform I'm using is an ARM926EJ-S rev 5 (v5l) which takes 8 minutes of CPU time to convert a single JPEG at full DSLR resolution (4 megapixels). It also required enabling swap on an SD card to avoid running out of RAM, and running "systemctl disable tmp.mount" to stop using tmpfs for /tmp as the system only has 256M of RAM.
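For the overlay conversion step, ImageMagick can probably do the whole thing from the command line; a sketch, assuming the overlay uses plain white for the areas that should become transparent (the -fuzz value is a guess to tolerate JPEG compression artifacts):

```shell
# Make (near-)white pixels transparent and save as PNG
convert overlay.jpg -fuzz 10% -transparent white overlay.png

# Then apply the censoring overlay as described in the post
composite -gravity center overlay.png in.jpg out.jpg
```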

27 February 2021

Russell Coker: Links February 2021

Elasticsearch gets a new license to deal with AWS not paying them [1]. Of course AWS will fork the products in question. We need some anti-trust action against Amazon.

Big Think has an interesting article about what appears to be ritualistic behaviour in chimpanzees [2]. The next issue is: if they are developing a stone-age culture, does that mean we should treat them differently from other less developed animals?

Last Week in AWS has an informative article about Parler's new serverless architecture [3]. They explain why it's not easy to move away from a cloud platform even for a service that's designed to not be dependent on it. The moral of the story is that running a service so horrible that none of the major cloud providers will touch it doesn't scale.

Patheos has an insightful article about people who spread the most easily disproved lies for their religion [4]. A lot of political commentary nowadays is like that.

Indi Samarajiva wrote an insightful article comparing terrorism in Sri Lanka with the right-wing terrorism in the US [5]. The conclusion is that it's only just starting in the US.

Bellingcat has an interesting article about the FSB attempt to murder Russian presidential candidate Alexey Navalny [6].

Russ Allbery wrote an interesting review of Anti-Social, a book about the work of an anti-social behavior officer in the UK [7]. The book (and Russ's review) has some good insights into how crime can be reduced. Of course a large part of that is allowing people who want to use drugs to do so in an affordable way.

Informative post from Electrical Engineering Materials about the difference between KVA and KW [8]. KVA is bigger than KW, sometimes a lot bigger.

Arstechnica has an interesting but not surprising article about a supply chain attack on software development [9]. It exploits the way npm and similar tools resolve dependencies to make them download hostile code.
There is no possibility of automatic downloads being OK for security unless they are from known good sites that don't allow random people to upload. Any sort of system that allows automatic downloads from sites like the Node or Python repositories, Github, etc is ripe for abuse. I think the correct solution is to have dependencies installed manually or automatically from a distribution like Debian, Ubuntu, Fedora, etc where there have been checks on the source of the source.

Devon Price wrote an insightful Medium article Laziness Does Not Exist about the psychological factors which can lead to poor results that many people interpret as laziness [10]. Everyone who supervises other people's work should read this.

21 January 2021

Russell Coker: Links January 2021

Krebs on Security has an informative article about web notifications and how they are being used for spamming and promoting malware [1]. He also includes links for how to permanently disable them. If nothing else, clicking "no" on each new site that wants to send notifications is annoying.

Michael Stapelberg wrote an insightful post about inefficiencies in the Debian development processes [2]. While I agree with most of his assessment of Debian issues I am not going to decrease my involvement in Debian. Of the issues he mentions, the 2 that seem to have the best effort-to-reward ratio are improvements to mailing list archives (to ideally make it practical to post to lists without subscribing and read responses in the archives) and the issue of forgetting all the complexities of the development process, which can be alleviated by better Wiki pages. In my Debian work I've contributed more to the Wiki in recent times, but not nearly as much as I should.

Jacobin has an insightful article Ending Poverty in the United States Would Actually Be Pretty Easy [3].

Mark Brown wrote an interesting blog post about the Rust programming language [4]. He links to a couple of longer blog posts about it. Rust has some great features and I've been meaning to learn it.

Scientific American has an informative article about research on the spread of fake news and memes [5]. Something to consider when using social media.

Bruce Schneier wrote an insightful blog post on whether there should be limits on persuasive technology [6].

Jonathan Dowland wrote an interesting blog post about git rebasing and lab books [7]. I think it's an interesting thought experiment to compare the process of developing code worthy of being committed to a master branch of a VCS to the process of developing a PhD thesis.

CBS has a disturbing article about the effect of Covid19 on people's lungs [8].
Apparently it usually does more lung damage than long-term smoking, and even 70%+ of people who don't have symptoms of the disease get significant lung damage. People who live in heavily affected countries like the US now have to worry that they might have had the disease and got lung damage without knowing it.

Russ Allbery wrote an interesting review of the book Because Internet about modern linguistics [9]. The topic is interesting and I might read that book at some future time (I have many good books I want to read).

Jonathan Carter wrote an interesting blog post about CentOS Stream and why using a totally free OS like Debian is going to be a better option for most users [10].

Linus has slammed Intel for using ECC support as a way of segmenting the market between server and desktop to maximise profits [11]. It would be nice if a company made a line of Ryzen systems with ECC RAM support, but most manufacturers seem to be in on the market segmentation scam.

Russ Allbery wrote an interesting review of the book Can't Even about millennials as the burnout generation and the blame that corporate culture deserves for this [12].

12 January 2021

Russell Coker: PSI and Cgroup2

In the comments on my post about Load Average Monitoring [1] an anonymous person recommended that I investigate PSI. As an aside, why do I get so many great comments anonymously? Don't people want to get credit for having good ideas and learning about new technology before others?

PSI is the Pressure Stall Information subsystem for Linux that is included in kernels 4.20 and above; if you want to use it in Debian then you need a kernel from Testing or Unstable (Buster has kernel 4.19). The place to start reading about PSI is the main Facebook page about it; it was originally developed at Facebook [2].

I am a little confused by the actual numbers I get out of PSI. While for the load average I can often see where the numbers come from (e.g. with 2 processes each taking 100% of a core the load average will be about 2), it's difficult to work out where the PSI numbers come from. For my own use I decided to treat them as unscaled numbers that just indicate problems (a higher number is worse) and not worry too much about what the numbers really mean.

With the cgroup2 interface, which is supported by the version of systemd in Testing (and which has been included in Debian backports for Buster), you get PSI files for each cgroup. I've just uploaded version 1.3.5-2 of etbemon (package mon) to Debian/Unstable which displays the cgroups with PSI numbers greater than 0.5% when the load average test fails.
System CPU Pressure: avg10=0.87 avg60=0.99 avg300=1.00 total=20556310510
/system.slice avg10=0.86 avg60=0.92 avg300=0.97 total=18238772699
/system.slice/system-tor.slice avg10=0.85 avg60=0.69 avg300=0.60 total=11996599996
/system.slice/system-tor.slice/tor@default.service avg10=0.83 avg60=0.69 avg300=0.59 total=5358485146
System IO Pressure: avg10=18.30 avg60=35.85 avg300=42.85 total=310383148314
 full avg10=13.95 avg60=27.72 avg300=33.60 total=216001337513
/system.slice avg10=2.78 avg60=3.86 avg300=5.74 total=51574347007
/system.slice full avg10=1.87 avg60=2.87 avg300=4.36 total=35513103577
/system.slice/mariadb.service avg10=1.33 avg60=3.07 avg300=3.68 total=2559016514
/system.slice/mariadb.service full avg10=1.29 avg60=3.01 avg300=3.61 total=2508485595
/system.slice/matrix-synapse.service avg10=2.74 avg60=3.92 avg300=4.95 total=20466738903
/system.slice/matrix-synapse.service full avg10=2.74 avg60=3.92 avg300=4.95 total=20435187166
Above is an extract from the output of the loadaverage check. It shows that tor is a major user of CPU time (the VM runs a Tor relay node and has close to 100% of one core devoted to that task). It also shows that Mariadb and Matrix are the main users of disk IO. When I installed Matrix the Debian package told me that using SQLite would give lower performance than MySQL, but that didn't seem like a big deal as the server only has a few users. Maybe I should move Matrix to the Mariadb instance to improve overall system performance.

So far I have not written any code to display the memory PSI files. I don't have a lack of RAM on the systems I run at the moment and don't have a good test case for this. I welcome patches from people who have the ability to test this and get some benefit from it. We are probably about 6 months away from a new release of Debian and this is probably the last thing I need to do to make etbemon ready for that.
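Each pressure line above follows the same avgN=... format, so pulling a single field out with standard tools is straightforward. A sketch (the sample line is copied from the output above; the /proc and /sys paths are the standard PSI file locations):

```shell
# A line in the PSI format shown above
line="avg10=0.87 avg60=0.99 avg300=1.00 total=20556310510"
# Extract the avg10 value with sed
avg10=$(printf '%s\n' "$line" | sed -n 's/.*avg10=\([0-9.]*\).*/\1/p')
echo "$avg10"    # prints 0.87

# On a live system the system-wide numbers come from:
#   cat /proc/pressure/cpu
# and the per-cgroup numbers from files like:
#   cat /sys/fs/cgroup/system.slice/cpu.pressure
```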

Russell Coker: RISC-V and Qemu

RISC-V is the latest RISC architecture that's become popular. It is the 5th RISC architecture from the University of California Berkeley. It seems to be a competitor to ARM due to not having license fees or restrictions on alterations to the architecture (something you have to pay extra for when using ARM). RISC-V seems to be the most popular architecture to implement in FPGA.

When I first tried to run RISC-V under QEMU it didn't work, which was probably due to running Debian/Unstable on my QEMU/KVM system and there being QEMU bugs in Unstable at the time. I have just tried it again and got it working.

The Debian Wiki page about RISC-V is pretty good [1]. The instructions there got it going for me. One thing I wasted some time on before reading that page was trying to get a netinst CD image, which is what I usually do for setting up a VM. Apparently there isn't RISC-V hardware that boots from a CD/DVD, so there isn't a Debian netinst CD image. But debootstrap can install directly from the Debian web server (something I've never wanted to do in the past) and that gave me a successful installation. Here are the commands I used to set up the base image:
apt-get install debootstrap qemu-user-static binfmt-support debian-ports-archive-keyring
debootstrap --arch=riscv64 --keyring /usr/share/keyrings/debian-ports-archive-keyring.gpg --include=debian-ports-archive-keyring unstable /mnt/tmp http://deb.debian.org/debian-ports
I first tried running RISC-V Qemu on Buster, but even ls didn't work properly and the installation failed.
chroot /mnt/tmp bin/bash
# ls -ld .
/usr/bin/ls: cannot access '.': Function not implemented
When I ran it on Unstable, ls worked but strace didn't work in a chroot; this gave enough functionality to complete the installation.
chroot /mnt/tmp bin/bash
# strace ls -l
/usr/bin/strace: test_ptrace_get_syscall_info: PTRACE_TRACEME: Function not implemented
/usr/bin/strace: ptrace(PTRACE_TRACEME, ...): Function not implemented
/usr/bin/strace: PTRACE_SETOPTIONS: Function not implemented
/usr/bin/strace: detach: waitpid(1602629): No child processes
/usr/bin/strace: Process 1602629 detached
When running the VM the operation was noticeably slower than the emulation of PPC64 and S/390x, which both ran at an apparently normal speed. When running on a server with an equivalent speed CPU, an ssh login was obviously slower due to the CPU time taken for encryption; an ssh connection from a system on the same LAN took 6 seconds to connect. I presume that because RISC-V is a newer architecture there hasn't been as much effort made on optimising the Qemu emulation, and that a future version of Qemu will be faster. But I don't think that Debian/Bullseye will give good Qemu performance for RISC-V; probably more changes are needed than can happen before the freeze. Maybe a version of Qemu with better RISC-V performance can be uploaded to backports some time after Bullseye is released. Here's the Qemu command I use to run the RISC-V emulation:
qemu-system-riscv64 -machine virt -device virtio-blk-device,drive=hd0 -drive file=/vmstore/riscv,format=raw,id=hd0 -device virtio-blk-device,drive=hd1 -drive file=/vmswap/riscv,format=raw,id=hd1 -m 1024 -kernel /boot/riscv/vmlinux-5.10.0-1-riscv64 -initrd /boot/riscv/initrd.img-5.10.0-1-riscv64 -nographic -append net.ifnames=0 noresume security=selinux root=/dev/vda ro -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-device,rng=rng0 -device virtio-net-device,netdev=net0,mac=02:02:00:00:01:03 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper
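The raw disk images referenced by that command can be created with standard tools before the debootstrap step; a sketch (only the /vmstore/riscv and /vmswap/riscv paths come from the command above, the sizes and the /mnt/tmp mount point are assumptions, and the mkfs/mkswap/mount steps need root):

```shell
# Create sparse raw disk images for root and swap
truncate -s 20G /vmstore/riscv
truncate -s 1G /vmswap/riscv
# Put a filesystem and swap signature on them
mkfs.ext4 /vmstore/riscv
mkswap /vmswap/riscv
# Loop-mount the root image so debootstrap can install into it
mount -o loop /vmstore/riscv /mnt/tmp
```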
Currently the program /usr/sbin/sefcontext_compile from the selinux-utils package needs execmem access on RISC-V while it doesn't on any other architecture I have tested. I don't know why, and support for debugging such things seems to be in the early stages of development; for example the execstack program doesn't work on RISC-V now. RISC-V emulation in Unstable seems adequate for people who are serious about RISC-V development. But if you want to just try a different architecture then PPC64 and S/390 will work better.

7 January 2021

Russell Coker: Monopoly the Game

The Smithsonian Mag has an informative article about the history of the game Monopoly [1]. The main point about Monopoly teaching about the problems of inequality is one I was already aware of, but there are some aspects of the history that I learned from the article. Here's an article about using a modified version of Monopoly to teach Sociology [2]. Maria Paino and Jeffrey Chin wrote an interesting paper about using Monopoly with revised rules to teach Sociology [3]. They publish the rules, which are interesting and seem good for a class. I think it would be good to have some new games which can teach about class differences. Maybe have an Escape From Poverty game where you have choices that include drug dealing to try and improve your situation, or a cooperative game where people try to create a small business. While Monopoly can be instructive, it's based on the economic circumstances of the past. The vast majority of rich people aren't rich from land ownership.

5 January 2021

Russell Coker: Planet Linux Australia

Linux Australia have decided to cease running the Planet installation on planet.linux.org.au. I believe that blogging is still useful and a web page with a feed of Australian Linux blogs is a useful service. So I have started running a new Planet Linux Australia on https://planet.luv.asn.au/. There has been discussion about getting some sort of redirection from the old Linux Australia page, but they don't seem able to do that.

If you have a blog that has a reasonable portion of Linux and FOSS content and is based in or connected to Australia then email me on russell at coker.com.au to get it added.

When I started running this I took the old list of feeds from planet.linux.org.au and deleted all blogs that didn't have posts for 5 years and all blogs that were broken and had no recent posts. I emailed people who had recently broken blogs so they could fix them. It seems that many people who run personal blogs aren't bothered by a bit of downtime.

As an aside, I would be happy to set up the monitoring system I use to monitor any personal web site of a Linux person and notify them by Jabber or email of an outage. I could set it to not alert for a specified period (10 mins, 1 hour, whatever you like) so it doesn't alert needlessly on routine sysadmin work, and I could have it check SSL certificate validity as well as the basic page header.

Russell Coker: Weather and Boinc

I just wrote a Perl script to look at the Australian Bureau of Meteorology pages to find the current temperature in an area and then adjust BOINC settings accordingly. The Perl script (in this post after the break, which shouldn't be in the RSS feed) takes the URL of a Bureau of Meteorology observation point as ARGV[0] and parses that to find the current (within the last hour) temperature. Then successive command line arguments are of the form 24:100 and 30:50, which indicate that below 24C 100% of CPU cores should be used and below 30C 50% of CPU cores should be used. In warm weather having a couple of workstations in a room running BOINC (or any other CPU intensive task) will increase the temperature and also make excessive noise from cooling fans.

To change the number of CPU cores used, the script changes /etc/boinc-client/global_prefs_override.xml and then tells BOINC to reload that config file. This code is a little ugly (it doesn't properly parse XML, it just replaces a line of text) and could fail on a valid configuration file that wasn't produced by the current BOINC code. The parsing of the BoM page is a little ugly too; it relies on the HTML code in the BoM page, and they could make a page that looks identical which breaks the parsing, or even a page that contains the same data but looks different. It would be nice if the BoM published some APIs for getting the weather. One thing that would be good is TXT records in the DNS. DNS supports caching with a specified lifetime and is designed for high throughput in aggregate. If you had a million IoT devices polling the current temperature and forecasts every minute via DNS the people running the servers wouldn't even notice the load, while a million devices polling a web based API would be a significant load. As an aside I recommend playing nice and only running such a script every 30 minutes; the BoM page seems to be updated on the half hour, so I have my cron jobs running at 5 and 35 minutes past the hour.
If this code works for you then that's great. If it merely acts as an inspiration for developing your own code then that's great too! BOINC users outside Australia could replace the code for getting meteorological data (or even interface to a digital thermometer). Australians who use other CPU intensive batch jobs could take the BoM parsing code and replace the BOINC related code. If you write scripts inspired by this, please blog about it and comment here with a link to your blog post.
#!/usr/bin/perl
use strict;
use Sys::Syslog;

# St Kilda Harbour RMYS
# http://www.bom.gov.au/products/IDV60901/IDV60901.95864.shtml
my $URL = $ARGV[0];

open(IN, "wget -o /dev/null -O - $URL|") or die "Can't get $URL";
# skip to the start of the observation table
while(<IN>)
{
  if($_ =~ /tr class=.rowleftcolumn/)
  {
    last;
  }
}

# extract the cell contents for table header "t1-$_[1]" from a row of HTML
sub get_data
{
  if(not $_[0] =~ /headers=.t1-$_[1]/)
  {
    return undef;
  }
  $_[0] =~ s/^.*headers=.t1-$_[1]..//;
  $_[0] =~ s/<.td.*$//;
  return $_[0];
}

my @datetime;
my $cur_temp = -100;
while(<IN>)
{
  chomp;
  if($_ =~ /^<.tr>$/)
  {
    last;
  }
  my $res;
  if($res = get_data($_, "datetime"))
  {
    @datetime = split(/\//, $res);
  }
  elsif($res = get_data($_, "tmp"))
  {
    $cur_temp = $res;
  }
}
close(IN);
if($#datetime != 1 or $cur_temp == -100)
{
  die "Can't parse BOM data";
}

my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime();
if($mday - $datetime[0] > 1 or ($datetime[0] > $mday and $mday != 1))
{
  die "Date wrong\n";
}

# convert the observation time to minutes since midnight
my $mins;
my @timearr = split(/:/, $datetime[1]);
$mins = $timearr[0] * 60 + $timearr[1];
if($timearr[1] =~ /pm/)
{
  $mins += 720;
}
if($mday != $datetime[0])
{
  $mins += 1440;
}
if($mins + 60 < $hour * 60 + $min)
{
  die "page outdated\n";
}

# build a hash of temperature thresholds to CPU percentages from the args
my %temp_hash;
foreach ( @ARGV[1..$#ARGV] )
{
  my @tmparr = split(/:/, $_);
  $temp_hash{$tmparr[0]} = $tmparr[1];
}
my @temp_list = sort(keys(%temp_hash));
my $percent = 0;
my $i;
for($i = $#temp_list; $i >= 0 and $temp_list[$i] > $cur_temp; $i--)
{
  $percent = $temp_hash{$temp_list[$i]};
}

my $prefs = "/etc/boinc-client/global_prefs_override.xml";
open(IN, "<$prefs") or die "Can't read $prefs";
my @prefs_contents;
while(<IN>)
{
  push(@prefs_contents, $_);
}
close(IN);

openlog("boincmgr-cron", "", "daemon");
my @cpus_pct = grep(/max_ncpus_pct/, @prefs_contents);
my $cpus_line = $cpus_pct[0];
$cpus_line =~ s/..max_ncpus_pct.$//;
$cpus_line =~ s/^.*max_ncpus_pct.//;
if($cpus_line == $percent)
{
  syslog("info", "Temp $cur_temp" . "C, already set to $percent");
  exit 0;
}

open(OUT, ">$prefs.new") or die "Can't write $prefs.new";
for($i = 0; $i <= $#prefs_contents; $i++)
{
  if($prefs_contents[$i] =~ /max_ncpus_pct/)
  {
    print OUT "   <max_ncpus_pct>$percent.000000</max_ncpus_pct>\n";
  }
  else
  {
    print OUT $prefs_contents[$i];
  }
}
close(OUT);
rename "$prefs.new", "$prefs" or die "can't rename";
system("boinccmd --read_global_prefs_override");
syslog("info", "Temp $cur_temp" . "C, set percentage to $percent");
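A cron setup matching the schedule described above might look like this (the script path is an example; the station URL and the 24:100/30:50 thresholds are the ones from the post):

```shell
# /etc/cron.d/boinc-temp -- run at 5 and 35 minutes past the hour
5,35 * * * * root /usr/local/bin/bom-boinc.pl http://www.bom.gov.au/products/IDV60901/IDV60901.95864.shtml 24:100 30:50
```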

21 December 2020

Russell Coker: MPV vs Mplayer

After writing my post about VDPAU in Debian [1] I received two great comments from anonymous people. One pointed out that I should be using VA-API (also known as VAAPI) on my Intel based Thinkpad and gave a reference to an Arch Linux Wiki page; as usual the Arch Linux Wiki is awesome and I learnt a lot of great stuff there. I also found the Debian Wiki page on Hardware Video Acceleration [2] which has some good information (unfortunately I had already found all that out through more difficult methods first; I should read the Debian Wiki more often). It seems that mplayer doesn't support VAAPI. The other comment suggested that I try the mpv fork of Mplayer, which does support VAAPI, but that feature is disabled by default in Debian.

I did a number of tests on playing different videos on my laptop running Debian/Buster with Intel video and my workstation running Debian/Unstable with ATI video. The first thing I noticed is that mpv was unable to use VAAPI on my laptop and that VDPAU won't decode VP9 videos on my workstation, and most 4K videos from YouTube seem to be VP9. So in most cases hardware decoding isn't going to help me. The Wikipedia page about the Unified Video Decoder [3] shows that only VCN (Video Core Next) supports VP9 decoding, while my R7-260x video card [4] has version 4.2 of the Unified Video Decoder which doesn't support VP9, H.265, or JPEG. Basically I need a new high-end video card to get VP9 decoding and that's not something I'm interested in buying now (I only recently bought this video card to do 4K at 60Hz).

The next thing I noticed is that, for my combination of hardware and software at least, mpv tends to take about 2/3 the CPU time that mplayer does on every video I tested. So it seems that using mpv will save me 1/3 of the power and heat from playing videos on my laptop, and save me 1/3 of the CPU power on my workstation in the worst case while sometimes saving significantly more than that.
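For anyone wanting to run similar tests, mpv can be told which hardware decoding API to try; a sketch (the file name is a placeholder, and --hwdec falls back to software decoding if the hardware or driver can't handle the codec):

```shell
# Try VA-API hardware decoding; mpv reports in its terminal output
# whether hardware decoding was actually used
mpv --hwdec=vaapi video.mp4

# Try VDPAU instead
mpv --hwdec=vdpau video.mp4

# List the codecs the VA-API driver supports (vainfo package on Debian)
vainfo
```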
Conclusion: to summarise quite a bit of time spent experimenting with video playing and testing things: I shouldn't think too much about hardware decoding until VP9-capable hardware is available (which is years away for me). But mpv provides some real benefits right now on the same hardware; I'm not sure why.
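As a sketch of how the CPU time comparison can be reproduced: time each player with video and audio output disabled so only decoding is measured. The helper below is my illustration, not a tool from either player; the stand-in command at the end just makes it runnable anywhere, and you would substitute the commented-out player invocations with a real video file.

```python
import subprocess, os

def cpu_seconds(cmd):
    """Run cmd and return (user, system) CPU seconds it consumed.
    os.times() accumulates the CPU time of reaped child processes,
    and subprocess.run() waits for the child, so the difference is
    the child's own CPU usage."""
    before = os.times()
    subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    after = os.times()
    return (after.children_user - before.children_user,
            after.children_system - before.children_system)

# Intended usage (substitute a real video file for VIDEO):
#   cpu_seconds(["mplayer", "-vo", "null", "-ao", "null", VIDEO])
#   cpu_seconds(["mpv", "--vo=null", "--ao=null", VIDEO])

# Cheap stand-in command so the sketch runs anywhere:
user, system = cpu_seconds(["seq", "1", "100000"])
print(f"{user:.3f}s user, {system:.3f}s system")
```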

20 December 2020

Russell Coker: SMART and SSDs

The Hetzner server that hosts my blog among other things has 2*256G SSDs for the root filesystem. The smartctl and smartd programs report both SSDs as in FAILING_NOW state for the Wear_Leveling_Count attribute. I don't have a lot of faith in SMART. I run it because it would be stupid not to consider data about possible drive problems, but I don't feel obliged to immediately replace disks with SMART errors when it's not convenient (if I ordered a new server and got those results I would demand replacement before going live). Doing any sort of SMART scan will cause a service outage, and replacing devices means 2 outages, 1 for each device. I noticed the SMART errors 2 weeks ago, so I guess that the SMART claims that both of the drives are likely to fail within 24 hours have been disproved. The system is running BTRFS so I know there aren't any unseen data corruption issues, and it uses BTRFS RAID-1 so if one disk has an unreadable sector that won't cause data loss.

Currently Hetzner Server Bidding has ridiculous offerings for systems with SSD storage. Search for a server with 16G of RAM and SSD storage and the minimum prices are only €2 cheaper than a new server with 64G of RAM and 2*512G NVMe. In the past Server Bidding has had servers with specs not much smaller than the newest systems going for rates well below the costs of the newer systems. The current Hetzner server is under a contract from Server Bidding which is significantly cheaper than the current Server Bidding offerings, so financially it wouldn't be a good plan to replace the server now.

Monitoring

I have just released a new version of etbe-mon [1] which has a new monitor for SMART data (smartctl). It also has a change to the sslcert check to search all IPv6 and IPv4 addresses for each hostname, makes the freespace check look for the filesystem mountpoint, and makes the smtpswaks check use the latest swaks command-line.
For the new smartctl check there is an option to treat the Marginal alert status from smartctl as an error, and there is an option to name attributes that will be treated as marginal even if smartctl thinks they are significant. So now I have my monitoring system checking the SMART data on those servers of mine which have real hard drives (not VMs) and aren't using RAID hardware that obscures such things. Also it's not alerting me about the Wear_Leveling_Count on that particular Hetzner server.
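For illustration of the kind of check involved (this is my own sketch, not etbe-mon's actual code), here is a minimal parser for the `smartctl -A` attribute table that flags attributes which have failed or crossed their threshold, with a list of attribute names to demote to warnings the way the new option does. The sample output and the attribute values in it are made up.

```python
# Illustrative sample of `smartctl -A` output (values are invented)
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
177 Wear_Leveling_Count     0x0013   001   001   005    Pre-fail  Always   FAILING_NOW 1298
241 Total_LBAs_Written      0x0032   099   099   000    Old_age   Always       -       12345
"""

# Attributes we choose to treat as marginal rather than as errors
TREAT_AS_MARGINAL = {"Wear_Leveling_Count"}

def classify(table: str):
    """Return (errors, warnings): attribute names that smartctl flagged
    as failed, or whose normalised value has crossed the threshold."""
    errors, warnings = [], []
    for line in table.splitlines()[1:]:
        fields = line.split()
        if len(fields) < 10 or not fields[0].isdigit():
            continue  # skip anything that isn't an attribute row
        name, value, thresh, when_failed = (
            fields[1], int(fields[3]), int(fields[5]), fields[8])
        if when_failed != "-" or value <= thresh:
            (warnings if name in TREAT_AS_MARGINAL else errors).append(name)
    return errors, warnings

errors, warnings = classify(SAMPLE)
print(errors, warnings)  # → [] ['Wear_Leveling_Count']
```

With Wear_Leveling_Count named as marginal it ends up in the warnings list instead of the errors list, which is the behaviour described above.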

19 December 2020

Russell Coker: Testing VDPAU in Debian

VDPAU is the Video Decode and Presentation API for Unix [1]. I noticed an error with mplayer, "Failed to open VDPAU backend libvdpau_i965.so: cannot open shared object file: No such file or directory". Googling that turned up Debian Bug #869815 [2] which suggested installing the packages vdpau-va-driver and libvdpau-va-gl1 and setting the environment variable VDPAU_DRIVER=va_gl to enable VDPAU. The command vdpauinfo from the vdpauinfo package shows the VDPAU capabilities, which showed that VDPAU was working with va_gl.

When mplayer was getting the error about a missing i915 driver it took 35.822s of user time and 1.929s of system time to play Self Control by Laura Branigan [3] (a good music video to watch several times while testing IMHO) on my Thinkpad X1 Carbon Gen1 with Intel video and an i7-3667U CPU. When I set VDPAU_DRIVER=va_gl mplayer took 50.875s of user time and 4.207s of system time but didn't have the error.

It's possible that other applications on my Thinkpad might benefit from VDPAU with the va_gl driver, but it seems unlikely that any will benefit to such a degree that it makes up for mplayer taking more time. It's also possible that the Self Control video I tested with was a worst case scenario, but even so taking almost 50% more CPU time makes it unlikely that other videos would get a benefit. For this sort of video (640*480 resolution) it's not a problem, 38 seconds of CPU time to play a 5 minute video isn't a real problem (although it would be nice to use less battery). For a 1600*900 resolution video (the resolution of the laptop screen) it took 131 seconds of user time to play a 433 second video. That's still not going to be a problem when playing on mains power but will suck a bit when on battery.

Most Thinkpads have Intel video and some have NVidia as well (which has issues from having 2 video cards and from having poor Linux driver support). So it seems that the only option for battery efficient video playing on the go right now is to use a tablet.
On the upside, screen resolution is not increasing at a rate comparable to Moore's Law, so eventually CPUs will get powerful enough to do all this without using much electricity.
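The figures above can be sanity-checked with a little arithmetic (the numbers are the measurements from this post):

```python
# CPU time overhead of the va_gl VDPAU driver vs plain software decoding
sw_user, va_user = 35.822, 50.875      # seconds of user time, 640*480 video
overhead = (va_user - sw_user) / sw_user
print(f"va_gl overhead: {overhead:.0%}")       # → va_gl overhead: 42%

# Fraction of one CPU core used to play the 1600*900 video
cpu_s, video_s = 131.0, 433.0
print(f"core utilisation: {cpu_s / video_s:.0%}")  # → core utilisation: 30%
```

So va_gl cost about 42% more user time on the test video (the "almost 50%" above), and the higher resolution video needed about 30% of one core with software decoding.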

18 December 2020

Russell Coker: Thinkpad Storage Problem

For a while I've had a problem with my Thinkpad X1 Carbon Gen 1 [1] where storage pauses for 30 seconds; it's become more common recently, and unfortunately everything seems to depend on storage (ideally a web browser playing a video from the Internet could keep doing so with no disk access, but that's not what happens).
echo 3 > /sys/block/sda/device/timeout
I've put the above in /etc/rc.local, which should make it take only 3 seconds to recover from storage problems instead of 30 seconds. The problem hasn't recurred since that change so I don't know if it works as desired. The Thinkpad is running BTRFS and no data is reported as being lost, so it's not a problem to use it at this time. The bottom of this post has an example of the errors I'm getting.

A friend said that his searches for such errors show people claiming that it's a SATA controller or cable problem, which for a SATA device bolted directly to the motherboard means it's likely to be a motherboard problem (i.e. more expensive to fix than the $289 I paid for the laptop almost 3 years ago). A Lenovo forum discussion says that the X1 Carbon Gen1 uses an unusual connector that's not M.2 SATA and not mSATA [2]. This means that a replacement is going to be expensive, $100US for a replacement from eBay when a M.2 SATA or NVMe device would cost about $50AU from my local store. It also means that I can't use a replacement for anything else. If my laptop took a regular NVMe I'd buy one and test it out; if it didn't solve the problem a spare NVMe device is always handy to have. But I'm not interested in spending $100US for a part that may turn out to be useless.

I bought the laptop expecting that there was nothing I could fix inside it. While it is theoretically possible to upgrade the CPU etc in a laptop, most people find that the effort and expense make it better to just replace the entire laptop. With the ultra light category of laptops the RAM is soldered on the motherboard and there are even fewer options for replacing things. So while it's annoying that Lenovo didn't use a standard interface for this device (they did so for other models in the range) it's not a situation that I hadn't expected.

Now the question is what to do next. The Thinkpad has a SD socket and micro SD cards are getting large capacities nowadays. If it won't boot from a SD card I could boot from USB and then run from SD. So even if the internal SATA device fails entirely it should still be useful. I wonder if I could find a Thinkpad with a similar problem going cheap, as I'm pretty sure that a Windows user couldn't make any storage device usable the way I can with Linux. I've looked at prices on auction sites but haven't seen a Thinkpad X1 Carbon going for anywhere near the price I got this one; the nearest I've seen is around $350 for a laptop with significant damage. While I am a little unhappy at Lenovo's choice of interface, I've definitely got a lot more than $289 of value out of this laptop. So I'm not really complaining.
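A persistent alternative to the rc.local approach would be a udev rule, so the timeout is applied whenever the device appears rather than only when rc.local runs. This is a sketch and the rule filename is arbitrary:

```
# /etc/udev/rules.d/60-sata-timeout.rules (filename is arbitrary)
# Set the SCSI command timeout to 3 seconds whenever sda is added;
# %k expands to the kernel device name ("sda")
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sda", \
  RUN+="/bin/sh -c 'echo 3 > /sys/block/%k/device/timeout'"
```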
[315041.837612] ata1.00: status: { DRDY }
[315041.837613] ata1.00: failed command: WRITE FPDMA QUEUED
[315041.837616] ata1.00: cmd 61/20:48:28:1e:3e/00:00:00:00:00/40 tag 9 ncq dma 16384 out
                         res 40/00:01:00:00:00/00:00:00:00:00/e0 Emask 0x4 (timeout)
[315041.837617] ata1.00: status: { DRDY }
[315041.837618] ata1.00: failed command: READ FPDMA QUEUED
[315041.837621] ata1.00: cmd 60/08:50:e0:26:84/00:00:00:00:00/40 tag 10 ncq dma 4096 in
                         res 40/00:01:00:00:00/00:00:00:00:00/e0 Emask 0x4 (timeout)
[315041.837622] ata1.00: status: { DRDY }
[315041.837625] ata1: hard resetting link
[315042.151781] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[315042.163368] ata1.00: ACPI cmd ef/02:00:00:00:00:a0 (SET FEATURES) succeeded
[315042.163370] ata1.00: ACPI cmd f5/00:00:00:00:00:a0 (SECURITY FREEZE LOCK) filtered out
[315042.163372] ata1.00: ACPI cmd ef/10:03:00:00:00:a0 (SET FEATURES) filtered out
[315042.183332] ata1.00: ACPI cmd ef/02:00:00:00:00:a0 (SET FEATURES) succeeded
[315042.183334] ata1.00: ACPI cmd f5/00:00:00:00:00:a0 (SECURITY FREEZE LOCK) filtered out
[315042.183336] ata1.00: ACPI cmd ef/10:03:00:00:00:a0 (SET FEATURES) filtered out
[315042.193332] ata1.00: configured for UDMA/133
[315042.193789] sd 0:0:0:0: [sda] tag#10 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[315042.193791] sd 0:0:0:0: [sda] tag#10 Sense Key : Illegal Request [current]
[315042.193793] sd 0:0:0:0: [sda] tag#10 Add. Sense: Unaligned write command
[315042.193795] sd 0:0:0:0: [sda] tag#10 CDB: Read(10) 28 00 00 84 26 e0 00 00 08 00
[315042.193797] print_req_error: I/O error, dev sda, sector 8660704
[315042.193810] ata1: EH complete

15 December 2020

Russell Coker: Phone Plan Comparison 2020

It's been over 6 years since my last post comparing 3G data plans in Australia. I'm currently using Aldi Mobile [1]. They have a $15 package which gives 3G of data and unlimited calls and SMS in Australia for 30 days, and a $25 package that gives 20G of data and unlimited international calls to 15 unspecified countries (presumably the US and much of the EU). It's on the Telstra network so it has good access all around Australia. I'm on the grandfathered $20 per 30 days plan which gives me more than 3G of data (I'm not sure how much, but I need more than 3G) while costing less than the current offer of $25.
Telco Cost Data Network Calling
Aldi $15/30 days 3G Telstra Unlimited National
Aldi $25/30 days 20G Telstra Unlimited International
Dodo [2] $5/month none Optus Unlimited National + Unlimited International SMS
Moose [3] $8.80/month 1G Optus 300min national calls + Unlimited National SMS
AmaySIM [4] $10/month 2G Optus Unlimited National
Kogan [5] $135/year 100G Vodafone Unlimited National
Kogan $180/year 180G Vodafone Unlimited National
Dodo $20/month 12G Optus Unlimited National + Unlimited International SMS + 100mins International talk
In the above table I put Aldi at the top because Telstra has the best network. If you want to use a mobile phone anywhere in Australia then Telstra is the best choice. If you only care about city areas then the other networks are usually good enough. The rest of the services are in order of price, cheapest first. For every service I only mentioned the offerings that are price competitive, and I didn't list the offerings that are larger than these. There are offerings of 300G/year or more, but most people don't need them. Every service in this table has more expensive offerings with more data. Please inform me in the comments of any services I missed.
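As an illustration, the plans in the table can be normalised to a price per gigabyte (numbers taken from the table; since the data allowance and the cost cover the same period, the period cancels out, and network quality and call inclusions are ignored):

```python
# (name, cost in AUD, days the cost covers, GB included in that period)
plans = [
    ("Aldi $15",    15.00,  30,   3),
    ("Aldi $25",    25.00,  30,  20),
    ("Moose",        8.80,  30,   1),
    ("AmaySIM",     10.00,  30,   2),
    ("Kogan 100G", 135.00, 365, 100),
    ("Kogan 180G", 180.00, 365, 180),
    ("Dodo $20",    20.00,  30,  12),
]

# Cost per GB is simply cost/GB because both cover the same period
for name, cost, days, gb in sorted(plans, key=lambda p: p[1] / p[3]):
    print(f"{name:11} ${cost / gb:.2f}/GB ({days} day period)")
```

By that measure the Kogan yearly plans come out cheapest per gigabyte, though as noted above the Telstra network coverage is a reason to pay more.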

12 December 2020

Russell Coker: Electromagnetic Space Launch

The G-Force Wikipedia page says that humans can survive 20G horizontally "eyes in" for up to 10 seconds and 10G for 1 minute. An acceleration of 14G for 10 seconds (well below the level that's unsafe) gives a speed of mach 4 and an acceleration distance of just under 7km. Launching a 100 metric ton spacecraft in that way would require 14MW at the end of the launch path plus some extra for the weight of the part that contains magnets, which would be retrieved by parachute. 14MW is a small fraction of the power used by a train or tram network, and brown-outs of the transit network are something that they deal with, so such a launch could be powered by diverting power from a transit network. The Rocky Mountains in the US peak at 4.4km above sea level, so a magnetic launch that starts 2.6km below sea level and extends the height of the Rocky Mountains would do. A speed of mach 4 vertically would give a height of 96km if we disregard drag; that's almost 1/4 of the orbital altitude of the ISS. This seems like a more practical way to launch humans into space than a space elevator. The Mass Driver page on Wikipedia documents some of the past research on launching satellites that way, with shorter launch hardware and significantly higher G forces.
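The kinematics above can be verified with the constant-acceleration formulas v = at, d = ½at², and (disregarding drag) coast height h = v²/2g:

```python
g = 9.8       # m/s^2, standard gravity
a = 14 * g    # 14G acceleration
t = 10.0      # seconds of acceleration

v = a * t                # final speed
d = 0.5 * a * t ** 2     # length of the launch track
h = v ** 2 / (2 * g)     # coasting height after launch, ignoring drag

mach = v / 343.0         # speed of sound at sea level, roughly
print(f"speed {v:.0f} m/s (mach {mach:.1f}), "
      f"track {d / 1000:.2f} km, height {h / 1000:.0f} km")
# → speed 1372 m/s (mach 4.0), track 6.86 km, height 96 km
```

This matches the figures in the post: mach 4, a track of just under 7km, and a drag-free coasting height of 96km.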

8 December 2020

Russell Coker: Links December 2020

Business Insider has an informative article about the way that Google users can get locked out with no apparent reason and no recourse [1]. Something to share with clients when they consider putting everything in "the cloud". Vice has an interesting article about people jailbreaking used Teslas after Tesla has stolen software licenses that were bought with the car [2]. The Atlantic has an interesting article titled This Article Won't Change Your Mind [3]. It's one of many on the topic of echo chambers, but it has some interesting points that others don't seem to cover, such as regarding the benefits of groups when not everyone agrees. Inequality.org has lots of useful information about global inequality [4]. Jeffrey Goldberg has an insightful interview with Barack Obama for the Atlantic about the future course of American politics and a retrospective on his term in office [5]. A Game Designer's Analysis Of QAnon is an insightful Medium article comparing QAnon to an augmented reality game [6]. This is one of the best analyses of QAnon operations that I've seen. Decrypting Rita is one of the most interesting web comics I've read [7]. It makes good use of side scrolling and different layers to tell multiple stories at once. PC Mag has an article about the new features in Chrome 87 to reduce CPU use [8]. On my laptop I have 1/3 of all CPU time being used when it is idle, the majority of which is from Chrome. As the CPU has 2 cores this means the equivalent of 1 core running about 66% of the time just for background tabs. I have over 100 tabs open, which I admit is a lot. But it means that the active tabs (as opposed to the plain HTML or PDF ones) are averaging more than 1% CPU time each on an i7, which seems obviously unreasonable. So Chrome 87 doesn't seem to live up to Google's claims. The movie Bad President, starring Stormy Daniels as herself, is out [9]. Poe's Law is passe. Interesting summary of Parler; it seems that it was designed by the Russians [10].
Wired has an interesting article about Indistinguishability Obfuscation, how to encrypt the operation of a program [11]. Joerg Jaspert wrote an interesting blog post about the difficulties of packaging Rust and Go for Debian [12]. I think that the problem is that many modern languages aren't designed well for library updates. This isn't just a problem for Debian; it's a problem for any long term support of software that doesn't involve transferring a complete archive of everything, and it's a problem for any disconnected development (remote sites and sites dealing with serious security). Having an automatic system for downloading libraries is fine. But there should be an easy way of getting the same source via an archive format (zip will do as any archive can be converted to any other easily enough) and with version numbers.

4 December 2020

Russell Coker: ZFS 2.0.0 Released

Version 2.0 of ZFS has been released. It's now known as OpenZFS and has a unified release for Linux and BSD, which is nice. One new feature is persistent L2ARC (which means that when you use SSD or NVMe to cache hard drives that cache will remain after a reboot), an obvious feature that was needed for a long time. The Zstd compression invented by Facebook is the best way of compressing things nowadays; it's generally faster while giving better compression than all other compression algorithms, and it's nice to have that in the filesystem. The PAM module for encrypted home directories is interesting. I haven't had a need for directory level encryption as block device encryption has worked well for my needs, but there are good use cases for directory encryption.

I just did a quick test of OpenZFS on a VM; the one thing it doesn't do is let me remove a block device accidentally added to a zpool. If you have a zpool with a RAID-Z array and mistakenly add a new disk as a separate device instead of replacing a part of the RAID-Z then you can't remove it. There are patches to allow such removal but they didn't appear to get in to the OpenZFS 2.0.0 release.

For my use ZFS on Linux 0.8.5 has been working fairly well, and apart from the inability to remove a mistakenly added device I haven't had any problems with it. So I have no immediate plans to upgrade existing servers, but if I was about to install a new ZFS server I'd definitely use the latest version. People keep asking about license issues. I'm pretty sure that Oracle lawyers have known about sites like zfsonlinux.org for years; the fact that they have decided not to do anything about it seems like clear evidence that it's OK for me to keep using that code.
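The add-vs-replace mistake looks roughly like this (an illustrative transcript, not from a live system; the pool and device names are made up):

```
# Correct: swap out a member of the RAID-Z vdev
zpool replace tank sdc sdg

# Mistake: this adds sdg as a new top-level single-disk vdev
# (-f overrides the mismatched replication level warning),
# and in OpenZFS 2.0.0 it cannot subsequently be removed from the pool
zpool add -f tank sdg
```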

Russell Coker: KDE Icons Disappearing in Debian/Unstable

One of my workstations is running Debian/Unstable with KDE and SDDM on an AMD Radeon R7 260X video card. Recently it stopped displaying things correctly after a reboot; all the icons failed to display, as did many of the Qt controls. When I ran a KDE application from the command line I got the error "QSGTextureAtlas: texture atlas allocation failed, code=501". Googling that error gave a blog post about a very similar issue in 2017 [1]. From that blog post I learned that I could stop the problem by setting MESA_EXTENSION_OVERRIDE="-GL_EXT_bgra -GL_EXT_texture_format_BGRA8888" in the environment. In a quick test I found that the environment variable setting worked, making the KDE apps display correctly and not report an error about a texture atlas. I created a file ~/.config/plasma-workspace/env/bgra.sh with the following contents:
export MESA_EXTENSION_OVERRIDE="-GL_EXT_bgra -GL_EXT_texture_format_BGRA8888"
Then after the next login things worked as desired! Now the issue is, where is the bug? GL, X, and the internals of KDE are things I don't track much. I welcome suggestions from readers of my blog as to what the culprit might be, and where to file a Debian bug or a URL to a Debian bug report if someone has already filed one.

Update: when I run the game warzone2100 with this setting it crashes with the below output. So this Mesa extension override isn't always a good thing; it just solves one corner case of a bug.
$ warzone2100 
/usr/bin/gdb: warning: Couldn't determine a path for the index cache directory.
27      ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.
No frame at level 0x7ffc3392ab50.
Saved dump file to '/home/etbe/.local/share/warzone2100-3.3.0//logs/warzone2100.gdmp-VuGo2s'
If you create a bugreport regarding this crash, please include this file.
Segmentation fault (core dumped)
Update 2: Carsten provided the REAL solution to this, run "apt remove libqt5quick5-gles" which will automatically install libqt5quick5, which makes things work. Another workstation I run that tracks Testing had libqt5quick5 installed, which was why it didn't have the problem. The system in question had most of KDE removed due to package dependency issues when tracking Unstable, and when I reinstalled it I guess the wrong one was installed.

8 November 2020

Russell Coker: Links November 2020

KDE has a long term problem of excessive CPU time used by the screen locker [1]. Part of it is due to software GL emulation, and part of it is due to the screen locker doing things like flashing the cursor when nothing else is happening. One of my systems has an NVidia card and enabling GL would cause it to crash, so now I have kscreenlocker using 30% of a CPU core even when the screen is powered down. Informative NYT article about the latest security features for iPhones [2]. Android needs new features like this! Russ Allbery wrote an interesting review of the book Hand to Mouth by Linda Tirado [3]; it's about poverty in the US and related things. Linda first became Internet famous for her essay "Why I Make Terrible Decisions, or Poverty Thoughts" which is very insightful and well written; this is the latest iteration of that essay [4]. This YouTube video by Ruby Payne gives great insights into class based attitudes towards time and money [5]. News Week has an interesting article about chicken sashimi; apparently you can safely eat raw chicken if it's prepared well [6]. Vanity Fair has an informative article about how QAnon and Trumpism have infected the Catholic Church [7]. Some of Mel Gibson's mental illness is affecting a significant portion of the Catholic Church in the US and some parts in the rest of the world. Noema has an interesting article on toxic Internet culture, Japan's 2chan, 4chan, 8chan/8kun, and the conspiracy theories they spawned [8]. Benjamin Corey is an ex-Fundie who wrote an amusing analysis of the Biblical statements about the anti-Christ [9]. NYMag has an interesting article The Final Gasp of Donald Trump's Presidency [10]. Mother Jones has an informative article about the fact that Jim Watkins (the main person behind QAnon) has a history of hosting child porn on sites he runs [11], but we all knew QAnon was never about protecting kids. Eand has an insightful article America's Problem is That White People Want It to Be a Failed State [12].
