Search Results: "sr1"

7 May 2024

Melissa Wen: Get Ready for the 2024 Linux Display Next Hackfest in A Coruña!

We're excited to announce the details of our upcoming 2024 Linux Display Next Hackfest in the beautiful city of A Coruña, Spain! This year's hackfest will be hosted by Igalia and will take place from May 14th to 16th. It will be a gathering of minds from a diverse range of companies and open source projects, all coming together to share, learn, and collaborate outside the traditional conference format.

Who's Joining the Fun? We're excited to welcome participants from various backgrounds, including:
  • GPU hardware vendors;
  • Linux distributions;
  • Linux desktop environments and compositors;
  • Color experts, researchers and enthusiasts;
This diverse mix of backgrounds is represented by developers from several companies working on the Linux display stack: AMD, Arm, BlueSystems, Bootlin, Collabora, Google, GravityXR, Igalia, Intel, LittleCMS, Qualcomm, Raspberry Pi, RedHat, SUSE, and System76. It will ensure a dynamic exchange of perspectives and foster collaboration across the Linux display community. Please take a look at the list of participants for more info.

What's on the Agenda? The beauty of the hackfest is that the agenda is driven by participants! As this is a hybrid event, we decided to improve the experience for remote participants by creating a dedicated space for them to propose topics and some introductory talks in advance. From those inputs, we defined a schedule that reflects the collective interests of the group, but is still open for amendments and new proposals. Find the schedule details on the official event webpage. Expect discussions on:

  • Proposal with new DRM object type:
    • Brief presentation of GPU-vendor features;
    • Status update of plane color management pipeline per vendor on Linux;
  • HDR/Color Use-cases:
    • HDR gainmap images and how should we think about HDR;
    • Google/ChromeOS GFX view about HDR/per-plane color management, VKMS and lessons learned;
  • Post-blending Color Pipeline.
  • Color/HDR testing/CI
    • VKMS status update;
    • Chamelium boards, video capture.
  • Wayland protocols
    • color-management protocol status update;
    • color-representation and video playback.
  • Display control:
    • HDR signalling status update;
    • Backlight status update;
    • EDID and DDC/CI.
  • Strategy for video and gaming use-cases:
    • Multi-plane support in compositors:
      • Underlay, overlay, or mixed strategy for video and gaming use-cases;
      • KMS Plane UAPI to simplify the plane arrangement problem;
      • Shared plane arrangement algorithm desired.
    • HDR video and hardware overlay.
  • Frame timing and VRR:
    • Frame timing:
      • Limitations of uAPI;
      • Current user space solutions;
      • Brainstorm better uAPI.
    • Cursor/overlay plane updates with VRR;
    • KMS commit and buffer-readiness deadlines.
  • Power Saving vs Color/Latency:
    • ABM (adaptive backlight management);
    • PSR1 latencies;
    • Power optimization vs color accuracy/latency requirements.
  • Content-Adaptive Scaling & Sharpening:
    • Content-Adaptive Scalers on display hardware;
    • New drm_colorop for content adaptive scaling;
    • Proprietary algorithms.
  • Display Mux:
    • Laptop muxes for switching the embedded panel between the integrated GPU and the discrete GPU;
    • Seamless/atomic hand-off between drivers on Linux desktops.
  • Real-time scheduling & async KMS API:
    • Potential benefits: lower-latency input feedback, better VRR handling, buffer synchronization, etc.;
    • Issues around async uAPI usage and async-call handling.

In-person, but also geographically-distributed event This year's Linux Display Next Hackfest is a hybrid event, hosted onsite at the Igalia offices and available for remote attendance. In-person participants will find an environment for networking and brainstorming in our inspiring and collaborative office space. Additionally, A Coruña itself is a gem waiting to be explored, with stunning beaches, good food, and historical sites.

Semi-structured structure: how the 2024 Linux Display Next Hackfest will work
  • Agenda: Participants proposed the topics and talks to be discussed in sessions.
  • Interactive Sessions: Discussions, workshops, introductory talks and brainstorming sessions lasting around 1h30. There is always a starting point for discussions and new ideas will emerge in real time.
  • Immersive experience: We will have coffee-breaks between sessions and lunch time at the office for all in-person participants. Lunches and coffee-breaks are sponsored by Igalia. This will keep us sharing knowledge and in continuous interaction.
  • Spaces for all group sizes: In-person participants will find different room sizes that match various group sizes at Igalia HQ. Besides that, there will be some devices for showcasing and real-time demonstrations.

Social Activities: building connections beyond the sessions To make the most of your time in A Coruña, we'll be organizing some social activities:
  • First-day Dinner: In-person participants will enjoy a Galician dinner on Tuesday, after a first day of intensive discussions in the hackfest.
  • Getting to know a little of A Coruña: finding out a little about the city and current local habits.
Participants of a guided tour in one of the sectors of the Museum of Estrella Galicia (MEGA).
  • On Thursday afternoon, we will close the 2024 Linux Display Next Hackfest with a guided tour of the museum of Galicia's favorite beer brand, Estrella Galicia. The guided tour covers the eight sectors of the museum and ends with beer pouring and tasting. After this experience, a transfer bus will take us to the Maria Pita square.
  • At Maria Pita square we will see the charm of some historical landmarks of A Coru a, explore the casual and vibrant style of the city center and taste local foods while chatting with friends.

Sponsorship Igalia sponsors lunches and coffee-breaks on hackfest days, Tuesday's dinner, and the social event on Thursday afternoon for in-person participants. We can't wait to welcome hackfest attendees to A Coruña! Stay tuned for further details and outcomes of this unconventional and unique experience.

26 July 2023

Enrico Zini: Mysterious DNS issues

Uhm, salsa is not resolving:
$ git fetch
ssh: Could not resolve hostname Name or service not known
fatal: Could not read from remote repository.
$ ping
ping: Name or service not known
But... it is?
$ host has address has IPv6 address 2607:f8f0:614:1::1274:44 mail is handled by 10 mail is handled by 10 mail is handled by 10
It really is resolving correctly at each step:
$ cat /etc/resolv.conf
# This is /run/systemd/resolve/stub-resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
# [...]
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
# [...]
options edns0 trust-ad
$ host
Using domain server:
Aliases: has address has IPv6 address 2607:f8f0:614:1::1274:44 mail is handled by 10 mail is handled by 10 mail is handled by 10
# resolvectl status
       Protocols: +LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub
Link 3 (wlp108s0)
    Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
         Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server:
       DNS Servers: fd00::3e37:12ff:fe99:2301 2a01:b600:6fed:1:3e37:12ff:fe99:2301
        DNS Domain:
Link 4 (virbr0)
Current Scopes: none
     Protocols: -DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Link 9 (enxace2d39ce693)
    Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
         Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server:
       DNS Servers: fd00::3e37:12ff:fe99:2301 2a01:b600:6fed:1:3e37:12ff:fe99:2301
        DNS Domain:
$ host
Using domain server:
Aliases: has address has IPv6 address 2607:f8f0:614:1::1274:44 mail is handled by 10 mail is handled by 10 mail is handled by 10
$ host fd00::3e37:12ff:fe99:2301 2a01:b600:6fed:1:3e37:12ff:fe99:2301
Using domain server:
Name: fd00::3e37:12ff:fe99:2301
Address: fd00::3e37:12ff:fe99:2301#53
Aliases: has address has IPv6 address 2607:f8f0:614:1::1274:44 mail is handled by 10 mail is handled by 10 mail is handled by 10
Could it be caching?
# systemctl restart systemd-resolved
$ dpkg -s nscd
dpkg-query: package 'nscd' is not installed and no information is available
$ git fetch
ssh: Could not resolve hostname Name or service not known
fatal: Could not read from remote repository.
Could it be something in ssh's config?
$ grep salsa ~/.ssh/config
$ ssh
ssh: Could not resolve hostname Name or service not known
Something weird with ssh's control sockets?
$ strace -fo /tmp/zz ssh
ssh: Could not resolve hostname Name or service not known
enrico@ploma:~/lavori/legal/legal$ grep salsa /tmp/zz
393990 execve("/usr/bin/ssh", ["ssh", ""], 0x7ffffcfe42d8 /* 54 vars */) = 0
393990 connect(3, {sa_family=AF_UNIX, sun_path="/home/enrico/.ssh/sock/"}, 110) = -1 ENOENT (No such file or directory)
$ strace -fo /tmp/zz1 ssh -S none
ssh: Could not resolve hostname Name or service not known
$ grep salsa /tmp/zz1
394069 execve("/usr/bin/ssh", ["ssh", "-S", "none", ""], 0x7ffd36cbfde8 /* 54 vars */) = 0
How is ssh trying to resolve the hostname?
393990 connect(3, {sa_family=AF_UNIX, sun_path="/run/systemd/resolve/io.systemd.Resolve"}, 42) = 0
393990 sendto(3, "{\"method\":\"io.systemd.Resolve.Re"..., 99, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 99
393990 mmap(NULL, 135168, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4fc71ca000
393990 recvfrom(3, 0x7f4fc71ca010, 135152, MSG_DONTWAIT, NULL, NULL) = -1 EAGAIN (Resource temporarily unavailable)
393990 ppoll([{fd=3, events=POLLIN}], 1, {tv_sec=119, tv_nsec=999917000}, NULL, 8) = 1 ([{fd=3, revents=POLLIN}], left {tv_sec=119, tv_nsec=998915689})
393990 recvfrom(3, "{\"error\":\"io.systemd.System\",\"pa"..., 135152, MSG_DONTWAIT, NULL, NULL) = 56
393990 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
393990 close(3)                         = 0
393990 munmap(0x7f4fc71ca000, 135168)   = 0
393990 getpid()                         = 393990
393990 write(2, "ssh: Could not resolve hostname "..., 77) = 77
Something weird with resolved?
$ resolvectl query
resolve call failed: Lookup failed due to system error: Invalid argument
Let's try disrupting what ssh is trying and failing:
# mv /run/systemd/resolve/io.systemd.Resolve /run/systemd/resolve/io.systemd.Resolve.backup
$ strace -o /tmp/zz2 ssh -S none -vv
OpenSSH_9.2p1 Debian-2, OpenSSL 3.0.9 30 May 2023
debug1: Reading configuration data /home/enrico/.ssh/config
debug1: /home/enrico/.ssh/config line 1: Applying options for *
debug1: /home/enrico/.ssh/config line 228: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolving "" port 22
ssh: Could not resolve hostname Name or service not known
$ tail /tmp/zz2
394748 prctl(PR_CAPBSET_READ, 0x29 /* CAP_??? */) = -1 EINVAL (Invalid argument)
394748 munmap(0x7f27af5ef000, 164622)   = 0
394748 futex(0x7f27ae5feaec, FUTEX_WAKE_PRIVATE, 2147483647) = 0
394748 openat(AT_FDCWD, "/run/systemd/machines/", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
394748 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
394748 getpid()                         = 394748
394748 write(2, "ssh: Could not resolve hostname "..., 77) = 77
394748 exit_group(255)                  = ?
394748 +++ exited with 255 +++
$ machinectl list
No machines.
# resolvectl flush-caches
$ resolvectl query
resolve call failed: Lookup failed due to system error: Invalid argument
# resolvectl reset-statistics
$ resolvectl query
resolve call failed: Lookup failed due to system error: Invalid argument
# resolvectl reset-server-features
$ resolvectl query
resolve call failed: Lookup failed due to system error: Invalid argument
# resolvectl monitor
  Q: IN A
  A: IN NS
  A: IN NS
  A: IN NS
  A: IN A
  A: IN NS
I guess I won't be using salsa today, and I wish I understood why. Update: as soon as I pushed this post to my blog (via ssh) salsa started resolving again.
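The asymmetry above (host succeeding while ssh fails) comes down to resolution paths: host queries DNS directly, while ssh resolves names through NSS via getaddrinfo(), which on this machine is routed to systemd-resolved over the io.systemd.Resolve varlink socket seen in the strace. A minimal sketch of the NSS-side lookup ssh performs, using localhost as a stand-in since the actual hostname is not shown above:

```c
/* Sketch: resolve a name the way ssh does, via NSS (getaddrinfo).
   "localhost" is a stand-in hostname, not the real one from the post. */
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;      /* IPv4 or IPv6, like ssh */
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo("localhost", "22", &hints, &res);
    if (err != 0) {
        /* EAI_NONAME is what ssh reports as
           "Name or service not known". */
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }
    puts("resolved via NSS");
    freeaddrinfo(res);
    return 0;
}
```

If the resolved stub misbehaves, this call fails even while direct DNS queries (host, dig) keep working, which matches the symptoms in the transcript.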

16 June 2023

Russell Coker: BOINC and Idle Users

The BOINC distributed computing client in Debian (Bookworm and previous releases) can check the idle time via the X11 protocol and run GPU jobs when the interactive user is idle, so the user gets GPU power for graphics when they need it and BOINC uses it when it's idle. This doesn't work for Wayland, and unfortunately no-one has written a Wayland equivalent of xprintidle (which shows the number of milliseconds the X11 session has been idle). In the Debian bug system there is bug #813728 about a message every second due to failed attempts to find X11 idle time [1]. On my main workstation with Wayland it logs "Authorization required, but no authorization protocol specified". There is also bug #775125 about BOINC not detecting mouse movements [2]; I added to it about the issues with Wayland. There's the package swayidle in Debian that is designed to manage the screensaver process on Wayland; below is an example of how to use it to display output after 5 seconds and 10 seconds of idle.
swayidle -w timeout 5 'echo 5' timeout 10 'echo 10' resume 'echo resume' before-sleep 'echo before-sleep'
The code for swayidle has only 7 comments and isn't easy to read. I looked into writing a Wayland equivalent of xprintidle but it would take more work than I'm prepared to invest in it. So it seems to me that the best option might be to have BOINC receive SIGUSR1 and SIGUSR2 for the start and stop of idle time, and then have scripts call xprintidle, swayidle, a wrapper for w (for systems without graphics), or other methods. To run swayidle as root you can set WAYLAND_DISPLAY=../$USER_ID/wayland-0.

6 June 2023

Shirish Agarwal: Odisha Train Crash and Coverup, Demonetization 2.0 & NHFS-6 Survey

Just a few days back we came to know about the horrific train crash that happened in Odisha (Orissa). There are some things that are known and some things that can be inferred by observance. Sadly, it seems the incident is going to be covered up. Some of the facts that have not been contested in the public domain are that there were three lines: one loop line on which the goods train was standing, and an up and a down line. Apparently, the signalling system and the inter-locking system had issues, as highlighted by an official about a month back. That letter, thankfully, is in the public domain and I have downloaded it as well. It's a letter that runs to four pages. The RW is incensed that the letter got leaked and is in the public domain. They are blaming everyone and espousing conspiracy theories rather than taking the minister to task. Incidentally, the Minister currently holds three ministries: the Ministry of Communication, the Ministry of Electronics and Information Technology (MEIT), and the Railways Ministry. Each ministry in itself is important and has revenues of more than 6 lakh crore rupees. How he is able to do justice to all three ministries is beyond me. The other thing is that funds both for safety and for relaying of tracks have been either not sanctioned or left unutilized. In fact, the CAG and the Railway brass had shared how derailments have increased and vacancies remain unfulfilled, but they were given no importance. In fact, not talking about safety in the recently held Chintan Shivir (brainstorming session) tells you how serious the Govt. is about safety. In fact, most of the programme was on high-speed rail, which is a white elephant. I have shared a whitepaper done by the RW in the U.S. that tells how high-speed rail doesn't make economic sense. And that is an economy that is 20+ times the Indian economy. Even the Chinese are stopping with HSR as it doesn't make economic sense. Incidentally, air fares again went up 200% yesterday.
Somebody shared paying in the region of 20k+ for an air ticket from their place to Bangalore. Coming back to the story itself: the goods train was on the loop line. Some say it was a little bit on the outer, some say otherwise, but it is established that it was on the loop line. This is standard behavior on and around railway stations around the world. Whether it was on the inner or outer doesn't make much of a difference to what happened next. The first train that collided with the goods train was the 12864 (SMVB-HWH) Yashwantpur-Howrah Express, which got derailed onto the next track, where from the opposite direction the 12841 (Shalimar-Bangalore) Coromandel Express was coming. Now they have said that around 300 people have died, and that seems to be part of the cover-up. Both the trains are long trains, having 23-odd coaches each. Even if you have reserved tickets you have 80-odd people in a coach, and usually in most of these trains it is at least double that. A lot of money goes to the TC and then above (corruption). The railway fares have gone up enormously, but that's a question for perhaps another time. So at the very least, we could be looking at more than 1,000 people having died. The numbers are being under-reported so that nobody has to take responsibility. The Railways itself has said that it is unable to identify 80% of the people who have died. This means that 80%, or a majority of them, were unreserved ticket holders. There have been disturbing images of how bodies have been flung onto tractors and whatnot to be either buried or cremated without a thought. We are in peak summer season, so bodies will start to rot within 24-48 hours. No arrangements were made to cool the bodies and take down information and identifying marks or whatever. The whole thing was done in a very callous manner, not giving dignity even to those who died through no fault of their own. The dissent note also suggests that a cover-up is in the picture.
Apparently, India doesn't have, nor does it feel the need for, something like the NTSB that the U.S. used when it hauled up both the plane manufacturer (Boeing) and the FAA when the 737 MAX went down due to improper data collection and sharing of data with pilots. And with no accountability being fixed to the Minister or any of the senior staff, a junior staff person may be fired; perhaps the same official who actually told them about the signal failures almost three months back. There were and are also some reports that some jugaadu/temporary fixes were applied to the signalling and inter-locking just before this incident happened. I do not know nor can I confirm one way or the other whether the above happened. I can however point out that if such a thing happened, then usually a traffic block is announced and all traffic on those lines is stopped. This has been the practice I have known for decades; traveling between Mumbai and Pune multiple times over the years, I am aware of traffic blocks. If some repair work was going on and it wasn't possible to complete the work within the time-frame, then that may well have contributed to the accident. There is also a bit of muddying of the waters, where it is being said that one of the trains was 4 hours late; which one depends on conflicting stories. On top of the whole thing, they have put the case to be investigated by the CBI, hinting at sabotage. They also tried to paint a religious structure as a mosque; it later turned out to be a temple. The RW says it was done by Muslims as it was Friday, not taking into account, as shared before, that most railway maintenance works are usually done between Friday and Monday. This is a practice followed not just in India but the world over. There has also been a move over the past decade to remove wooden sleepers and use concrete sleepers. Unlike the wooden ones they do not expand and contract as much, and their life is much longer than the wooden ones. Funds had been earmarked (although lower than in the last few years) but not yet spent.
As we know, in the case of any accident, it happens when all the holes in the cheese line up. Fukushima is a great example of that: no sea wall even though Japan is no stranger to tsunamis, external power at the same level as the plant (10 meters above sea level), and no training for cascading-failure scenarios, which is what happened. The Days mini-series shares some but not all of the faults that happened at Fukushima and the Govt. response to it. There is a difference though: the Japanese Prime Minister resigned on moral grounds. Here, neither the PM nor the Minister would be resigning on moral grounds or otherwise :(. Zero accountability, and that was partly a natural disaster; here it's man-made. In fact, both the Minister and the Prime Minister arrived with their entourages and did a PR blitzkrieg showing how concerned they are. Within 50 hours, the lines were cleared. The part-time Railway Minister shared that he knows the root cause and then a few hours later gave the case to the CBI. All are saying, wait for the inquiry report. To date, none of the accidents even under this Govt. has produced an investigation report. And even if it did, I am sure it will whitewash, as it did in the case of Adani, as I had shared before in the previous blog post. Incidentally, it is reported that Adani paid off some of its debt, but when questioned as to where they got the money, complete silence on that part :(. As can be seen, cover-up after cover-up. FWIW, the Coromandel Express is known as the migrant train and so has a huge number of passengers; the other one it collided with is known as the sick train, as a huge number of cancer patients use it to travel to Chennai and come back.

Demonetization 2.0 A few days back, India announced demonetization 2.0. Surprised? Don't be. Apparently, the INR 2k/- note is being used for corruption and Mr. Modi is unhappy about it. He actually didn't like the INR 2k/- note but was told that it was needed; who told him, we are unaware to date. At that time the RBI Governor was Mr. Urjit Patel, who didn't speak about the INR 2k/- note; he had said that a redesigned INR 1k/- note would come on the market. That has yet to happen. What has happened is that, just as with the INR 500/- and INR 1k/- notes, the RBI will no longer honor the INR 2k/- note. Obviously, this has made our neighbors angry, namely Nepal, Sri Lanka, Bhutan etc., who do some trading with us. Two Deccan Herald columns shed light on it. Apparently, India wants to be the world's currency reserve but doesn't want to play by the rules it expects of everyone else. It was pointed out that both the U.S. and Singapore have retired currencies but will honor that promise even today. The Singapore example, being a bit closer (as it's in Asia), is perhaps a bit more relevant than the U.S. one. Singapore retired the SGD $10,000 as of 2014, but even in 2022 it remains legal tender. They also retired the SGD $1,000 in 2020, but it still remains legal tender.

So let's have a fictitious example to illustrate what is meant by what Singapore has done. Let's say I go to Singapore, rent a flat, and find a $1,000 note in that house somewhere. Both practically and theoretically, I could go down to any of the banks, get the amount transferred to my wallet, bank account etc., and nobody will question it. Because they have promised the same. Interestingly, the Singapore Dollar has been pretty resilient against the USD for quite a number of years vis-a-vis other Asian currencies. Most of the INR 2k/- notes were also found and exchanged in Gujarat in just a few days (the PM and HM's state). I am sure you can see the mental gymnastics that the RW indulge in :(. What is sadder is that most of the people who try to defend it can't make sense of it one way or the other and start to name-call and get personal, as they have nothing else.

Disability questions dropped in NHFS-6 I just came to know today that in the upcoming National Family Health Survey-6, disability questions are being dropped. Why is this important? To put it simply, if you don't have numbers, you won't and can't make policies for them. India is one of the worst countries to live in if you are disabled. The easiest example to draw attention to is that most railway platforms are not level with the trains. Just as Mick Lynch shares in the UK, the same is pretty much true for India too. Meanwhile in Europe, they do make an effort to be level so even disabled people have some dignity. If your public transport is sorted, then people will want much more, and you will be obligated to provide for them as they are citizens. Here, we have had many reports of women being sexually molested when being transferred from platform to coach, irrespective of their age or whatnot. The main takeaway is that if you do not have their voice, you won't make policies for them. They won't go away, but you will make life hell for them. One thing to keep in mind is that most people assume the disabled are disabled from birth. This may or may not be true. For e.g., in the above railway accident involving three trains, there are bound to be newly disabled people who were healthy before the accident. The most common accident is a road accident, some involving pedestrians and vehicles or both; the easiest reference is Ministry of Road Transport data that says 4,00,000 people sustained injuries in 2021 alone in road mishaps. And this is in a country where even accidents are highly under-reported, for more than one reason. The biggest reason, especially with 2- and 4-wheelers, is the increased premium they would have to pay after an accident, so they usually compromise with the other party and pay off the Traffic Inspector. Sadly, I haven't read a new book, although there are a few books I'm looking forward to.
People living in India and neighbors please be careful as more heat waves are expected. Till later.

29 April 2020

Craig Small: Sending data in a signal

The well-known kill system call has been around for decades and is used to send a signal to another process. The most common use is to terminate or kill another process by sending the KILL or TERM signal, but it can be used for a form of IPC, usually around giving the other process a kick to do something. One thing that isn't as well known is that besides sending a signal to a process, you can send some data to it. This can either be an integer or a pointer, and it uses similar semantics to the familiar kill and signal handler. I came across this when there was a merge request for procps. The main changes are using sigqueue instead of kill in the sender, and using a signal action rather than a signal handler in the receiver. To illustrate this feature, I have a small set of programs called sender and receiver that will pass an integer between them. The Sender The sender program is extremely simple: take a random(ish) value from time masked to one byte, put it in the required union, and send the lot with sigqueue.
#include <signal.h>
#include <stdlib.h>
#include <stdio.h>
#include <time.h>

int main(int argc, char **argv)
{
    union sigval sigval;
    pid_t pid;

    if (argc < 2 || (pid = atoi(argv[1])) < 0)
        return EXIT_FAILURE;
    sigval.sival_int = time(NULL) & 0xff;
    printf("sender: sending %d to PID %d\n",
        sigval.sival_int, pid);
    sigqueue(pid, SIGUSR1, sigval);
    return EXIT_SUCCESS;
The key lines are where the random(ish) integer is stored in the sigval union and then sent to the other process with sigqueue. The Receiver The receiver just sets up the signal handler, prints its PID (so I know what to tell the sender) and sits in a sleeping loop.
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <signal.h>

void signal_handler(int signum, siginfo_t *siginfo, void *ucontext)
{
    if (signum != SIGUSR1) return;
    if (siginfo->si_code != SI_QUEUE) return;
    printf("receiver: Got value %d\n",
        siginfo->si_int);
}

int main(int argc, char **argv)
{
    pid_t pid = getpid();
    struct sigaction signal_action;

    printf("receiver: PID is %d\n", pid);
    signal_action.sa_sigaction = signal_handler;
    sigemptyset(&signal_action.sa_mask);
    signal_action.sa_flags = SA_SIGINFO;
    sigaction(SIGUSR1, &signal_action, NULL);
    while(1) sleep(100);
    return EXIT_SUCCESS;
The setup in main registers the signal handler. The main difference here is SA_SIGINFO used for the signal flags, and sigaction references a sa_sigaction function rather than sa_handler. We need to use a different function because a plain signal handler is only passed the signal number, but we need more information, including the integer that the sender process stored in sigval. The signal handler function itself first checks that the receiver process got the correct signal (SIGUSR1 in this case) and that we got this signal from sigqueue, because the si_code is SI_QUEUE. Checking the type of signal is important because different signals give you different data. For example, if you signalled this process with kill then si_int is undefined. The result As a proof of concept, the results are not terribly exciting. We see the sender say what it will be sending and the receiver saying it got it. It was useful to get some results, especially when things went wrong.
$ ./receiver &
[1] 370216
receiver: PID is 370216
$ ./sender 370216
sender: sending 133 to PID 370216
receiver: Got value 133
Gotchas While testing the two processes there were two gotchas I encountered. GDB and the siginfo structure The sigaction manual page shows a simple siginfo_t structure; however, when looking at what is passed to the signal handler, it's much more complicated.
(gdb) p *siginfo
$2 = {si_signo = 10, si_errno = 0, si_code = -1, __pad0 = 0, _sifields = {_pad = {371539, 1000, 11, 32766, 0 <repeats 24 times>}, _kill = {si_pid = 371539, si_uid = 1000}, _timer = {si_tid = 371539,
      si_overrun = 1000, si_sigval = {sival_int = 11, sival_ptr = 0x7ffe0000000b}}, _rt = {si_pid = 371539, si_uid = 1000, si_sigval = {sival_int = 11, sival_ptr = 0x7ffe0000000b}}, _sigchld = {
      si_pid = 371539, si_uid = 1000, si_status = 11, si_utime = 0, si_stime = 0}, _sigfault = {si_addr = 0x3e80005ab53, si_addr_lsb = 11, _bounds = {_addr_bnd = {_lower = 0x0, _upper = 0x0}, _pkey = 0}},
    _sigpoll = {si_band = 4294967667539, si_fd = 11}, _sigsys = {_call_addr = 0x3e80005ab53, _syscall = 11, _arch = 32766}}}
(gdb) p siginfo->_sifields._rt.si_sigval.sival_int
$3 = 11
So the integer is stored in a union in a structure in a structure. Much harder to find than just simply sival_int. The pointer is just a pointer So perhaps sending an integer is not enough. The sigval is a union with an integer and a pointer. Could a string be sent instead? I changed the sender so it used a string instead of an integer.
    sigval.sival_ptr = "Hello, World!";
The receiver needed a minor adjustment to print out the string. I tried this and the receiver segmentation faulted. What was going on? The issue is that these system calls simply pass the values along. So if the sender sends a pointer to a string located at 0x1234567, then the receiver will have a pointer to the same location. When the receiver tries to dereference sival_ptr, it is pointing to memory that is not owned by it but by another process (the sender), so it segmentation faults. The solution would be to use shared memory between the processes. The signal queue would then use the pointer to the shared memory and, in theory, all would be well.

9 October 2016

Norbert Preining: Reload: Android 7.0 Nougat, Root, Pokemon Go

Ok, it turned out that a combination of updates has broken my previous guide on playing Pokemon GO on a rooted Android device. What has happened is that the October security update of Android Nougat has changed the SafetyNet check that is used for detecting rooted devices, and at the same time the Magisk rooting system has catapulted itself (hopefully temporarily) into complete irrelevance by removing the working version and providing an improved version that neither has SuperSU installed nor the ability to hide root. Well done, congratulations. But there is a way around it, and I am now back at the latest security patch level, rooted, and playing Pokemon GO (not very often, no time, though!). My previous guide used Magisk version 6 to root and hide root. But the recent security update of Android Nougat (October 2016) has rendered Magisk-v6 non-working. I first thought that Magisk-v7 could solve the problem, but I was badly disappointed: after reflashing my device to a pristine state and installing Magisk-v7, I was suddenly left with no SuperSU (that means X-plore, Titanium Backup etc. do not work anymore), nor the ability to hide root for Pokemon GO or banking apps. Great update. Thus, I have decided to remove Magisk completely and make a clean start with SuperSU and suhide (and a GUI for suhide). And it turned out to be more convenient and more standard than Magisk, may it rest in peace (until they fix their stuff). In the following I assume you have a clean-slate Android Nougat device; if not, please see one of the previous guides for hints on how to flash back without losing your user data. Ingredients One needs the following few items: Rooting Unzip the and either use the included programs (,, root-windows.bat) to root your device, or simply connect your device to your computer, and run (assuming you have adb and fastboot installed):
adb reboot bootloader
sleep 10
fastboot boot image/CF-Auto-Root-angler-angler-nexus6p.img
After that your device will reboot a few times, and you will finally land in your normal Android screen, where a new program, SuperSU, will be available. At this stage you will not be able to play Pokemon GO anymore. Updating SuperSU The version of SuperSU packaged with CF-Auto-Root is unfortunately too old, so one needs to update using the zip file (or later). Here are two options: either you use the TWRP recovery system, or you install FlashFire (main page, app store page), which allows you to flash zip/ota files directly from your Android screen. This time I used the FlashFire method for the first time, and it worked without any problem. Just press the + button in FlashFire, then the Flash zip/ota button, select the, click yes two times, and then wait a bit; a few black screens (don't do anything!) later you will be back in your Nougat environment. Opening the SuperSU app should show you on the Settings tab that the version has been updated to 2.78-SR1. Installing suhide As with the update of SuperSU, install the suhide zip file, same procedure, nothing special. After this you will be able to add an application (like Pokemon GO) from the command line (shell), but this is not very convenient. Better is to install the suhide GUI from the app store, start it, scroll to Pokemon GO, add a tick, and you are settled. After that you are free to play Pokemon GO again. At least until the next security update brings problems again. In the long run this is a losing game, anyway. Enjoy it while you can.

6 May 2016

Matthias Klumpp: Adventures in D programming

I recently wrote a bigger project in the D programming language, the appstream-generator (asgen). Since I rarely leave the C/C++/Python realm, and came to like many aspects of D, I thought blogging about my experience could be useful for people considering using D. Disclaimer: I am not an expert on programming language design, and this is not universally valid criticism of D, just my personal opinion from building one project with it. Why choose D in the first place? The previous AppStream generator was written in Python, which wasn't ideal for the task for multiple reasons, most notably multiprocessing and LMDB not working well together (and in general, multiprocessing being terrible to work with) and the need to reimplement some already existing C code in Python again. So, I wanted a compiled language which would work well together with the existing C code in libappstream. Using C was an option, but my least favourite one (writing this in C would have been much more cumbersome). I looked at Go and Rust and wrote some small programs performing basic operations that I needed for asgen, to get a feeling for each language. Interfacing C code with Go was relatively hard: since libappstream is a GObject-based C library, I expected to be able to auto-generate Go bindings from the GIR, but there were only a few outdated projects available which did that. Rust, on the other hand, required the most time to learn, and since I only briefly looked into it, I still can't write Rust code without having the coding reference open. I started to implement the same examples in D just for fun, as I didn't plan to use D (I was aiming at Go back then), but the language looked interesting. The D language had the huge advantage of being very familiar to me as a C/C++ programmer, while also having a rich standard library, which included great stuff like std.concurrency.Generator, std.parallelism, etc.
Translating Python code into D was incredibly easy; additionally, an actively maintained gir-d-generator exists (I created a small fork anyway, to be able to directly link against the libappstream library instead of dynamically loading it). What is great about D? This list is just a huge braindump of things I had on my mind at the time of writing. Interfacing with C There are multiple things which make D awesome; for example, interfacing with C code (and, to a limited degree, with C++ code) is really easy. Also, working with functions from C in D feels natural. Take these C functions imported into D:
struct _mystruct {}
alias mystruct_p = _mystruct*;
mystruct_p mystruct_create ();
mystruct_load_file (mystruct_p my, const(char) *filename);
mystruct_free (mystruct_p my);
You can call them from D code in two ways:
auto test = mystruct_create ();
// treating "test" as function parameter
mystruct_load_file (test, "/tmp/example");
// treating the function as member of "test"
test.mystruct_load_file ("/tmp/example");
test.mystruct_free ();
This allows writing logically sane code, in case the C functions can really be considered member functions of the struct they are acting on. This property of the language is a general concept, so a function which takes a string as its first parameter can also be called like a member function of string. Writing D bindings to existing C code is also really simple, and can even be automated using tools like dstep. Since D can also easily export C functions, calling D code from C is also possible. Getting rid of C++ cruft There are many things which are bad in C++, some of which are inherited from C. D kills pretty much all of the stuff I found annoying. Some cool stuff from D is now in C++ as well, which makes this point a bit less strong, but it's still valid. E.g. getting rid of the #include preprocessor dance by using symbolic import statements makes sense, and there have IMHO been huge improvements over C++ when it comes to metaprogramming. Incredibly powerful metaprogramming Getting into detail about that would take way too long, but the metaprogramming abilities of D must be mentioned. You can do pretty much anything at compile time, for example compiling regular expressions to make them run faster at runtime, or mixing in additional code from string constants. The template system is also very well thought out, and never caused me headaches as much as C++ sometimes manages to do. Built-in unit-test support Unit testing with D is really easy: you just add one or more unittest blocks to your code, in which you write your tests. When running the tests, the D compiler will collect the unittest blocks and build a test application out of them. The unittest scope is useful because you can keep the actual code and the tests close together, and it encourages writing tests and keeping them up to date. Additionally, D has built-in support for contract programming, which helps to further reduce bugs by validating input/output.
Safe D While D gives you the whole power of a low-level systems programming language, it also allows you to write safer code and have the compiler check for that, while still being able to use unsafe functions when needed. Unfortunately, @safe is not the default for functions, though. Separate operators for addition and concatenation D exclusively uses the + operator for addition, while the ~ operator is used for concatenation. This is likely a personal quirk, but I very much love that this distinction exists. It's nice for things like addition of two vectors vs. concatenation of vectors, and makes the whole language much more precise in its meaning. Optional garbage collector D has an optional garbage collector. Developing in D without the GC is currently a bit cumbersome, but these issues are being addressed. If you can live with a GC, though, having it active makes programming much easier. Built-in documentation generator This is almost a given for most new languages, but still something I want to mention: Ddoc is a standard tool to generate code documentation for D code, with a defined syntax for describing function parameters, classes, etc. It will even take the contents of a unittest scope to generate automatic examples for the usage of a function, which is pretty cool. Scope blocks The scope statement allows one to execute a bit of code before the function exits, when it failed, or when it was successful. This is incredibly useful when working with C code, where a free statement needs to be issued when the function is exited, or some arbitrary cleanup needs to be performed on error. Yes, we do have smart pointers in C++, and with some GCC/Clang extensions a similar feature in C too. But the scope concept in D is much more powerful. See Scope Guard Statement for details. Built-in syntax for parallel programming Working with threads is so much more fun in D compared to C! I recommend taking a look at the parallelism chapter of the Programming in D book.
Pure functions D allows marking functions as purely functional, which allows the compiler to do optimizations on them, e.g. cache their return value. See pure-functions. D is fast! D matches the speed of C++ on almost all occasions, so you won't lose performance when writing D code (that is, unless you have the GC run often in a threaded environment). Very active and friendly community The D community is very active and friendly; so far I have only had good experiences, and I basically came into the community asking some tough questions regarding distro integration and ABI stability of D. The D community is very enthusiastic about pushing D, and especially the metaprogramming features of D, to its limits, and consists of very knowledgeable people. Most discussion happens at the forums/newsgroups at What is bad about D? Half-proprietary reference compiler This is probably the biggest issue. Not because the proprietary compiler is bad per se, but because of the implications this has for the D ecosystem. For the reference D compiler, Digital Mars D (DMD), only the frontend is distributed under a free license (Boost), while the backend is proprietary. The FLOSS frontend is what the free compilers, LLVM D Compiler (LDC) and GNU D Compiler (GDC), are based on. But since DMD is the reference compiler, most features land there first, and the Phobos standard library and druntime are tuned to work with DMD first. Since major Linux distributions can't ship DMD, and the free compilers GDC and LDC lag behind DMD in terms of language, runtime, and standard-library compatibility, this creates a split world of code that compiles with LDC, GDC, or DMD, but never with all D compilers, due to it relying on features not yet in e.g. GDC's Phobos. Especially for Linux distributions, there is no way to say "use this compiler to get the best and latest D compatibility". Additionally, if people can't simply apt install latest-d, they are less likely to try the language.
This is probably mainly an issue on Linux, but since Linux is the place where web applications are usually written and where people are likely to try out new languages, it's really bad that the proprietary reference compiler is hurting D adoption in that way. That being said, I want to make clear that DMD is a great compiler, which is very fast and builds efficient code. I only criticise the fact that it is the language's reference compiler. UPDATE: To clarify the half-proprietary nature of the compiler, let me quote the D FAQ:
The front end for the dmd D compiler is open source. The back end for dmd is licensed from Symantec, and is not compatible with open-source licenses such as the GPL. Nonetheless, the complete source comes with the compiler, and all development takes place publically on github. Compilers using the DMD front end and the GCC and LLVM open source backends are also available. The runtime library is completely open source using the Boost License 1.0. The gdc and ldc D compilers are completely open sourced.
Phobos (standard library) is deprecating features too quickly This basically goes hand in hand with the compiler issue mentioned above. Each D compiler ships its own version of Phobos, which it was tested against. For GDC, which I used to compile my code due to LDC having bugs at that time, this means shipping with a very outdated copy of Phobos. Due to the rapid evolution of Phobos, the documentation of Phobos and the actual code I was working with were not always in sync, leading to many frustrating experiences. Furthermore, Phobos sometimes removes deprecated bits about a year after they have been deprecated. Together with the older-Phobos situation, you might find yourself in a place where a feature was dropped, but the cool replacement is not yet available. Or you are unable to import some 3rd-party code because it uses some deprecated-and-removed feature internally. Or you are unable to use other code, because it was developed with a D compiler shipping a newer Phobos. This is really annoying, and probably the biggest source of unhappiness I had while working with D; especially the documentation not matching the actual code is a bad experience for someone new to the language. Incomplete free compilers with varying degrees of maturity LDC and GDC have bugs, and for someone new to the language it's not clear which one to choose. Both LDC and GDC have their own issues at times, but they are rapidly getting better, and I only encountered some actual compiler bugs in LDC (GDC worked fine, but with an incredibly out-of-date Phobos). All these issues have been fixed in the meantime, but it was a frustrating experience. Some clear advice or explanation of which of the free compilers is to be preferred when you are new to D would be neat. For GDC in particular, being developed outside of the main GCC project is likely a problem, because distributors need to manually add it to their GCC packaging, instead of having it readily available.
I assume this is due to DRuntime/Phobos not being subject to the FSF CLA, but I can't actually say anything substantial about this issue. Debian adds GDC to its GCC packaging, but e.g. Fedora does not. No ABI compatibility D has a defined ABI; too bad that in reality, the compilers are not interoperable. A binary compiled with GDC can't call a library compiled with LDC or DMD. GDC actually doesn't even support building shared libraries yet. For distributions, this is quite terrible, because it means that there must be one default D compiler, without any exception, and that users also need to use that specific compiler to link against distribution-provided D libraries. The different runtimes per compiler complicate that problem further. The D package manager, dub, does not yet play well with distro packaging This is an issue that is important to me, since I want my software to be easily packageable by Linux distributions. The issues causing packaging to be hard are reported as dub issue #838 and issue #839, with quite positive feedback so far, so this might soon be solved. The GC is sometimes an issue The garbage collector in D is quite dated (according to their own docs) and is currently being reworked. While working on asgen, which is a program creating a large amount of interconnected data structures in a threaded environment, I realized that the GC significantly slows down the application when threads are used (it also seems to use the UNIX signals SIGUSR1 and SIGUSR2 to stop/resume threads, which I still find odd). Also, the GC performed poorly under memory pressure, which got asgen killed by the OOM killer on some more memory-constrained machines. Triggering a manual collection run once a large batch of these interconnected data structures was no longer needed solved this problem for most systems, but it would of course have been better not to need to give the GC any hints.
The stop-the-world behavior isn't a problem for asgen, but it might be for other applications. These issues are currently being worked on, with a GSoC project laying the foundation for further GC improvements. version is a reserved word Okay, that is admittedly a very tiny nitpick, but when developing an app which works with packages and versions, it's slightly annoying. The version keyword is used for conditional compilation, and needing to abbreviate it to ver in all parts of the code sucks a little (e.g. the Package interface can't have a property "version", but now has "ver" instead). The ecosystem is not (yet) mature In general it can be said that the D ecosystem, while existing for almost 9 years, is not yet that mature. There are various quirks you have to deal with when working with D code on Linux. It's never anything major; usually you can easily solve these issues and move on, but it's annoying to have these papercuts. This is not something which can be resolved by D itself; this point will solve itself as more people start to use D and D support in Linux distributions gets more polished. Conclusion I like working with D, and I consider it to be a great language; the quirks in its toolchain are not bad enough to prevent writing great things with it. At the moment, if I am not writing a shared library or something which uses much existing C++ code, I would prefer D for the task. If a garbage collector is a problem (e.g. for some real-time applications, or when the target architecture can't run a GC), I would not recommend using D. Rust seems to be the much better choice then. In any case, D's flat learning curve (for C/C++ people) paired with the smart choices taken in language design, the powerful metaprogramming, the rich standard library and the helpful community makes it great to try out and to develop software for scenarios where you would otherwise choose C++ or Java.
Quite honestly, I think D could be a great language for tasks where you would usually choose Python, Java or C++, and I am seriously considering replacing quite some Python code with D code. For very low-level stuff, C is IMHO still the better choice. As always, choosing the right programming language is only 50% technical aspects and 50% personal taste. UPDATE: To get some idea of D, check out the D tour on the new website.

1 October 2014

Keith Packard: chromium-dri3

Chromium (the browser) and DRI3 I got a note on IRC a week ago that Chromium was crashing with DRI3. The Google team working on Chromium eventually sent me a link to the bug report. That's secret Google stuff, so you won't be able to follow the link, even though it's a bug in a free software application when running on free software drivers. There's a bug report in the freedesktop bugzilla which looks the same to me. In both cases, the recommended fix was to switch from DRI3 back to DRI2. That's not exactly a great plan, given that DRI3 offers better security between GPU-using applications, which seems like a pretty nice thing to have when you're running random GL applications from the web. Chromium Sandboxing I'm not entirely sure how it works, but Chromium creates a process separate from the main browser engine to talk to the GPU. That process has very limited access to the operating system via some fancy library adventures. Presumably, the hope is that security bugs in the GL driver would be harder to leverage into a remote system exploit. Debugging in this environment is a bit tricky as you can't simply run chromium under gdb and expect to be able to set breakpoints in the GL driver. Instead, you have to run chromium with a magic flag which causes the GPU process to pause before loading the driver so you can connect to it with gdb and debug from there, along with a flag that lets you see crashes within the gpu process and the usual flag that causes chromium to ignore the GPU black list which seems to always include the Intel driver for one reason or another:
$ chromium --gpu-startup-dialog --disable-gpu-watchdog --ignore-gpu-blacklist
Once Chromium starts up, it will print out a message telling you to attach gdb to the GPU process and send that process a SIGUSR1 to continue it. Now you can happily debug and get a stack trace when the crash occurs. Locating the Bug The bug manifested with a segfault at the first access to a DRI3-allocated buffer within the application. We've seen this problem in the past; whenever buffer allocation fails for some reason, the driver ignores the problem and attempts to de-reference through the (NULL) buffer pointer, causing a segfault. In this case, Chromium called glClear, which tried (and failed) to allocate a back buffer causing the i965 driver to subsequently segfault. We should probably go fix the i965 driver to not segfault when buffer allocation fails, but that wouldn't provide a lot of additional information. What I have done is add some error messages in the DRI3 buffer allocation path which at least tell you why the buffer allocation failed. That patch has been merged to Mesa master, and should also get merged to the Mesa stable branch for the next stable release. Once I had added the error messages, it was pretty easy to see what happened:
$ chromium --ignore-gpu-blacklist
[10618:10643:0930/] After loading Root Certs, loaded==false: NSS error code: -8018
libGL: pci id for fd 12: 8086:0a16, driver i965
libGL: OpenDriver: trying /local-miki/src/mesa/mesa/lib/
libGL: Can't open configuration file /home/keithp/.drirc: Operation not permitted.
libGL: Can't open configuration file /home/keithp/.drirc: Operation not permitted.
libGL error: DRI3 Fence object allocation failure Operation not permitted
The first two errors were just the sandbox preventing Mesa from using my GL configuration file. I'm not sure how that's a security problem, but it shouldn't harm the driver much. The last error is where the problem lies. In Mesa, the DRI3 implementation uses a chunk of shared memory to hold a fence object that lets Mesa know when buffers are idle without using the X connection. That shared memory segment is allocated by creating a temporary file using the O_TMPFILE flag:
fd = open("/dev/shm", O_TMPFILE | O_RDWR | O_CLOEXEC | O_EXCL, 0666);
This call cannot fail as /dev/shm is used by glibc for shared memory objects, and must therefore be world writable on any glibc system. However, with the Chromium sandbox enabled, it returns EPERM. Running Without a Sandbox Now that the bug appears to be in the sandboxing code, we can re-test with the GPU sandbox disabled:
$ chromium --ignore-gpu-blacklist --disable-gpu-sandbox
And, indeed, without the sandbox getting in the way of allocating a shared memory segment, Chromium appears happy to use the Intel driver with DRI3. Final Thoughts I looked briefly at the Chromium sandbox code. It looks like it needs to know intimate details of the OpenGL implementation for every possible driver it runs on; it seems to contain a fixed list of all possible files and modes that the driver will pass to open(2). That seems incredibly fragile to me, especially when used in a general Linux desktop environment. Minor changes in how the GL driver operates can easily cause the browser to stop working.

5 October 2013

Rémi Vanicat: Key-transition

A recent discussion on debian-project remind me I have to do this:
Hash: SHA1,SHA256
I am transitioning GPG keys from an old 1024-bit DSA key to a new
4096-bit RSA key.  The old key will continue to be valid for some
time, but I prefer all new correspondance to be encrypted in the new
key, and will be making all signatures going forward with the new key.
This transition document is signed with both keys to validate the transition.
If you have signed my old key, I would appreciate signatures on my new
key as well, provided that your signing policy permits that without
reauthenticating me.
The old key, which I am transitioning away from, is:
   pub   1024D/9057B5D3 2002-02-07
         Key fingerprint = 7AA1 9755 336C 6D0B 8757  E393 B0E1 98D7 9057 B5D3
The new key, to which I am transitioning, is:
   pub   4096R/31ED8AEF 2009-05-08
         Key fingerprint = DE8F 92CD 16FA 1E5B A16E  E95E D265 C085 31ED 8AEF
To fetch the full new key from a public key server using GnuPG, run:
  gpg --keyserver --recv-key D265C08531ED8AEF
If you have already validated my old key, you can then validate that
the new key is signed by my old key:
  gpg --check-sigs D265C08531ED8AEF
If you then want to sign my new key, a simple and safe way to do that
is by using caff (shipped in Debian as part of the "signing-party"
package) as follows:
  caff D265C08531ED8AEF
Please contact me via e-mail at <> if you have any
questions about this document or this transition.
  Remi vanicat
Version: GnuPG v1.4.14 (GNU/Linux)
Here is the link to the .txt version for easier checking of signature.

13 August 2013

Kees Cook: TPM providing /dev/hwrng

A while ago, I added support for the TPM's pRNG to the rng-tools package in Ubuntu. Since then, Kent Yoder added TPM support directly into the kernel's /dev/hwrng device. This means there's no need to carry the patch in rng-tools any more, since I can use /dev/hwrng directly now:
# modprobe tpm-rng
# echo tpm-rng >> /etc/modules
# grep -v ^# /etc/default/rng-tools
# service rng-tools restart
And as before, once it's been running a while (or you send SIGUSR1 to rngd), you can see reporting in syslog:
# pkill -USR1 rngd
# tail -n 15 /var/log/syslog
Aug 13 09:51:01 linux rngd[39114]: stats: bits received from HRNG source: 260064
Aug 13 09:51:01 linux rngd[39114]: stats: bits sent to kernel pool: 216384
Aug 13 09:51:01 linux rngd[39114]: stats: entropy added to kernel pool: 216384
Aug 13 09:51:01 linux rngd[39114]: stats: FIPS 140-2 successes: 13
Aug 13 09:51:01 linux rngd[39114]: stats: FIPS 140-2 failures: 0
Aug 13 09:51:01 linux rngd[39114]: stats: FIPS 140-2(2001-10-10) Monobit: 0
Aug 13 09:51:01 linux rngd[39114]: stats: FIPS 140-2(2001-10-10) Poker: 0
Aug 13 09:51:01 linux rngd[39114]: stats: FIPS 140-2(2001-10-10) Runs: 0
Aug 13 09:51:01 linux rngd[39114]: stats: FIPS 140-2(2001-10-10) Long run: 0
Aug 13 09:51:01 linux rngd[39114]: stats: FIPS 140-2(2001-10-10) Continuous run: 0
Aug 13 09:51:01 linux rngd[39114]: stats: HRNG source speed: (min=10.433; avg=10.442; max=10.454)Kibits/s
Aug 13 09:51:01 linux rngd[39114]: stats: FIPS tests speed: (min=73.360; avg=75.504; max=86.305)Mibits/s
Aug 13 09:51:01 linux rngd[39114]: stats: Lowest ready-buffers level: 2
Aug 13 09:51:01 linux rngd[39114]: stats: Entropy starvations: 0
Aug 13 09:51:01 linux rngd[39114]: stats: Time spent starving for entropy: (min=0; avg=0.000; max=0)us
I'm pondering getting this running in Chrome OS too, but I want to make sure it doesn't suck too much battery.

2013, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

12 May 2011

Peter Van Eynde: Debian on kfreebsd weirdness

I've installed Debian sid in a FreeBSD 8.2 jail and I'm seeing this weird effect:
root@debian:~# aptitude upgrade
No packages will be installed, upgraded, or removed.
0 packages upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
Need to get 0 B of archives. After unpacking 0 B will be used.
User defined signal 1

This will randomly kill the normal aptitude gui, while other things like vim or dpkg-reconfigure using the 'dialog' gui will work normally.

Google is useless. strace doesn't work. ktrace gives
21131 100410 aptitude-curses CALL stat(0x9ef968,0x7fffffffe810)
21131 100410 aptitude-curses NAMI "/tmp/aptitude-root.21131:mvkyG0"
21131 100410 aptitude-curses STRU invalid record
21131 100410 aptitude-curses RET stat 0
21131 100410 aptitude-curses CALL open(0x9ef968,O_NONBLOCK,<unused>0)
21131 100410 aptitude-curses NAMI "/tmp/aptitude-root.21131:mvkyG0"
21131 100410 aptitude-curses RET open 3
21131 100410 aptitude-curses CALL fstat(0x3,0x7fffffffe810)
21131 100410 aptitude-curses STRU invalid record
21131 100410 aptitude-curses RET fstat 0
21131 100410 aptitude-curses CALL fcntl(0x3,<invalid=2>,FD_CLOEXEC)
21131 100410 aptitude-curses RET fcntl 0
21131 100410 aptitude-curses CALL getdents(0x3,0x1b9e630,0x20000)
21131 100410 aptitude-curses RET getdents 24/0x18
21131 100410 aptitude-curses CALL getdents(0x3,0x1b9e630,0x20000)
21131 100410 aptitude-curses RET getdents 0
21131 100410 aptitude-curses CALL close(0x3)
21131 100410 aptitude-curses RET close 0
21131 100410 aptitude-curses CALL rmdir(0x9ef968)
21131 100410 aptitude-curses NAMI "/tmp/aptitude-root.21131:mvkyG0"
21131 100410 aptitude-curses RET rmdir 0
21131 100410 aptitude-curses CALL write(0x28,0x7fffffffeb20,0x38)
21131 100410 aptitude-curses GIO fd 40 wrote 56 bytes
0x0000 4026 a000 0800 0000 0200 0000 0800 0000 0000 0000 0800 0000 0000 0000 0000 0000 2818 0803 0800 0000 a0f9 9d00 0000 0000 0100 0000 @&..............................(...................
0x0034 0000 0000 ....

21131 100410 aptitude-curses RET write 56/0x38
21131 100410 aptitude-curses CALL sigprocmask(SIG_SETMASK,0,0x7fffffffeaf0)
21131 100410 aptitude-curses RET sigprocmask 0
21131 100410 aptitude-curses PSIG SIGUSR1 SIG_DFL

Does anybody have an idea?

root@debian:~# uname -a
GNU/kFreeBSD debian.home 8.2-RELEASE FreeBSD 8.2-RELEASE #1: Wed Apr 6 09:52:09 CEST 2011 root@frost.local:/usr/obj/usr/src/sys/PEVANEYN x86_64 amd64 Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz GNU/kFreeBSD

This entry was originally posted at Please comment there using OpenID.

21 October 2010

Axel Beckert: New upstream versions of xrootconsole and keynav in Debian Experimental

I recently uploaded new upstream versions of two neat small X tools to Debian Experimental: Both packages introduce several new features, and I'd be happy if users of these packages in Debian Sid, or Debian Sid users curious about them, could test the versions in Debian Experimental. xrootconsole 0.6 + patches xrootconsole saw no love since 2006, with the last maintainer upload having been in 2002. Nevertheless it never got kicked out of Debian just because of this. The package had been forcibly orphaned just less than a year ago. So it's no big wonder that this new upstream version I packaged was released back in 2004. :-) But besides packaging a new upstream version, bumping Standards-Version and debhelper compatibility, fixing tons of lintian warnings and some bugs, I also added two patches which add new features not (yet) available upstream. Of course I informed upstream about these feature patches, but I haven't got any feedback yet. But I did get feedback from Julien Viard de Galbert, and he'll join me in packaging xrootconsole as co-maintainer. keynav 0.20101014.3067 I adopted the Debian package of keynav recently, subscribed to the keynav mailing list (well, it's a Google group), and got a nice welcome mail from the very friendly upstream developer Jordan Sissel, who offered to help with any keynav issues. I told him which features I'd like to see in keynav to better fit the setup where I'm using it. And just a few days later there was a new upstream release including both features I suggested, some more neat new features, and one bug fix. Jordan Sissel writes in the upstream changelog (emphasis and italic text by me): Both packages will be reuploaded to Debian Sid (Unstable) after the release of Debian Squeeze (currently testing).

3 July 2008

Craig Sanders: Introduction to Linux Signals 101

or ^C doesn’t kill processes, SIGINT does On the LUV mailing list recently, the perils of tty handling when booting with “init=/bin/bash” came up, and somebody asked why running ‘ping’ (without remembering to run stty or openvt first) will result in you having to reboot. i.e. “why doesn’t ^C work in emergency mode”? I wrote a fairly detailed, but comprehensible to a novice, answer and a friend suggested i should have written it as a blog post - which was the point of me setting up a blog in the first place. So here’s a slightly edited version of it: > Couldn’t you just open another console, run “top” and find the PID for
> ping and kill it? Or ps aux | grep ping and kill it? Is a reboot the
> only way to kill a ping where ^C is not available? there’s nothing special about ^C except that by long convention it is mapped to the interrupt signal aka intr aka SIGINT. So, ^C itself doesn’t kill a program, the fact that it generates an interrupt signal does. If you’ve rebooted for emergency maintenance (e.g. “init=/bin/bash” on the LILO or GRUB command line) then all you get is ONE instance of /bin/bash. Nothing else is running and nothing else has been run (because you’ve overridden the usual init with /bin/bash). No setup has been done on the terminal, and no other virtual terminals have been opened. The emergency environment is minimal (and deliberately so - you asked for it, you got it). It’s up to you to do whatever is needed to set up the environment so that you can do the required maintenance/repair work. So if you run ping *before* remembering to set up the terminal (‘stty sane’) then ^C doesn’t work because ^C doesn’t generate SIGINT, so there’s no way of telling ping to quit, and if you haven’t opened another virtual terminal with openvt (aka “open” - it got renamed some time ago) then there is no other shell available to run top or ps or kill. BTW, it’s not just ping. You’ll see the same problem in the same environment with ANY program that runs continuously until it gets a signal to terminate it. ping’s just the most obvious and common example. Most programs, especially simple programs like ping, don’t have a ^C handler. They don’t need one. They have a signal handler which, amongst other things, handles SIGINT however it’s generated - it can be generated by pressing ^C in a normal tty environment, or by sending the program a SIGINT (e.g. “kill -2”). That, BTW, is a core difference between a SIGTERM (just plain ‘kill’ or ‘kill -15’) and SIGKILL (‘kill -9’).
SIGTERM is usually handled by the program by setting up a signal handler for it, and it voluntarily terminates when it receives the signal (usually cleaning up, closing files, etc. before it does so). If the program hasn’t set up a handler for SIGTERM then the default action is to just terminate. A program can set up a handler which does nothing on SIGTERM, which effectively just ignores the signal. SIGKILL is handled by the kernel and it just kills the process immediately - it cannot be blocked or intercepted by the program.

(By common convention, SIGHUP or ‘kill -1’ tells a long-running daemon process to just reload its config files. It’s also generated when the tty that the program is running on is terminated - e.g. you log out or the modem hangs up or the ssh session dies, hence the name “HUP” aka “HANGUP”. That’s why you need to use something like nohup or screen if you want a program to continue running after you log out.)

In a way, ‘kill’ is a bit of a misnomer. It’s actually a generic tool for sending signals to running processes, but the default action for most of those signals is to terminate the program. Here’s how to list the signals which can be sent:
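You can see the difference from any shell. Here’s a minimal sketch (assuming a POSIX shell, where a trap plays the role of a signal handler): SIGTERM can be caught, while SIGKILL always terminates the process and reports as 128 + the signal number.

```shell
#!/bin/sh
# SIGTERM can be caught by a handler (here, a shell trap); SIGKILL
# cannot be caught, blocked or ignored - the kernel acts on it directly.
caught=no
trap 'caught=yes' TERM          # install a SIGTERM handler
kill -TERM $$                   # plain "kill" on ourselves: the trap runs
echo "caught=$caught"           # still alive - the handler intercepted it
trap - TERM                     # restore the default action

sleep 60 &                      # a victim with no special handling
kill -KILL $!                   # 'kill -9': no handler can intercept this
kstatus=0
wait $! || kstatus=$?           # the shell reports 128 + signal number
echo "exit status: $kstatus"    # 137 = 128 + SIGKILL(9)
```

Note that if the trap on TERM were removed, the same `kill -TERM $$` would simply terminate the shell with status 143 (128 + 15) - the default action mentioned above.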
$ kill -l
See the man pages for init(8), kill(1), and especially signal(7) for more info on this topic.

Syndicated from Craig Sanders’ Errata: Tech Notes and Miscellaneous Thoughts.
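And to drive the original point home - that ^C is nothing more than a tty shortcut for SIGINT - here’s a tiny sketch: a trap on INT fires identically whether the signal came from the tty driver or from kill(1).

```shell
#!/bin/sh
# ^C only "works" because the tty driver translates it into SIGINT;
# kill -2 delivers the very same signal with no terminal involved.
got_int=no
trap 'got_int=yes' INT
kill -2 $$              # -2 is SIGINT, exactly what pressing ^C would send
echo "got_int=$got_int"
```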

26 February 2008

Vincent Fourmond: Today's dirty command-line trick: progress statistics for dd

Who has already been using dd for an endless data transfer without really knowing what is happening? That is rather annoying. Sure, when the target is a normal file, you can always use
watch ls -l target
But that won't tell you the rate of transfer, and it won't work if the target is a special file (device, pipe, ...). Fortunately, dd accepts the USR1 signal to print out some small statistics... Well, combining that with the watch trick gives this:
watch killall -USR1 dd
And there you go: dd is now regularly printing out statistics. Pretty neat, isn't it?
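The same trick works one-shot, without watch. A small sketch (assuming GNU dd, which prints its records in/out and throughput to stderr on SIGUSR1 without interrupting the copy; the file name is just an example):

```shell
#!/bin/sh
# Start a transfer in the background, then poke it with SIGUSR1.
dd if=/dev/zero of=/dev/null bs=1M count=20000 2>/tmp/dd-stats.txt &
ddpid=$!
sleep 1                           # give dd time to install its handler
kill -USR1 $ddpid 2>/dev/null || true   # ask for a progress report
wait $ddpid
stats=$(cat /tmp/dd-stats.txt)    # progress report plus final summary
echo "$stats"
rm -f /tmp/dd-stats.txt
```

Beware that `killall -USR1 dd` signals every running dd, which is usually what you want on a quiet machine but not on a busy one.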

17 February 2008

Andrew Pollock: [tech] Bless you, Ted Ts'o

From the e2fsck man page:
       The following signals have the following effect when sent to e2fsck.

       SIGUSR1
              This signal causes e2fsck to start displaying a completion bar.
              (See discussion of the -C option.)
I was just lamenting to myself how e2fsck should default to showing a progress bar for those times when you check a multi-terabyte filesystem, and would really like to know how long it's going to take so you can decide to go to bed or not - and you've only learned about the -C option after you've already started e2fsck. I'd been thinking to myself how grouse it would be if e2fsck would let you know how far it had gotten on receipt of some signal of some sort. Ted Ts'o, I salute you. Update: Too bad I can't say the same thing for resize2fs.
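For the record, the incantation from another terminal looks something like this (a sketch assuming a single running e2fsck; per the same man page, SIGUSR2 stops the bar again):

```shell
# from another terminal, while the long-running fsck is going:
kill -USR1 $(pidof e2fsck)   # start displaying the completion bar
kill -USR2 $(pidof e2fsck)   # stop displaying it again
```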

31 May 2006

David Pashley: Init(1) Causing zombies

Had an interesting problem today with one of our servers at work. The first thing I noticed, yesterday, was that an upgrade of Apache2 didn't complete properly because /etc/init.d/apache2 stop didn't return. Killing it and starting apache allowed the upgrade to finish. I noticed there was a zombie process but didn't think too much of it. Then this morning I got an email from the MD saying that various internal services' websites were down (webmail, wiki etc). My manager noticed that it was due to logrotate hanging, again because restarting apache had hung. Looking at the server I noticed a few more zombie processes.

One thing I'd noticed was that all these processes had reparented themselves under init, and a quick web search later confirmed that init(1) should be reaping these processes. I thought maybe restarting init would clear the zombies. I tried running telinit q to reread the config file, but that returned an error message about timing out on the /dev/initctl named pipe. I checked that the file existed and everything looked fine. The next thing I checked was the other end of the named pipe by running lsof -p 1. This showed that init had /dev/console rather than /dev/initctl as fd 0. I tried running kill -SIGHUP 1, but that didn't do anything. Then I tried kill -SIGUSR1 1, but that didn't do anything either. I checked the console, but there wasn't enough scrollback to see the system booting, and decided to schedule a reboot for this evening.

Rebooting the server presented me with an interesting challenge. Normally the shutdown command signals to init to change to runlevel 0 or 6 to shut down or reboot, using /dev/initctl. Of course init wasn't listening on this file, so that was out. Sending it a SIGINT signal (the same signal init gets on ctrl-alt-delete) had no response. Obviously telinit 0 wasn't going to work either. I decided to start shutting services down manually with the help of Brett Parker. The idea was to stop all non-essential services, unexport the nfs exports, remount the disks read-only and then either use sysrq or a hardware reset. Unfortunately someone accidentally ran /etc/init.d/halt stop, hanging the server, but he is suffering from a bad cold today so I forgive him.

The server restarted without a hitch (thank god for ext3) and running lsof -p 1 showed init having /dev/initctl open. I don't know what happened to init after the last reboot on Monday, but a reboot seemed to fix it. Odd bug, but thankfully it was a nice simple fix. I could have spent the evening debugging init. :)
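The diagnostic sequence above boils down to a few commands; a sketch for a sysvinit-era system (needs root, and of course none of this applies where /dev/initctl doesn't exist):

```shell
# is init's control fifo there, and is init actually holding it open?
ls -l /dev/initctl          # should be a named pipe (p in the mode bits)
lsof -p 1 | grep initctl    # healthy sysvinit has it open as fd 0
telinit q                   # ask init to re-read its config via the fifo
```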