Search Results: "ewt"

29 May 2017

Enrico Zini: Jessie live on UEFI systems

According to the Debian Wiki, you can't boot a Debian Live based on Jessie on a UEFI system:
UEFI support in live images At this point, UEFI support exists only in Debian's installation images. The accompanying live images do not have support for UEFI boot, as the live-build software used to generate them still does not include it. Hopefully the debian-live developers will add this important feature soon.
Some people really needed it, though, so I kept looking. Here's a script that takes a Jessie Debian Live .iso file and the device name for a USB pendrive, and gives you a pendrive that boots on UEFI:
#!/bin/sh
# License: do what you want but it's not my fault, I told you not to.
set -ue
ISO=${1:?"Usage: $0 file.iso usbdev"}
DEV=${2:?"Usage: $0 file.iso usbdev"}
parted -s $DEV mklabel gpt mkpart primary fat32 1 100%
mkfs.vfat ${DEV}1
mount ${DEV}1 /mnt
bsdtar -C /mnt -xf $ISO
mkdir -p /mnt/efi/boot
# Shell.efi comes from https://svn.code.sf.net/p/edk2/code/trunk/edk2/ShellBinPkg/UefiShell/X64/
cp Shell.efi /mnt/efi/boot/Bootx64.efi
echo 'live\vmlinuz initrd=live\initrd.img append boot=live components' > /mnt/startup.nsh
umount /mnt
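A hypothetical invocation, assuming the script is saved as make-uefi-live.sh and the pendrive shows up as /dev/sdb (both names here are illustrative, not Enrico's):
sudo sh make-uefi-live.sh debian-live-8.7.1-amd64-standard.iso /dev/sdb
Double-check the device name first (e.g. with lsblk): the script relabels and reformats the whole device.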
Only use it if you really need it, though: Stretch will support this out of the box, and it's coming soon.

2 May 2017

Kees Cook: security things in Linux v4.11

Previously: v4.10. Here's a quick summary of some of the interesting security things in this week's v4.11 release of the Linux kernel:

refcount_t infrastructure Building on the efforts of Elena Reshetova, Hans Liljestrand, and David Windsor to port PaX's PAX_REFCOUNT protection, Peter Zijlstra implemented a new kernel API for reference counting with the addition of the refcount_t type. Until now, all reference counters were implemented in the kernel using the atomic_t type, but it has a wide and general-purpose API that offers no reasonable way to provide protection against reference counter overflow vulnerabilities. With a dedicated type, a specialized API can be designed so that reference counting can be sanity-checked and provide a way to block overflows. With 2016 alone seeing at least a couple of public exploitable reference counting vulnerabilities (e.g. CVE-2016-0728, CVE-2016-4558), this is going to be a welcome addition to the kernel. The arduous task of converting all the atomic_t reference counters to refcount_t will continue for a while to come.

CONFIG_DEBUG_RODATA renamed to CONFIG_STRICT_KERNEL_RWX Laura Abbott landed changes to rename the kernel memory protection feature. The protection hadn't been "debug" for over a decade, and it covers all kernel memory sections, not just "rodata". Getting it consolidated under the top-level arch Kconfig file also brings some sanity to what was a per-architecture config, and signals that this is a fundamental kernel protection that needs to be enabled on all architectures.

read-only usermodehelper A common way for attackers to escape confinement is by rewriting the user-mode helper sysctls (e.g. /proc/sys/kernel/modprobe) to run something of their choosing in the init namespace. To reduce attack surface within the kernel, Greg KH introduced CONFIG_STATIC_USERMODEHELPER, which switches all user-mode helper binaries to a single read-only path (which defaults to /sbin/usermode-helper). Userspace will need to support this with a new helper tool that can demultiplex the kernel request to a set of known binaries.

seccomp coredumps Mike Frysinger noticed that it wasn't possible to get coredumps out of processes killed by seccomp, which could make debugging frustrating, especially for automated crash dump analysis tools. In keeping with the existing documentation for SIGSYS, which says a coredump should be generated, he added support to dump core on seccomp SECCOMP_RET_KILL results.

structleak plugin Ported from PaX, I landed the structleak plugin, which enforces that any structure containing a __user annotation is fully initialized to 0 so that stack content exposures of these kinds of structures are entirely eliminated from the kernel. This was originally designed to stop a specific vulnerability, and it will now continue to block similar exposures.

ASLR entropy sysctl on MIPS Matt Redfearn implemented the ASLR entropy sysctl for MIPS, letting userspace choose to crank up the entropy used for memory layouts.

NX brk on powerpc Denys Vlasenko fixed a long-standing bug where the kernel made assumptions about ELF memory layouts and defaulted the brk section on powerpc to be executable. Now it's not, and that'll keep the process heap from being abused.

That's it for now; please let me know if I missed anything. The v4.12 merge window is open!

2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

10 April 2017

Daniel Pocock: If Alan Turing was born today, would he be a Muslim?

Alan Turing's name and his work are well known to anybody with a theoretical grounding in computer science. Turing developed his theories well before anybody invented file sharing, overclocking or mass surveillance. In fact, Turing was largely working in the absence of any computers at all: the transistor was only invented in 1947, and the microchip, the critical innovation that has made computing both affordable and portable, only came in 1960, six years after Turing's death. To this day, the Turing Test remains a well-known challenge in the field of Artificial Intelligence. The most prestigious prize in computing, the A.M. Turing Award from the ACM, equivalent to the Nobel Prize in other fields of endeavour, is named in Turing's honour. (This year's award went to another British scientist, Sir Tim Berners-Lee, inventor of the World Wide Web.)
Potentially far more people know of Alan Turing for his groundbreaking work at Bletchley Park and the impact it had on cracking the Nazis' Enigma machines during World War 2, giving the allies an advantage against Hitler. While in his lifetime Turing exposed the secret communications of the Nazis, in his death he exposed something manifestly repugnant about his own society. Turing's challenges with his sexuality (or Britain's challenge with it) are just as well documented as his greatest scientific achievements. The 2014 movie The Imitation Game tells Turing's story, bringing together the themes from his professional and personal life. Had Turing chosen to flee British persecution by going abroad, he would be a refugee in the same sense as any person who crosses the seas to reach Europe today to avoid persecution elsewhere.

Please prove me wrong In March, I blogged about the problem of racism that plagues Britain today. While some may have felt the tone of the blog was quite strong, I was in no way pleased to find my position affirmed by the events that occurred in the two days after the blog appeared. Two days and two more human beings (both immigrants and both refugees) subjected to abhorrent and unnecessary acts of abuse in Great Britain. Both cases appear to be fuelled directly by the evil that has been oozing out of number 10 Downing Street since they decided to have a referendum on "Brexit". What stands out about these latest crimes is not that they occurred (this type of thing has been going on for months now) but certain contrasts between their circumstances and, to a lesser extent, the fact that they occurred immediately after Theresa May formalized Britain's departure from the EU. One of the victims was almost beaten to death by a street gang, while the other was abused by men wearing uniforms. One was only a child, while the other is a mature adult who has been in the UK almost three decades, completely assimilated into British life, working and paying taxes. Both were doing nothing out of the ordinary at the time the abuse occurred: one had engaged in a conversation at a bus stop, the other was on a routine visit to a Government office. There is no evidence that either of them had done anything to provoke or invite the abhorrent treatment meted out to them by the followers of Theresa May and Nigel Farage.

The first victim, on 30 March, was Stojan Jankovic, a refugee from Yugoslavia who has been in the UK for 26 years. He had a routine meeting at an immigration department office where he was ambushed, thrown in the back of a van and sent to rot in a prison cell by Theresa May's gestapo. On Friday, 31 March, it was Reker Ahmed, a 17-year-old Kurdish-Iranian beaten to the brink of death by a crowd in south London. One of the more remarkable facts to emerge about these two cases is that while Stojan Jankovic was basically locked up for no reason at all, the street thugs who the police apprehended for the assault on Ahmed were kept in a cell for less than 48 hours and released again on bail. While the harmless and innocent Jankovic was eventually released after a massive public outcry, he spent more time locked up than that gang of violent criminals who beat Reker Ahmed. In other words, Theresa May and Nigel Farage's Britain has more concern for the liberty of violent criminals than for somebody like Jankovic, who has been working and paying taxes in the UK since before any of those street thugs were born.
A deeper insight into Turing's fate With gay marriage having been legal in the UK for a number of years now, the rainbow flag flying at the Tate and Sir Elton John achieving a knighthood, it becomes difficult for people to relate to the world in which Turing and many other victims were collectively classified by their sexuality, systematically persecuted by the state and ultimately died far sooner than they should have. (Turing was only 41 when he died.) In fact, the cruel and brutal forces that ripped Turing apart (and countless other victims too) haven't dissipated at all; they have simply shifted their target. The slanderous comments insinuating that immigrants "steal" jobs or that Islam is about terrorism are eerily reminiscent of suggestions that gay men abduct young boys or work as Soviet spies. None of these lies has any basis in fact, but repeat them often enough in certain types of newspaper and these ideas spread like weeds. In an ironic twist, Turing's groundbreaking work at Bletchley Park was founded on the contributions of Polish mathematicians; their country having been Hitler's first casualty, they were themselves both immigrants and refugees in Britain. Today, under the Theresa May/Nigel Farage leadership, Polish citizens have been subjected to regular vilification by the media and some have even been killed in the street. It is said that a picture is worth a thousand words. When you compare these two pieces of propaganda, a 1963 article in the Sunday Mirror advising people "How to spot a possible homo" and a UK Government billboard encouraging people to be on the lookout for people who look different, could you imagine the same type of small-minded and power-hungry tyrants crafting them, singling out a minority so as to keep the public's attention in the wrong place?
Many people have noticed that these latest UK Government posters portray foreigners, Muslims and basically anybody who is not white using a range of characteristics found in anti-Semitic propaganda from the Third Reich: Do the people who create such propaganda appear to have any concern whatsoever for the people they hurt? How would Alan Turing have felt when he encountered propaganda like that from the Sunday Mirror? Do posters like these encourage us to judge people by their gifts in science, the arts or sporting prowess, or do they encourage us to lump them all together based on their physical appearance? It is a basic expectation of scientific methodology that when you repeat the same experiment, you should get the same result. What type of experiment are Theresa May and Nigel Farage conducting, and what type of result would you expect?

Playing ping-pong with children If anybody has any doubt that this evil comes from the top, take a moment to contemplate the 3,000 children who were baited with the promise of resettlement from the Calais "jungle" camp into the UK under the Dubs amendment. When French authorities closed the "jungle" in 2016, the children were lured out of the camp and left with nowhere to go as Theresa May and French authorities played ping-pong with them. Given that the UK parliament had already agreed they should be accepted, was there any reason for Theresa May to dig her heels in and make these children suffer? Or was she just trying to prove her credentials as somebody who can bastardize migrants just the way Nigel Farage would do it?

How do British politicians really view migrants? Parliamentarian Keith Vaz, former chair of the Home Affairs Select Committee (responsible for security, crime, prostitution and similar things) was exposed with young men from eastern Europe, encouraging them to take drugs before he ordered them "Take your shirt off. I'm going to attack you." How many British MPs see foreigners this way? Next time you are groped at an airport security checkpoint, remember it was people like Keith Vaz and his committee who oversee those abuses, writing among other things that "The wider introduction of full-body scanners is a welcome development". No need to "take your shirt off" when these machines can look through it as easily as they can look through your children's underwear. According to the World Health Organization, HIV/AIDS kills as many people as the September 11 attacks every single day. Keith Vaz apparently had no concern for the possibility he might spread this disease any further: the media reported he doesn't use any protection in his extra-marital relationships. While Britain's new management continue to round up foreigners like Stojan Jankovic who have done nothing wrong, they chose not to prosecute Keith Vaz for his antics with drugs and prostitution.

Who is Britain's next Alan Turing? Britain's next Alan Turing may not be a homosexual. He or she may have been a child turned away by Theresa May's spat with the French at Calais, a migrant bundled into a deportation van by the gestapo (who are just following orders) or perhaps somebody of Muslim appearance who is set upon by thugs in the street who have been energized by Nigel Farage. If you still have any uncertainty about what Brexit really means, this is it: a country that denies itself the opportunity to be great by subjecting itself to rule under the "divide and conquer" mantra of the colonial era.
Throughout the centuries, Britain has produced some of the most brilliant scientists of their time. Newton, Darwin and Hawking are just some of those who are even more prominent than Turing, household names around the world. One can only wonder what the history books will have to say about Theresa May and Nigel Farage, however. Next time you see a British policeman accosting a Muslim, whether it is at an airport, in a shopping centre, keeping Manchester United souvenirs or simply taking a photograph, spare a thought for Alan Turing and the era when homosexuals were their target of choice.

5 March 2017

Julien Viard de Galbert: Raspberry Pi 3 as desktop computer

For about six months I've been using a Raspberry Pi 3 as my desktop computer at home. The overall experience is fine, but I had to make a few adjustments.
The first was to use KeePass, the second to compile gcc for cross-compilation (i.e. to use buildroot).
KeePass I'm using KeePass + KeeFox to maintain my passwords on the various websites (and avoid reusing the same one everywhere).
For this to work on the Raspberry Pi, one needs to use mono from Xamarin:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian wheezy main"   sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install mono-runtime
The install instructions come from mono-project, and the initial pointer was found on the raspberrypi forums, stackoverflow and Benny Michielsen's blog.
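With the runtime installed, KeePass itself is just a .NET executable run under mono; assuming KeePass.exe is in the current directory, launching it looks like this (a sketch, and in practice more of mono may be needed, as noted next):
mono KeePass.exe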
And for some plugins to work I think I had to apt-get install mono-complete. Compiling gcc Using the Raspberry Pi 3, I recovered an old project based on buildroot for the Raspberry Pi 2, and just building the tool-chain gave me a few issues. First, the compilation would stop during mpn compilation (mpn is part of GMP):
 /usr/bin/gcc -std=gnu99 -c -DHAVE_CONFIG_H -I. -I.. -D__GMP_WITHIN_GMP -I.. -DOPERATION_divrem_1 -O2 -Wa,--noexecstack tmp-divrem_1.s -fPIC -DPIC -o .libs/divrem_1.o
tmp-divrem_1.s: Assembler messages:
tmp-divrem_1.s:129: Error: selected processor does not support ARM mode  mls r1,r4,r8,r11'
tmp-divrem_1.s:145: Error: selected processor does not support ARM mode  mls r1,r4,r8,r11'
tmp-divrem_1.s:158: Error: selected processor does not support ARM mode  mls r1,r4,r8,r11'
tmp-divrem_1.s:175: Error: selected processor does not support ARM mode  mls r1,r4,r3,r8'
tmp-divrem_1.s:209: Error: selected processor does not support ARM mode  mls r11,r4,r12,r3'
Makefile:768: recipe for target 'divrem_1.lo' failed
make[]: *** [divrem_1.lo] Error 1
I Googled the error and found this post on the Raspberry Pi forum, which was not really helpful.
But I finally found an explanation on Jan Hrach's page on the subject.
The raspbian distribution is still optimized for the first Raspberry Pi, so the compiler is basically limited to the old Raspberry Pi 1 instructions, while I was compiling gcc for a Raspberry Pi 2 and needed the extra ones. The proposed solution is basically to update raspbian to debian proper. While this is a neat idea, I still wanted to get some raspbian-specific packages (like the kernel) but wanted to be sure that everything else comes from debian. So I did some apt pinning. I found that pinning alone is not sufficient, so when updating sources.list with plain debian Jessie, make sure to add these lines before the raspbian lines:
# add official debian jessie (real armhf gcc)
deb http://ftp.fr.debian.org/debian/ jessie main contrib non-free
deb-src http://ftp.fr.debian.org/debian/ jessie main
deb http://security.debian.org/ jessie/updates main
deb-src http://security.debian.org/ jessie/updates main
deb http://ftp.fr.debian.org/debian/ jessie-updates main
deb-src http://ftp.fr.debian.org/debian/ jessie-updates main
Then run the following to get the debian gpg keys, but don t yet upgrade your system:
apt update
apt install debian-archive-keyring
Now, let s add the pinning.
First, if you were using APT::Default-Release "stable"; in your apt.conf (as I did), remove it: it does not mix well with the fine-grained pinning we are about to implement. Then, fill your /etc/apt/preferences file with the following:
# Debian
Package: *
Pin: release o=Debian,a=stable,n=jessie
Pin-Priority: 700
# Raspbian
Package: *
Pin: release o=Raspbian,a=stable,n=jessie
Pin-Priority: 600
Package: *
Pin: release o=Raspberry Pi Foundation,a=stable,n=jessie
Pin-Priority: 600
# Mono
Package: *
Pin: release v=7.0,o=Xamarin,a=stable,n=wheezy,l=Xamarin-Stable,c=main
Pin-Priority: 800
Note: You can use apt-cache policy (no parameter) to debug pinning.
The pinning above is mainly based on the origin field (o=) of the repositories.
Finally you can upgrade your system:
apt update 
apt-cache policy gcc 
rm /var/cache/apt/archives/* 
apt upgrade 
apt-cache policy gcc
Note: Removing the cache ensures we download the packages from debian: raspbian uses the exact same package names, but we now know those packages are not compiled with a real armhf tool-chain. Second issue with gcc The build then stopped on "recipe for target 's-attrtab' failed". There are many references to this one on the web, and it was easy: it just needs more memory, so I added some swap on the external SSD I was already using to work on buildroot (a sketch of that is below). Conclusion That's it for today, not bad considering my last post was more than 3 years ago.
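For reference, adding the swap went something like this (a sketch; the path and size are illustrative, not a record of the exact commands):
sudo dd if=/dev/zero of=/mnt/ssd/swapfile bs=1M count=1024
sudo chmod 600 /mnt/ssd/swapfile
sudo mkswap /mnt/ssd/swapfile
sudo swapon /mnt/ssd/swapfile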

1 March 2017

Paul Wise: FLOSS Activities February 2017

Changes

Issues

Review

Administration
  • Debian: do the samhain dance, ask for new local contacts at one site, ask local admins to reset one machine, powercycle 2 dead machines, redirect 1 user to the support channels, redirect 1 user to a service admin, redirect 1 spam reporter to the right mechanisms, investigate mail logs for a missing bug report, ping bugs-search.d.o service admin about moving off glinka and remove data, poke cdimage-search.d.o service admin about moving off glinka, update a cron job on denis.d.o for the rename of letsencrypt.sh to dehydrated, debug planet.d.o issue and remove stray cron job lock file, check if ftp is used on a couple of security.d.o mirrors, discuss storage upgrade for LeaseWeb for snapshot.d.o/deriv.d.n/etc, investigate SSD SMART error and ignore the unknown attribute, ask 9 users to restart their processes, investigate apt-get update failure in nagios, swapoff/swapon a swap file to drain it, restart/disable some failed services, help restore the backup server, debug stretch /dev/log issue.
  • Debian QA: deploy merged PTS/tracker patches.
  • Debian wiki: answer 1 IP-blocked VPN user, pinged 1 user on IRC about their bouncing mail, disabled 4 accounts due to bouncing mail, redirect 1 person to documentation/lists, whitelist 5 email addresses, forward 1 password reset token, killed 1 spammer account, reverted 1 spammer edit.
  • Debian mentors: security upgrades, check which email a user signed up with
  • Openmoko: security upgrades, daemon restarts, reboot

Debian derivatives
  • Turned off the census cron job because it ran out of disk space
  • Update Armbian sources.list
  • Ping siduction folks about updating their sources.list
  • Start a discussion about DebConf17
  • Notify the derivatives based on jessie or older that stretch is frozen
  • Invite Rebellin Linux (again)

Sponsors The libesedb Debian backport was sponsored by my employer. All other work was done on a volunteer basis.

2 February 2017

Paul Wise: FLOSS Activities January 2017

Changes

Issues

Review

Administration
  • Debian: reboot 1 non-responsive VM, redirect 2 users to support channels, redirect 1 contributor to xkb upstream, redirect 1 potential contributor, redirect 1 bug reporter to mirror team, ping 7 folks about restarting processes with upgraded libs, manually restart the sectracker process due to upgraded libs, restart the package tracker process due to upgraded libs, investigate failures connecting to the XMPP service, investigate /dev/shm issue on abel.d.o, clean up after rename of the fedmsg group.
  • Debian mentors: lintian/security updates & reboot
  • Debian packages: deploy 2 contributions to the live server
  • Debian wiki: unblacklist 1 IP address, whitelist 10 email addresses, disable 18 accounts with bouncing email, update email for 2 accounts with bouncing email, reported 1 Debian member as MIA, redirect 1 user to support channels, add 4 domains to the whitelist.
  • Reproducible builds: rescheduled Debian pyxplot:amd64/unstable for themill.
  • Openmoko: security updates & reboots.

Debian derivatives
  • Send the annual activity ping mail.
  • Happy new year messages on IRC, forward to the list.
  • Note that SerbianLinux does not provide source packages.
  • Expand URL shortener on SerbianLinux page.
  • Invite PelicanHPC, Netrunner, DietPi, Hamara Linux (on IRC), BitKey to the census.
  • Add research publications link to the census template
  • Fix Symbiosis sources.list
  • Enquired about SalentOS downtime
  • Fixed and removed some 404 BlankOn links (blog, English homepage)
  • Fixed changes to AstraLinux sources.list
  • Welcome Netrunner to the census

Sponsors I renewed my support of Software Freedom Conservancy. The openchange 1:2.2-6+deb8u1 upload was sponsored by my employer. All other work was done on a volunteer basis.

11 January 2017

Reproducible builds folks: Reproducible Builds: week 89 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday January 1 and Saturday January 7 2017: GSoC and Outreachy updates Toolchain development Packages reviewed and fixed, and bugs filed Chris Lamb: Dhole: Reviews of unreproducible packages 13 package reviews have been added, 4 have been updated and 6 have been removed this week, adding to our knowledge about identified issues. 2 issue types have been added/updated: Upstreaming of reproducibility fixes Merged: Opened: Weekly QA work During our reproducibility testing, the following FTBFS bugs have been detected and reported by: diffoscope development diffoscope 67 was uploaded to unstable by Chris Lamb. It included contributions from:
[ Chris Lamb ]
* Optimisations:
  - Avoid multiple iterations over archive by unpacking once for an ~8X
    runtime optimisation.
  - Avoid unnecessary splitting and interpolating for a ~20X optimisation
    when writing --text output.
  - Avoid expensive diff regex parsing until we need it, speeding up diff
    parsing by 2X.
  - Alias expensive Config() in diff parsing lookup for a 10% optimisation.
* Progress bar:
  - Show filenames, ELF sections, etc. in progress bar.
  - Emit JSON on the status file descriptor output instead of a custom
    format.
* Logging:
  - Use more-Pythonic logging functions and output based on __name__, etc.
  - Use Debian-style "I:", "D:" log level format modifier.
  - Only print milliseconds in output, not microseconds.
  - Print version in debug output so that saved debug outputs can stand
    alone as bug reports.
* Profiling:
  - Also report the total number of method calls, not just the total time.
  - Report on the total wall clock taken to execute diffoscope, including
    cleanup.
* Tidying:
  - Rename "NonExisting" -> "Missing".
  - Entirely rework diffoscope.comparators module, splitting as many separate
    concerns into a different utility package, tidying imports, etc.
  - Split diffoscope.difference into diffoscope.diff, etc.
  - Update file references in debian/copyright post module reorganisation.
  - Many other cleanups, etc.
* Misc:
  - Clarify comment regarding why we call python3(1) directly. Thanks to Jérémy
    Bobbio <lunar@debian.org>.
  - Raise a clearer error if trying to use --html-dir on a file.
  - Fix --output-empty when files are identical and no outputs specified.
[ Reiner Herrmann ]
* Extend .apk recognition regex to also match zip archives (Closes: #849638)
[ Mattia Rizzolo ]
* Follow the rename of the Debian package "python-jsbeautifier" to
  "jsbeautifier".
[ siamezzze ]
* Fixed no newline being classified as order-like difference.
reprotest development reprotest 0.5 was uploaded to unstable by Chris Lamb. It included contributions from:
[ Ximin Luo ]
* Stop advertising variations that we're not actually varying.
  That is: domain_host, shell, user_group.
* Fix auto-presets in the case of a file in the current directory.
* Allow disabling build-path variations. (Closes: #833284)
* Add a faketime variation, with NO_FAKE_STAT=1 to avoid messing with
  various buildsystems. This is on by default; if it causes your builds
  to mess up please do file a bug report.
* Add a --store-dir option to save artifacts.
Other contributions (not yet uploaded): reproducible-builds.org website development tests.reproducible-builds.org Misc. This week's edition was written by Chris Lamb, Holger Levsen and Vagrant Cascadian, reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

9 December 2016

John Goerzen: Giant Concrete Arrows, Old Maps, and Fascinated Kids

Let me set a scene for you. Two children, ages 7 and 10, are jostling for position. There's a little pushing and shoving to get the best view. This is pretty typical for siblings this age. But what, you may wonder, are they trying to see? A TV? Video game? No. Jacob and Oliver were in a library, trying to see a 98-year-old map of the property owners in Township 23, range 1 East, Harvey County, Kansas. And they were super excited about it, somewhat to the astonishment of the research librarian, who I am sure is more used to children jostling for position over the DVDs in the youth section than poring over maps in the non-circulating historical archives! All this started with giant concrete arrows in the middle of nowhere. Nearly a century ago, the US government installed a series of arrows on the ground in Kansas. These were part of a primitive air navigation system that led to the first transcontinental airmail service. Every so often, people stumble upon these abandoned arrows and there is a big discussion online. Even Snopes has had to verify their authenticity (verdict: true). Entire websites are devoted to tracking and locating the remnants of these arrows. And as one of the early air mail routes went through Kansas, every so often people find these arrows around here. I got the idea that it would be fun to replicate a journey along the old routes. Maybe I'd spot a few old arrows and such. So I started collecting old maps: a Contract Airmail Route #34 (CAM 34) map from 1927, aviation sectionals from 1933 and 1946, etc. I noticed an odd thing on these maps: the Newton, KS airport was on the other side of the city from its present location, sometimes even several miles outside the city. What was going on?
(1927 Airway Map)
(1946 Wichita sectional)
So one foggy morning, I explained my puzzlement to the boys. I highlighted all the mysteries: were these maps correct? Were there really two Newton airports at one time? How many airports were there, and where were they? Why did they move? What was the story behind them? And I offered them the chance to be history detectives with me. And oh my goodness, were they ever excited! We had some information from a very helpful person at the Harvey County Historical Museum (thanks Kris!), so we suspected one airport at least was established in 1927. We also had a description of its location, though given in terms of township maps. So the boys and I made the short drive over to the museum. We reviewed their property maps, though they were all a little older than the time period we needed. We looked through books and at pictures. Oliver pored over a railroad map of Newton from a century ago, fascinated. Jacob was excited to discover on one map that there used to be a train track down the middle of Main Street! I was interested that the present Newton Airport was once known as Wirt Field, rather to my surprise. I somehow suspect most 2nd and 4th graders spend a lot less excited time on the research floor! Then on to the Newton Public Library to see if they'd have anything more, and that's when the map that produced all the excitement came out. It, by itself, didn't answer the question, but by piecing together a number of pieces of information (newspaper stories, information from the museum, and the maps) we were able to come up with a pretty good explanation, much to their excitement.

Apparently, a man named Tangeman owned a golf course (the "golf links", according to the paper), and around 1927 the city of Newton purchased it, because of all the planes that were landing there. They turned it into a real airport. Later, they bought land east of the city and moved the airport there. However, during World War II, the Navy took over that location, so they built a third airport a few miles west of the city, but moved back to the current east location after the Navy returned that field to them.

Of course, a project like this just opens up all sorts of extra questions: why isn't it called Wirt Field anymore? What's the story of Frank Wirt? What led the Navy to take over Newton's airport? Why did planes start landing on the golf course? Where precisely was the west airport located? How long was it there? (I found an aerial photo from 1956 that looks like it may have a plane in that general area, but it seems later than I'd have expected.) So now I have the boys interested in going to the courthouse with me to research the property records out there. Jacob is continually astounded that we are discovering things that aren't in Wikipedia, and also excited that he could be the one to add them. To be continued, apparently!

5 December 2016

Norbert Preining: Debian/TeX Live 2016.20161130-1

As we are moving closer to the Debian release freeze, I am shipping out a new set of packages. Nothing spectacular here, just the regular updates and a security fix that was only reported internally. Add sugar and a few minor bug fixes.
I have been silent for quite some time: busy at my new job, busy with my little monster, writing papers, caring for visitors, living. I have quite a lot of things I want to write about, but not enough time, so just this very short note for now. Enjoy. New packages awesomebox, baskervillef, forest-quickstart, gofonts, iscram, karnaugh-map, tikz-optics, tikzpeople, unicode-bidi. Updated packages acmart, algorithms, aomart, apa, apa6, appendix, apxproof, arabluatex, asymptote, background, bangorexam, beamer, beebe, biblatex-gb7714-2015, biblatex-mla, biblatex-morenames, bibtexperllibs, bidi, bookcover, bxjalipsum, bxjscls, c90, cals, cell, cm, cmap, cmextra, context, cooking-units, ctex, cyrillic, dirtree, ekaia, enotez, errata, euler, exercises, fira, fonts-churchslavonic, formation-latex-ul, german, glossaries, graphics, handout, hustthesis, hyphen-base, ipaex, japanese, jfontmaps, kpathsea, l3build, l3experimental, l3kernel, l3packages, latex2e-help-texinfo-fr, layouts, listofitems, lshort-german, manfnt, mathastext, mcf2graph, media9, mflogo, ms, multirow, newpx, newtx, nlctdoc, notes, patch, pdfscreen, phonenumbers, platex, ptex, quran, readarray, reledmac, shapes, showexpl, siunitx, talk, tcolorbox, tetex, tex4ht, texlive-en, texlive-scripts, texworks, tikz-dependency, toptesi, tpslifonts, tracklang, tugboat, tugboat-plain, units, updmap-map, uplatex, uspace, wadalab, xecjk, xellipsis, xepersian, xint.

3 November 2016

Norbert Preining: Debian/TeX Live 2016.20161103-1

This month's update falls on a national holiday in Japan. My recent start as a normal company employee in Japan doesn't leave me enough time during normal days to work on Debian, so things have to wait for holidays. There have been a few notable changes in the current packages, and above all I wanted to fix an RC bug, and on the way also fixed several other (sometimes rather old) bugs.
From the list of new packages I want to pick apxproof: I have written something myself for one of my rather long papers (with proofs about 60pp), where at times I had to factor out the proofs into an appendix. I did this my own way, but I would have preferred to have a nice package! Another interesting change is the upstream merge of collection-mathextra (which translated to the Debian package texlive-math-extra) and collection-science (Debian: texlive-science) into a new collection, collection-mathscience. Since introducing new packages and phasing out old ones is generally a pain in Debian, I decided to deviate from the upstream naming convention and use texlive-science for the new collection-mathscience. In the end, Mathematics is the most important science of all! Finally also a word about removals: several ConTeXt packages have been removed due to the fact that they are outdated. These removals will find their way into an update of the Debian ConTeXt package in the near future. The TeX Live packages lost voss-mathmode, which was retracted by the author for various reasons. He is working on an updated version that will hopefully reappear in both TeX Live and Debian in the near future. Well, that's it for now. Here now the full list with links. Enjoy. New packages apxproof, bangorexam, biblatex-gb7714-2015, biblatex-lni, biblatex-sbl, context-cmscbf, context-cmttbf, context-inifile, context-layout, delimset, latex2nemeth, latexbangla, latex-papersize, ling-macros, notex-bst, platex-tools, testidx, uppunctlm, wtref, xcolor-material. Removed packages voss-mathmode. Updated packages apa6, autoaligne, babel-german, biblatex-abnt, biblatex-anonymous, biblatex-apa, biblatex-manuscripts-philology, biblatex-nature, biblatex-realauthor, bibtex, bidi, boondox, bxcjkjatype, chickenize, churchslavonic, cjk-gs-integrate, context-filter, cooking-units, ctex, denisbdoc, dvips, europasscv, fixme, glossaries, gzt, handout, imakeidx, ipaex-type1, jsclasses, jslectureplanner, kpathsea, l3build, l3experimental, l3kernel, l3packages, latexindent, latexmk, listofitems, luatexja, marginnote, mcf2graph, minted, multirow, nameauth, newpx, newtx, noto, nucleardata, optidef, overlays, pdflatexpicscale, pst-eucl, reledmac, repere, scanpages, semantic-markup, tableaux, tcolorbox, tetex, texlive-scripts, ticket, todonotes, tracklang, tudscr, turabian-formatting, updmap-map, uspace, visualtikz, xassoccnt, xecjk, yathesis.

28 October 2016

Alessio Treglia: The logical contradictions of the Universe

Ouroboros

Is Erwin Schrödinger's wave function (which did in the atomic and subatomic world an operation altogether similar to the one performed by Newton in the macroscopic world) an objective reality or just subjective knowledge? Physicists, philosophers and epistemologists have debated this matter at length. In 1960, the theoretical physicist Eugene Wigner proposed that the observer's consciousness is the dividing line that triggers the collapse of the wave function[1], and this theory was later taken up and developed in recent years. "The rules of quantum mechanics are correct but there is only one system which may be treated with quantum mechanics, namely the entire material world. There exist external observers which cannot be treated within quantum mechanics, namely human (and perhaps animal) minds, which perform measurements on the brain causing wave function collapse"[2]. The English mathematical physicist and philosopher of science Roger Penrose developed the hypothesis called Orch-OR (Orchestrated objective reduction), according to which consciousness originates from processes within neurons rather than from the connections between neurons (the conventional view). The mechanism is believed to be a quantum physical process called objective reduction which is orchestrated by the molecular structures of the microtubules of brain cells (which constitute the cytoskeleton of the cells themselves). Together with the physician Stuart Hameroff, Penrose has suggested a direct relationship between the quantum vibrations of microtubules and the formation of consciousness.

<Read More [by Fabio Marzocca]>

26 October 2016

Joachim Breitner: Showcasing Applicative

My plan for this week's lecture of the CIS 194 Haskell course at the University of Pennsylvania is to dwell a bit on the concepts of Functor, Applicative and Monad, and to highlight the value of the Applicative abstraction. I quite like the example that I came up with, so I want to share it here. In the interest of long-term archival and stand-alone presentation, I include all the material in this post.1

Imports In case you want to follow along, start with these imports:
import Data.Char
import Data.Maybe
import Data.List
import System.Environment
import System.IO
import System.Exit

The parser The starting point for this exercise is a fairly standard parser-combinator monad, which happens to be the result of the students' homework from last week:
newtype Parser a = P (String -> Maybe (a, String))
runParser :: Parser t -> String -> Maybe (t, String)
runParser (P p) = p
parse :: Parser a -> String -> Maybe a
parse p input = case runParser p input of
    Just (result, "") -> Just result
    _ -> Nothing -- handles both no result and leftover input
noParserP :: Parser a
noParserP = P (\_ -> Nothing)
pureParserP :: a -> Parser a
pureParserP x = P (\input -> Just (x,input))
instance Functor Parser where
    fmap f p = P $ \input -> do
	(x, rest) <- runParser p input
	return (f x, rest)
instance Applicative Parser where
    pure = pureParserP
    p1 <*> p2 = P $ \input -> do
        (f, rest1) <- runParser p1 input
        (x, rest2) <- runParser p2 rest1
        return (f x, rest2)
instance Monad Parser where
    return = pure
    p1 >>= k = P $ \input -> do
        (x, rest1) <- runParser p1 input
        runParser (k x) rest1
anyCharP :: Parser Char
anyCharP = P $ \input -> case input of
    (c:rest) -> Just (c, rest)
    []       -> Nothing
charP :: Char -> Parser ()
charP c = do
    c' <- anyCharP
    if c == c' then return ()
               else noParserP
anyCharButP :: Char -> Parser Char
anyCharButP c = do
    c' <- anyCharP
    if c /= c' then return c'
               else noParserP
letterOrDigitP :: Parser Char
letterOrDigitP = do
    c <- anyCharP
    if isAlphaNum c then return c else noParserP
orElseP :: Parser a -> Parser a -> Parser a
orElseP p1 p2 = P $ \input -> case runParser p1 input of
    Just r -> Just r
    Nothing -> runParser p2 input
manyP :: Parser a -> Parser [a]
manyP p = (pure (:) <*> p <*> manyP p) `orElseP` pure []
many1P :: Parser a -> Parser [a]
many1P p = pure (:) <*> p <*> manyP p
sepByP :: Parser a -> Parser () -> Parser [a]
sepByP p1 p2 = (pure (:) <*> p1 <*> (manyP (p2 *> p1))) `orElseP` pure []
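Before building anything bigger, a quick sanity check of these combinators (a GHCi session sketch I added; the outputs follow directly from the definitions above):
*Main> runParser anyCharP "abc"
Just ('a',"bc")
*Main> parse (many1P letterOrDigitP) "ab1"
Just "ab1"
*Main> parse (many1P letterOrDigitP) "ab,"
Nothing
The last result is Nothing because parse insists on consuming the entire input, and the trailing comma is left over.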
A parser using this library for, for example, CSV files could take this form:
parseCSVP :: Parser [[String]]
parseCSVP = manyP parseLine
  where
    parseLine = parseCell `sepByP` charP ',' <* charP '\n'
    parseCell = do
        charP '"'
        content <- manyP (anyCharButP '"')
        charP '"'
        return content
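To see it in action (a GHCi sketch with an input I made up):
*Main> parse parseCSVP "\"a\",\"b\"\n"
Just [["a","b"]]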

We want EBNF Often when we write a parser for a file format, we might also want to have a formal specification of the format. A common form for such a specification is EBNF. This might look as follows, for a CSV file:
cell = '"',  not-quote , '"';
line = (cell,  ',', cell    ''), newline;
csv  =  line ;
It is straightforward to create a Haskell data type to represent an EBNF syntax description. Here is a simple EBNF library (data type and pretty-printer) for your convenience:
data RHS
  = Terminal String
  | NonTerminal String
  | Choice RHS RHS
  | Sequence RHS RHS
  | Optional RHS
  | Repetition RHS
  deriving (Show, Eq)
ppRHS :: RHS -> String
ppRHS = go 0
  where
    go _ (Terminal s)     = surround "'" "'" $ concatMap quote s
    go _ (NonTerminal s)  = s
    go a (Choice x1 x2)   = p a 1 $ go 1 x1 ++ " | " ++ go 1 x2
    go a (Sequence x1 x2) = p a 2 $ go 2 x1 ++ ", "  ++ go 2 x2
    go _ (Optional x)     = surround "[" "]" $ go 0 x
    go _ (Repetition x)   = surround "{" "}" $ go 0 x
    surround c1 c2 x = c1 ++ x ++ c2
    p a n | a > n     = surround "(" ")"
          | otherwise = id
    quote '\'' = "\\'"
    quote '\\' = "\\\\"
    quote c    = [c]
type Production = (String, RHS)
type BNF = [Production]
ppBNF :: BNF -> String
ppBNF = unlines . map (\(i,rhs) -> i ++ " = " ++ ppRHS rhs ++ ";")
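As a quick illustration (my own example, not from the lecture), the cell production from the EBNF above can be encoded and pretty-printed like this:
*Main> putStr $ ppBNF [("cell", Sequence (Terminal "\"") (Sequence (Repetition (NonTerminal "not-quote")) (Terminal "\"")))]
cell = '"', {not-quote}, '"';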

Code to produce EBNF We had a good time writing combinators that create complex parsers from primitive pieces. Let us do the same for EBNF grammars. We could simply work on the RHS type directly, but we can do something more nifty: we create a data type that keeps track, via a phantom type parameter, of which Haskell type the given EBNF syntax is the specification of:
newtype Grammar a = G RHS
ppGrammar :: Grammar a -> String
ppGrammar (G rhs) = ppRHS rhs
So a value of type Grammar t is a description of the textual representation of the Haskell type t. Here is one simple example:
anyCharG :: Grammar Char
anyCharG = G (NonTerminal "char")
Here is another one. This one does not describe any interesting Haskell type, but is useful when spelling out the special characters in the syntax described by the grammar:
charG :: Char -> Grammar ()
charG c = G (Terminal [c])
A combinator that creates new grammar from two existing grammars:
orElseG :: Grammar a -> Grammar a -> Grammar a
orElseG (G rhs1) (G rhs2) = G (Choice rhs1 rhs2)
We want the convenience of our well-known type classes in order to combine these values some more:
instance Functor Grammar where
    fmap _ (G rhs) = G rhs
instance Applicative Grammar where
    pure x = G (Terminal "")
    (G rhs1) <*> (G rhs2) = G (Sequence rhs1 rhs2)
Note how the Functor instance does not actually use the function. How should it? There are no values inside a Grammar! We cannot define a Monad instance for Grammar: we would start with (G rhs1) >>= k = …, but there is simply no way of getting a value of type a that we can feed to k. So we will do without a Monad instance. This is interesting, and we will come back to that later. Like with the parser, we can now begin to build on the primitive examples to build more complicated combinators:
manyG :: Grammar a -> Grammar [a]
manyG p = (pure (:) <*> p <*> manyG p) `orElseG` pure []
many1G :: Grammar a -> Grammar [a]
many1G p = pure (:) <*> p <*> manyG p
sepByG :: Grammar a -> Grammar () -> Grammar [a]
sepByG p1 p2 = ((:) <$> p1 <*> (manyG (p2 *> p1))) `orElseG` pure []
Let us run a small example:
dottedWordsG :: Grammar [String]
dottedWordsG = many1G (manyG anyCharG <* charG '.')
*Main> putStrLn $ ppGrammar dottedWordsG
'', ('', char, ('', char, ('', char, ('', char, ('', char, ('',  
Oh my, that is not good. Looks like the recursion in manyG does not work well, so we need to avoid that. But anyway, we want to be explicit in the EBNF grammars about where something can be repeated, so let us just make many a primitive:
manyG :: Grammar a -> Grammar [a]
manyG (G rhs) = G (Repetition rhs)
With this definition, we already get a simple grammar for dottedWordsG:
*Main> putStrLn $ ppGrammar dottedWordsG
'', {char}, '.', {{char}, '.'}
This already looks like a proper EBNF grammar. One thing that is not nice about it is that there is an empty string ('') in a sequence. We do not want that. Why is it there in the first place? Because our Applicative instance is not lawful! Remember that pure id <*> g == g should hold. One way to achieve that is to improve the Applicative instance to optimize this case away:
instance Applicative Grammar where
    pure x = G (Terminal "")
    G (Terminal "") <*> G rhs2 = G rhs2
    G rhs1 <*> G (Terminal "") = G rhs1
    (G rhs1) <*> (G rhs2) = G (Sequence rhs1 rhs2)
	 
Now we get what we want:
*Main> putStrLn $ ppGrammar dottedWordsG
{char}, '.', {{char}, '.'}
Remember our parser for CSV files above? Let me repeat it here, this time using only Applicative combinators, i.e. avoiding (>>=), (>>), return and do-notation:
parseCSVP :: Parser [[String]]
parseCSVP = manyP parseLine
  where
    parseLine = parseCell `sepByP` charP ',' <* charP '\n'
    parseCell = charP '"' *> manyP (anyCharButP '"') <* charP '"'
And now we try to rewrite the code to produce Grammar instead of Parser. This is straightforward with the exception of anyCharButP. The parser code for that is inherently monadic, and we just do not have a Monad instance. So we work around the issue by making that a primitive grammar, i.e. introducing a non-terminal in the EBNF without a production rule, pretty much like we did for anyCharG:
primitiveG :: String -> Grammar a
primitiveG s = G (NonTerminal s)
parseCSVG :: Grammar [[String]]
parseCSVG = manyG parseLine
  where
    parseLine = parseCell `sepByG` charG ',' <* charG '\n'
    parseCell = charG '"' *> manyG (primitiveG "not-quote") <* charG '"'
Of course the parse names are not quite right any more, but let us just leave that for now. Here is the result:
*Main> putStrLn $ ppGrammar parseCSVG
 ('"',  not-quote , '"',  ',', '"',  not-quote , '"'    ''), '
' 
The line break is weird. We do not really want newlines in the grammar. So let us make that primitive as well, and replace charG '\n' with newlineG:
newlineG :: Grammar ()
newlineG = primitiveG "newline"
Now we get
*Main> putStrLn $ ppGrammar parseCSVG
 ('"',  not-quote , '"',  ',', '"',  not-quote , '"'    ''), newline 
which is nice and correct, but still not quite the easily readable EBNF that we saw further up.

Code to produce EBNF, with productions We currently let our grammars produce only the right-hand side of one EBNF production, but really, we want to produce a RHS that may refer to other productions. So let us change the type accordingly:
newtype Grammar a = G (BNF, RHS)
runGrammer :: String -> Grammar a -> BNF
runGrammer main (G (prods, rhs)) = prods ++ [(main, rhs)]
ppGrammar :: String -> Grammar a -> String
ppGrammar main g = ppBNF $ runGrammer main g
Now we have to adjust all our primitive combinators (but not the derived ones!):
charG :: Char -> Grammar ()
charG c = G ([], Terminal [c])
anyCharG :: Grammar Char
anyCharG = G ([], NonTerminal "char")
manyG :: Grammar a -> Grammar [a]
manyG (G (prods, rhs)) = G (prods, Repetition rhs)
mergeProds :: [Production] -> [Production] -> [Production]
mergeProds prods1 prods2 = nub $ prods1 ++ prods2
orElseG :: Grammar a -> Grammar a -> Grammar a
orElseG (G (prods1, rhs1)) (G (prods2, rhs2))
    = G (mergeProds prods1 prods2, Choice rhs1 rhs2)
instance Functor Grammar where
    fmap _ (G bnf) = G bnf
instance Applicative Grammar where
    pure x = G ([], Terminal "")
    G (prods1, Terminal "") <*> G (prods2, rhs2)
        = G (mergeProds prods1 prods2, rhs2)
    G (prods1, rhs1) <*> G (prods2, Terminal "")
        = G (mergeProds prods1 prods2, rhs1)
    G (prods1, rhs1) <*> G (prods2, rhs2)
        = G (mergeProds prods1 prods2, Sequence rhs1 rhs2)
primitiveG :: String -> Grammar a
primitiveG s = G ([], NonTerminal s)
The use of nub when combining productions removes duplicates that might be used in different parts of the grammar. Not efficient, but good enough for now. Did we gain anything? Not yet:
*Main> putStr $ ppGrammar "csv" (parseCSVG)
csv =  ('"',  not-quote , '"',  ',', '"',  not-quote , '"'    ''), newline ;
But we can now introduce a function that lets us tell the system where to give names to a piece of grammar:
nonTerminalG :: String -> Grammar a -> Grammar a
nonTerminalG name (G (prods, rhs))
  = G (prods ++ [(name, rhs)], NonTerminal name)
Ample use of this in parseCSVG yields the desired result:
parseCSVG :: Grammar [[String]]
parseCSVG = manyG parseLine
  where
    parseLine = nonTerminalG "line" $
        parseCell `sepByG` charG ',' <* newlineG
    parseCell = nonTerminalG "cell" $
        charG '"' *> manyG (primitiveG "not-quote") <* charG '"'
*Main> putStr $ ppGrammar "csv" (parseCSVG)
cell = '"',  not-quote , '"';
line = (cell,  ',', cell    ''), newline;
csv =  line ;
This is great!

Unifying parsing and grammar-generating Note how similar parseCSVG and parseCSVP are! Would it not be great if we could implement that functionality only once, and get both a parser and a grammar description out of it? This way, the two would never be out of sync! And surely this must be possible. The tool to reach for is of course to define a type class that abstracts over the parts where Parser and Grammar differ. So we have to identify all functions that are primitive in one of the two worlds, and turn them into type class methods. This includes char and orElse. It includes many, too: although manyP is not primitive, manyG is. It also includes nonTerminal, which does not exist in the world of parsers (yet), but we need it for the grammars. The primitiveG function is tricky. We use it in grammars when the code that we might use while parsing is not expressible as a grammar. So the solution is to let it take two arguments: a String, used as a descriptive non-terminal in a grammar, and a Parser a, used in the parsing code. Finally, the type classes that we expect, Applicative (and thus Functor), are added as constraints on our type class:
class Applicative f => Descr f where
    char :: Char -> f ()
    many :: f a -> f [a]
    orElse :: f a -> f a -> f a
    primitive :: String -> Parser a -> f a
    nonTerminal :: String -> f a -> f a
The instances are easily written:
instance Descr Parser where
    char = charP
    many = manyP
    orElse = orElseP
    primitive _ p = p
    nonTerminal _ p = p
instance Descr Grammar where
    char = charG
    many = manyG
    orElse = orElseG
    primitive s _ = primitiveG s
    nonTerminal s g = nonTerminalG s g
And we can now take the derived definitions, of which so far we had two copies, and define them once and for all:
many1 :: Descr f => f a -> f [a]
many1 p = pure (:) <*> p <*> many p
anyChar :: Descr f => f Char
anyChar = primitive "char" anyCharP
dottedWords :: Descr f => f [String]
dottedWords = many1 (many anyChar <* char '.')
sepBy :: Descr f => f a -> f () -> f [a]
sepBy p1 p2 = ((:) <$> p1 <*> (many (p2 *> p1))) `orElse` pure []
newline :: Descr f => f ()
newline = primitive "newline" (charP '\n')
And thus we now have our CSV parser/grammar generator:
parseCSV :: Descr f => f [[String]]
parseCSV = many parseLine
  where
    parseLine = nonTerminal "line" $
        parseCell `sepBy` char ',' <* newline
    parseCell = nonTerminal "cell" $
        char '"' *> many (primitive "not-quote" (anyCharButP '"')) <* char '"'
We can now use this definition both to parse and to generate grammars:
*Main> putStr $ ppGrammar2 "csv" (parseCSV)
cell = '"',  not-quote , '"';
line = (cell,  ',', cell    ''), newline;
csv =  line ;
*Main> parse parseCSV "\"ab\",\"cd\"\n\"\",\"de\"\n\n"
Just [["ab","cd"],["","de"],[]]

The INI file parser and grammar As a final exercise, let us transform the INI file parser into a combined thing. Here is the parser (another artifact of last week's homework), again using applicative style2:
parseINIP :: Parser INIFile
parseINIP = many1P parseSection
  where
    parseSection =
        (,) <$  charP '['
            <*> parseIdent
            <*  charP ']'
            <*  charP '\n'
            <*> (catMaybes <$> manyP parseLine)
    parseIdent = many1P letterOrDigitP
    parseLine = parseDecl `orElseP` parseComment `orElseP` parseEmpty
    parseDecl = Just <$> (
        (,) <$> parseIdent
            <*  manyP (charP ' ')
            <*  charP '='
            <*  manyP (charP ' ')
            <*> many1P (anyCharButP '\n')
            <*  charP '\n')
    parseComment =
        Nothing <$ charP '#'
                <* many1P (anyCharButP '\n')
                <* charP '\n'
    parseEmpty = Nothing <$ charP '\n'
Transforming that to a generic description is quite straightforward. We use primitive again to wrap letterOrDigitP:
descrINI :: Descr f => f INIFile
descrINI = many1 parseSection
  where
    parseSection =
        (,) <$  char '['
            <*> parseIdent
            <*  char ']'
            <*  newline
            <*> (catMaybes <$> many parseLine)
    parseIdent = many1 (primitive "alphanum" letterOrDigitP)
    parseLine = parseDecl `orElse` parseComment `orElse` parseEmpty
    parseDecl = Just <$> (
        (,) <$> parseIdent
            <*  many (char ' ')
            <*  char '='
            <*  many (char ' ')
            <*> many1 (primitive "non-newline" (anyCharButP '\n'))
	    <*  newline)
    parseComment =
        Nothing <$ char '#'
                <* many1 (primitive "non-newline" (anyCharButP '\n'))
		<* newline
    parseEmpty = Nothing <$ newline
This yields this not very helpful grammar (abbreviated here):
*Main> putStr $ ppGrammar2 "ini" descrINI
ini = '[', alphanum, {alphanum}, ']', newline, {alphanum, {alphanum}, {' '}, …
But with a few uses of nonTerminal, we get something really nice:
descrINI :: Descr f => f INIFile
descrINI = many1 parseSection
  where
    parseSection = nonTerminal "section" $
        (,) <$  char '['
            <*> parseIdent
            <*  char ']'
            <*  newline
            <*> (catMaybes <$> many parseLine)
    parseIdent = nonTerminal "identifier" $
        many1 (primitive "alphanum" letterOrDigitP)
    parseLine = nonTerminal "line" $
        parseDecl `orElse` parseComment `orElse` parseEmpty
    parseDecl = nonTerminal "declaration" $ Just <$> (
        (,) <$> parseIdent
            <*  spaces
            <*  char '='
            <*  spaces
            <*> remainder)
    parseComment = nonTerminal "comment" $
        Nothing <$ char '#' <* remainder
    remainder = nonTerminal "line-remainder" $
        many1 (primitive "non-newline" (anyCharButP '\n')) <* newline
    parseEmpty = Nothing <$ newline
    spaces = nonTerminal "spaces" $ many (char ' ')
*Main> putStr $ ppGrammar "ini" descrINI
identifier = alphanum, {alphanum};
spaces = {' '};
line-remainder = non-newline, {non-newline}, newline;
declaration = identifier, spaces, '=', spaces, line-remainder;
comment = '#', line-remainder;
line = declaration | comment | newline;
section = '[', identifier, ']', newline, {line};
ini = section, {section};

Recursion (variant 1) What if we want to write a parser/grammar-generator that is able to generate the following grammar, which describes terms that are additions and multiplications of natural numbers:
const = digit, {digit};
spaces = {' ' | newline};
atom = const | '(', spaces, expr, spaces, ')', spaces;
mult = atom, {spaces, '*', spaces, atom}, spaces;
plus = mult, {spaces, '+', spaces, mult}, spaces;
expr = plus;
The production of expr is recursive (via plus, mult, atom). We have seen above that simply defining a Grammar a recursively does not go well. One solution is to add a new combinator for explicit recursion, which replaces nonTerminal as a class method:
class Applicative f => Descr f where
    …
    recNonTerminal :: String -> (f a -> f a) -> f a

instance Descr Parser where
    …
    recNonTerminal _ p = let r = p r in r

instance Descr Grammar where
    …
    recNonTerminal = recNonTerminalG
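-- Editor's aside: the Parser instance above is just a fixed point;
-- equivalently, recNonTerminal _ = Data.Function.fix. Parsers need no
-- names for recursion, so the definition simply ties the knot on itself.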
recNonTerminalG :: String -> (Grammar a -> Grammar a) -> Grammar a
recNonTerminalG name f =
    let G (prods, rhs) = f (G ([], NonTerminal name))
    in G (prods ++ [(name, rhs)], NonTerminal name)
nonTerminal :: Descr f => String -> f a -> f a
nonTerminal name p = recNonTerminal name (const p)
runGrammer :: String -> Grammar a -> BNF
runGrammer main (G (prods, NonTerminal nt)) | main == nt = prods
runGrammer main (G (prods, rhs)) = prods ++ [(main, rhs)]
The change in runGrammer avoids adding a pointless expr = expr production to the output. This lets us define a parser/grammar-generator for the arithmetic expressions given above:
data Expr = Plus Expr Expr | Mult Expr Expr | Const Integer
    deriving Show
mkPlus :: Expr -> [Expr] -> Expr
mkPlus = foldl Plus
mkMult :: Expr -> [Expr] -> Expr
mkMult = foldl Mult
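-- The left fold makes both operators associate to the left; for example
-- (my illustration): mkPlus (Const 1) [Const 2, Const 3]
--   == Plus (Plus (Const 1) (Const 2)) (Const 3)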
parseExpr :: Descr f => f Expr
parseExpr = recNonTerminal "expr" $ \ exp ->
    ePlus exp
ePlus :: Descr f => f Expr -> f Expr
ePlus exp = nonTerminal "plus" $
    mkPlus <$> eMult exp
           <*> many (spaces *> char '+' *> spaces *> eMult exp)
           <*  spaces
eMult :: Descr f => f Expr -> f Expr
eMult exp = nonTerminal "mult" $
    mkMult <$> eAtom exp
           <*> many (spaces *> char '*' *> spaces *> eAtom exp)
           <*  spaces
eAtom :: Descr f => f Expr -> f Expr
eAtom exp = nonTerminal "atom" $
    aConst `orElse` eParens exp
aConst :: Descr f => f Expr
aConst = nonTerminal "const" $ Const . read <$> many1 digit
eParens :: Descr f => f a -> f a
eParens inner =
    id <$  char '('
       <*  spaces
       <*> inner
       <*  spaces
       <*  char ')'
       <*  spaces
And indeed, this works:
*Main> putStr $ ppGrammar "expr" parseExpr
const = digit, {digit};
spaces = {' ' | newline};
atom = const | '(', spaces, expr, spaces, ')', spaces;
mult = atom, {spaces, '*', spaces, atom}, spaces;
plus = mult, {spaces, '+', spaces, mult}, spaces;
expr = plus;

Recursion (variant 2) Interestingly, there is another solution to this problem, which avoids introducing recNonTerminal and explicitly passing around the recursive call (i.e. the exp in the example). To implement that we have to adjust our Grammar type as follows:
newtype Grammar a = G ([String] -> (BNF, RHS))
The idea is that the list of strings contains those non-terminals that we are currently defining. So in nonTerminal, we check whether the non-terminal to be introduced is currently in the process of being defined, and if so, simply ignore the body. This way, the recursion is stopped automatically:
nonTerminalG :: String -> (Grammar a) -> Grammar a
nonTerminalG name (G g) = G $ \seen ->
    if name `elem` seen
    then ([], NonTerminal name)
    else let (prods, rhs) = g (name : seen)
         in (prods ++ [(name, rhs)], NonTerminal name)
After adjusting the other primitives of Grammar (including the Functor and Applicative instances) to type-check again, and with the Descr class once more having nonTerminal as a method, we observe that this parser/grammar generator for expressions, with genuine recursion, now works:
parseExp :: Descr f => f Expr
parseExp = nonTerminal "expr" $
    ePlus
ePlus :: Descr f => f Expr
ePlus = nonTerminal "plus" $
    mkPlus <$> eMult
           <*> many (spaces *> char '+' *> spaces *> eMult)
           <*  spaces
eMult :: Descr f => f Expr
eMult = nonTerminal "mult" $
    mkMult <$> eAtom
           <*> many (spaces *> char '*' *> spaces *> eAtom)
           <*  spaces
eAtom :: Descr f => f Expr
eAtom = nonTerminal "atom" $
    aConst `orElse` eParens parseExp
Note that the recursion is only going to work if there is at least one call to nonTerminal somewhere around the recursive calls. We still cannot implement many as naively as above.
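To see why, consider this naive definition (my sketch, not from the original post): as a Parser it would work, but in the Grammar interpretation nothing truncates the unfolding, so the grammar generator would not terminate on any description built from it.
-- Fine as a Parser, but the Grammar interpretation unfolds this recursion
-- forever: no nonTerminal ever interrupts it, so ppGrammar would diverge.
manyNaive :: Descr f => f a -> f [a]
manyNaive p = ((:) <$> p <*> manyNaive p) `orElse` pure []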

Homework If you want to play more with this: The homework is to define a parser/grammar-generator for EBNF itself, as specified in this variant:
identifier = letter, {letter | digit | '-'};
spaces = {' ' | newline};
quoted-char = non-quote-or-backslash | '\\', '\\' | '\\', '\'';
terminal = '\'', {quoted-char}, '\'', spaces;
non-terminal = identifier, spaces;
option = '[', spaces, rhs, spaces, ']', spaces;
repetition = '{', spaces, rhs, spaces, '}', spaces;
group = '(', spaces, rhs, spaces, ')', spaces;
atom = terminal | non-terminal | option | repetition | group;
sequence = atom, {spaces, ',', spaces, atom}, spaces;
choice = sequence, {spaces, '|', spaces, sequence}, spaces;
rhs = choice;
production = identifier, spaces, '=', spaces, rhs, ';', spaces;
bnf = production, {production};
This grammar is set up so that the precedence of ',' and '|' is correctly implemented: a, b | c will parse as (a, b) | c. In this syntax for BNF, terminal characters are quoted, i.e. inside '…', a ' is replaced by \' and a \ is replaced by \\; this is done by the function quote in ppRHS. If you do this, you should be able to round-trip with the pretty-printer, i.e. parse back what it wrote:
*Main> let bnf1 = runGrammer "expr" parseExpr
*Main> let bnf2 = runGrammer "expr" parseBNF
*Main> let f = Data.Maybe.fromJust . parse parseBNF . ppBNF
*Main> f bnf1 == bnf1
True
*Main> f bnf2 == bnf2
True
The last line is quite meta: We are using parseBNF as a parser on the pretty-printed grammar produced from interpreting parseBNF as a grammar.
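As a nudge for the quoting rule, here is one possible description of quoted-char (a sketch of mine, not the official solution; anyCharButTheseP is a hypothetical primitive in the spirit of anyCharButP that accepts any character not in the given set):
-- A possible description of quoted-char from the grammar above (sketch;
-- anyCharButTheseP is a hypothetical primitive excluding a set of chars):
quotedChar :: Descr f => f Char
quotedChar =
        primitive "non-quote-or-backslash" (anyCharButTheseP "\\'")
  `orElse` ('\\' <$ char '\\' <* char '\\')
  `orElse` ('\'' <$ char '\\' <* char '\'')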

Conclusion We have again seen an example of the excellent support for abstraction in Haskell: Being able to define so very different things such as a parser and a grammar description with the same code is great. Type classes helped us here. Note that it was crucial that our combined parser/grammars are only able to use the methods of Applicative, and not Monad. Applicative is less powerful, so by giving less power to the user of our Descr interface, the other side, i.e. the implementation, can be more powerful. The reason why Applicative is ok, but Monad is not, is that in Applicative, the results do not affect the shape of the computation, whereas in Monad, the whole point of the bind operator (>>=) is that the result of the computation is used to decide the next computation. And while this is perfectly fine for a parser, it just makes no sense for a grammar generator, where there simply are no values around! We have also seen that a phantom type, namely the parameter of Grammar, can be useful, as it lets the type system make sure we do not write nonsense. For example, the type of orElseG ensures that both grammars that are combined here indeed describe something of the same type.

  1. It seems to be the week of applicative-appraising blog posts: Brent has posted a nice piece about enumerations using Applicative yesterday.
  2. I like how in this alignment of <*> and <* the > points out where the arguments are that are being passed to the function on the left.

22 October 2016

Ingo Juergensmann: Automatically update TLSA records on new Letsencrypt Certs

I've been using DNSSEC for quite some time now and it is working quite well. When LetsEncrypt went public beta I jumped on the train and migrated many services to LE-based TLS. However, there was still one small problem with LE certs: when there is a new cert, all of the old TLSA resource records are not valid anymore and might cause problems for strict DNSSEC-checking clients. It took a while until my pain was big enough to finally fix it with some scripts. There are at least two scripts involved: 1) dnssec.sh
This script does all of my DNSSEC handling. You can just do a "dnssec.sh enable-dnssec domain.tld" and everything is configured so that you only need to copy the appropriate keys into the web interface of your DNS registrar.
host:~/bin# dnssec.sh
No parameter given.
Usage: dnsec.sh MODE DOMAIN
MODE can be one of the following:
enable-dnssec : perform all steps to enable DNSSEC for your domain
edit-zone     : safely edit your zone after enabling DNSSEC
create-dnskey : create new dnskey only
load-dnskey   : loads new dnskeys and signs the zone with them
show-ds       : shows DS records of zone
zoneadd-ds    : adds DS records to the zone file
show-dnskey   : extract DNSKEY record that needs to uploaded to your registrar
update-tlsa   : update TLSA records with new TLSA hash, needs old and new TLSA hashes as additional parameters
For updating zone files, just do a "dnssec.sh edit-zone domain.tld" to add new records and such, and the script will take care of e.g. increasing the serial of the zone file. I find this very convenient, so I often use this script for non-DNSSEC-enabled domains as well. You may have spotted the command line option "update-tlsa". This option needs the old and the new TLSA hashes besides the domain.tld parameter, and it is used by the second script: 2) check_tlsa.sh
This is a quite simple Bash script that parses domains.txt from the letsencrypt.sh script, looks up the old TLSA hash in the zone files (structured in TLD/domain.tld directories), compares the old hash with the new one (obtained by invoking tlsagen.sh) and, if the hashes differ, calls dnssec.sh with the proper parameters:
#!/bin/bash
set -e
LEPATH="/etc/letsencrypt.sh"
for i in `cat /etc/letsencrypt.sh/domains.txt | awk '{print $1}'` ; do
  domain=`echo $i | awk 'BEGIN {FS="."} ; {print $(NF-1)"."$NF}'`
  #echo -n "Domain: $domain"
  TLD=`echo $i | awk 'BEGIN {FS="."} ; {print $NF}'`
  #echo ", TLD: $TLD"
  OLDTLSA=`grep -i "in.*tlsa" /etc/bind/${TLD}/${domain} | grep ${i} | head -n 1 | awk '{print $NF}'`
  if [ -n "${OLDTLSA}" ] ; then
    #echo "--> ${OLDTLSA}"
    # Usage: tlsagen.sh cert.pem host[:port] usage selector mtype
    NEWTLSA=`/path/to/tlsagen.sh $LEPATH/certs/${i}/fullchain.pem ${i} 3 1 1 | awk '{print $NF}'`
    #echo "==> ${NEWTLSA}"
    if [ "${OLDTLSA}" != "${NEWTLSA}" ] ; then
      /path/to/dnssec.sh update-tlsa ${domain} ${OLDTLSA} ${NEWTLSA} > /dev/null
      echo "TLSA RR update for ${i}"
    fi
  fi
done
So, quite simple and obviously a quick hack. For sure someone else can write a cleaner and more sophisticated implementation of the same thing, but at least it works for me™. Use it at your own risk and do whatever you want with these scripts (licensed under public domain). You can invoke check_tlsa.sh right after your crontab call for letsencrypt.sh. In a more sophisticated setup it should be fairly easy to invoke these scripts from letsencrypt.sh post hooks as well.
Please find the files attached to this page (remove the .txt extension after saving, of course).
Attachment         Size
check_tlsa.sh.txt  812 bytes
dnssec.sh.txt      3.88 KB

18 October 2016

MJ Ray: Rinse and repeat

Forgive me, reader, for I have sinned. It has been over a year since my last blog post. Life got busy. Paid work. Another round of challenges managing my chronic illness. Cycle campaigning. Fun bike rides. Friends. Family. Travels. Other social media to stroke. I'm still reading some of the planets where this blog post should appear and commenting on some, so I've not felt completely cut off, but I am surprised how many people don't allow comments on their blogs any more (or make it too difficult for me with reCaptcha and the like). The main motive for this post is to test some minor upgrades, though. Hi everyone. How's it going with you? I'll probably keep posting short updates in the future. Go in peace to love and serve the web.

16 October 2016

Thomas Goirand: Released OpenStack Newton, Moving OpenStack packages to upstream Gerrit CI/CD

OpenStack Newton is released, and uploaded to Sid. OpenStack Newton was released on Thursday the 6th of October. I was able to upload nearly all of it before the week-end, though there were still a few hiccups, as I forgot to upload python-fixtures 3.0.0 to unstable, and only realized it thanks to some bug reports. As this is a build time dependency, it didn't disrupt Sid users too much, but 38 packages wouldn't build without it. Thanks to Santiago Vila for pointing at the issue here. As of writing, a lot of the Newton packages haven't migrated to Testing yet. The migration has been happening in a very messy way. I'd love to improve this process, but I'm not sure how, other than filing RC bugs against 250 packages (which would be painful to do) so they would migrate at once. Suggestions welcome. Bye bye Jenkins. For a few years, I was using Jenkins, together with a post-receive hook, to build Debian Stable backports of OpenStack packages. Then, nearly a year and a half ago, we started a project to build the packages within the OpenStack infrastructure, and use the CI/CD like OpenStack upstream was doing. This is done, and Jenkins is gone, as of OpenStack Newton. Current status: As of August, almost all of the packages' Git repositories were uploaded to OpenStack Gerrit, and the build now happens in OpenStack infrastructure. We've been able to build all packages and release the OpenStack Newton Debian packages using this system. This non-official jessie backports repository has also been validated using Tempest. Goodies from Gerrit and upstream CI/CD: It is very nice to have it built this way, so we will be able to maintain a full CI/CD in upstream infrastructure using Newton for the life of Stretch, which means we will have the tools to test security patches virtually forever. Another thing is that now, anyone can propose packaging patches without the need for an Alioth account, by sending a patch for review through Gerrit. It is our hope that this will increase the likelihood of external contributions, for example from 3rd party plugin vendors (e.g. networking driver vendors), or upstream contributors themselves. They are already used to Gerrit, and they all expected the packaging to work this way. They are all very much welcome. The upstream infra: nodepool, zuul and friends
The OpenStack infrastructure has already been described on planet.debian.org by Ian Wienand, so I won't describe it again; he did a better job than I ever would. How it works: All source packages are stored in Gerrit with the "deb-" prefix. This is in order to avoid conflicts with upstream code, and to easily locate packaging repositories. For example, you'll find the Nova packaging under https://git.openstack.org/cgit/openstack/deb-nova. Two Debian repositories are stored in the infrastructure AFS (Andrew File System, which means a copy of each repository exists on each cloud where we have compute resources): one for the actual deb-* builds, under "jessie-newton", and one for the automatic backports, maintained in the deb-auto-backports Gerrit repository. We're using a git tag based workflow. Every Gerrit repository contains all of the upstream branches, plus a debian/newton branch, which contains the same content as a tag of upstream, plus the debian folder. The orig tarball is generated using "git archive", then used by sbuild to produce binaries. To package a new upstream release, one simply needs to "git merge -X theirs FOO" (where FOO is the tag you want to merge), then edit debian/changelog so that the Debian package version matches the tag, then do "git commit -a --amend", and simply "git review". At this point, the OpenStack CI will build the package. If it builds correctly, then a core reviewer can approve the merge commit, the patch is merged, then the package is built and the binary package published on the OpenStack Debian package repository. Maintaining backports automatically: The automatic backports are maintained through a Gerrit repository called deb-auto-backports containing a packages-list file that simply lists source packages we need to backport. On each new CR (change request) in Gerrit, thanks to some madison-lite and "dpkg --compare-versions" magic, the packages-list is used to compare what's in the Debian archive with what we have in the jessie-newton-backports repository. If the version is lower in our repository, or if the package doesn't exist, then a build is triggered. There is the possibility to backport from any Debian release (using the -d flag in the packages-list file), and we can even use jessie-backports to just rebuild the package. I also had to write a hack to just download from jessie-backports without rebuilding, because rebuilding the webkit2gtk package (needed by sphinx) was taking too many resources (though we'll try to never use it, and rebuild packages when possible). The nice thing with this system is that we don't need to care much about keeping packages up-to-date: the script does that for us. Upstream Debian repositories are NOT for production: The produced package repositories are there because we have interconnected build dependencies, needed to run unit tests at build time. That is the only reason why these Debian repositories exist. They are not for production use. If you wish to deploy OpenStack, we very much recommend using packages from distributions (like Debian or Ubuntu). Indeed, the infrastructure Debian repositories are updated multiple times daily. As a result, it is very likely that you will experience download failures (hash or file size mismatches and such). Also, the functional tests aren't yet wired into the CI/CD in OpenStack infra, and therefore we cannot yet guarantee that the packages are usable. Improving the build infrastructure: There's a bunch of things which we could do to improve the build process. Let me give a list of things we want to do.
Generalizing to Debian: During DebConf 16, I had very interesting talks with the DSA (Debian System Administration) team about deploying such a CI/CD for the whole of the Debian archive, interfacing Gerrit with something like dgit and a build CI. I was told that I should provide a proof of concept first, which I very much agreed with. Such a PoC is there now, within OpenStack infra. I very much welcome any Debian contributor to try it, through a packaging patch. If you wish to do so, you should read how to contribute to OpenStack here: https://wiki.openstack.org/wiki/How_To_Contribute#If_you.27re_a_developer and then simply send your patch with "git review". This system, however, currently only fits the git tag based packaging workflow. We'd have to do a little bit more work to make it possible to use pristine-tar (basically, allow pushing to the upstream and pristine-tar branches without any CI job connected to the push). Dear DSA team, now that we have a nice PoC that is working well, on which the OpenStack packaging team is maintaining hundreds of packages, shall we try to generalize and provide such an infrastructure for every packaging team and DD?

10 October 2016

Daniel Pocock: DVD-based Clean Room for PGP and PKI

There is increasing interest in computer security these days and more and more people are using some form of PKI, whether it is signing Git tags, signing packages for a GNU/Linux distribution or just signing your emails. There are also more home networks and small offices who require their own in-house Certificate Authority (CA) to issue TLS certificates for VPN users (e.g. StrongSWAN) or IP telephony. Back in April, I started discussing the PGP Clean Room idea (debian-devel discussion and gnupg-users discussion), created a wiki page and started development of a script to build the clean room ISO using live-build on Debian. Keeping the master keys completely offline and putting subkeys onto smart cards and other devices dramatically lowers the risk of mistakes and security breaches. Using a read-only DVD to operate the clean-room makes it convenient and harder to tamper with. Trying it out in VirtualBox It is fairly easy to clone the Git repository, run the script to create the ISO and boot it in VirtualBox to see what is inside: At the moment, it contains a number of packages likely to be useful in a PKI clean room, including GnuPG, smartcard drivers, the lightweight pki utility from StrongSWAN and OpenSSL. I've been trying it out with an SPR-532, one of the GnuPG-supported smartcard readers with a pin-pad and the OpenPGP card. Ready to use today More confident users will be able to build the ISO and use it immediately by operating all the utilities from the command line. For example, you should be able to fully configure PGP smart cards by following this blog from Simon Josefsson. The ISO includes some useful scripts, for example, create-raid will quickly partition and RAID a set of SD cards to store your master key-pair offline. Getting involved To make PGP accessible to a wider user-base and more convenient for those who don't use GnuPG frequently enough to remember all the command line options, it would be interesting to create a GUI, possibly using python-newt to create a similar look-and-feel to popular text-based installer and system administration tools. If you are keen on this project and would like to discuss it further, please come and join the new pki-clean-room mailing list and feel free to ask questions or share your thoughts about it. One way to proceed may be to recruit an Outreachy or GSoC intern to develop the UI. Before they can get started, it would be necessary to more thoroughly document workflow requirements.

8 October 2016

Norbert Preining: Debian/TeX update October 2016: all of TeX Live and Biber 2.6

Finally a new update of many TeX related packages: all the texlive-* packages, including the binary packages, and biber have been updated to the latest release. This upload was delayed by my travels around the world, as well as by the necessity to package a new Perl module (libdatetime-calendar-julian-perl) required by the new Biber. Also, my new job leaves me only the weekends for packaging. Anyway, the packages are now uploaded and should appear soon on your friendly local server. There are several highlights: the binaries have been patched with several upstream fixes (tex4ht and XeTeX compatibility, as well as various Japanese TeX engine fixes), biber and biblatex have been updated, and as usual there are loads of new and updated packages. Last but not least I want to thank one particular author: his package was removed from TeX Live due to the addition of a rather unusual clause in the license. Instead of simply uploading new packages to Debian with the rather important package removed, I contacted the author and asked for clarification. And to my great pleasure he immediately answered with an update of the package with a fixed license. All of us users of these many packages should be grateful to the authors of the packages, who invest loads of their free time into supporting our community. Thanks! Enough now; here, as usual, is the list of new and updated packages with links to their respective CTAN pages. Enjoy. New packages addfont, apalike-german, autoaligne, baekmuk, beamerswitch, beamertheme-cuerna, beuron, biblatex-claves, biolett-bst, cooking-units, cstypo, emf, eulerpx, filecontentsdef, frederika2016, grant, latexgit, listofitems, overlays, phonenumbers, pst-arrow, quicktype, revquantum, richtext, semantic-markup, spalign, texproposal, tikz-page, unfonts-core, unfonts-extra, uspace. Updated packages achemso, acmart, acro, adobemapping, alegreya, allrunes, animate, arabluatex, archaeologie, asymptote, attachfile, babel-greek, bangorcsthesis, beebe, biblatex, biblatex-anonymous, biblatex-apa, biblatex-bookinother, biblatex-chem, biblatex-fiwi, biblatex-gost, biblatex-ieee, biblatex-manuscripts-philology, biblatex-morenames, biblatex-nature, biblatex-opcit-booktitle, biblatex-phys, biblatex-realauthor, biblatex-science, biblatex-true-citepages-omit, bibleref, bidi, chemformula, circuitikz, cochineal, colorspace, comment, covington, cquthesis, ctex, drawmatrix, ejpecp, erewhon, etoc, exsheets, fancyhdr, fei, fithesis, footnotehyper, fvextra, geschichtsfrkl, gnuplottex, gost, gregoriotex, hausarbeit-jura, ijsra, ipaex, jfontmaps, jsclasses, jslectureplanner, latexdiff, leadsheets, libertinust1math, luatexja, markdown, mcf2graph, minutes, multirow, mynsfc, nameauth, newpx, newtxsf, notespages, optidef, pas-cours, platex, prftree, pst-bezier, pst-circ, pst-eucl, pst-optic, pstricks, pstricks-add, refenums, reledmac, rsc, shdoc, siunitx, stackengine, tabstackengine, tagpair, tetex, texlive-es, texlive-scripts, ticket, translation-biblatex-de, tudscr, turabian-formatting, updmap-map, uplatex, xebaposter, xecjk, xepersian, xpinyin. Enjoy.

26 September 2016

Kees Cook: security things in Linux v4.3

When I gave my State of the Kernel Self-Protection Project presentation at the 2016 Linux Security Summit, I included some slides covering some quick bullet points on things I found of interest in recent Linux kernel releases. Since there wasn't a lot of time to talk about them all, I figured I'd make some short blog posts here about the stuff I was paying attention to, along with links to more information. This certainly isn't everything security-related or generally of interest, but they're the things I thought needed to be pointed out. If there's something security-related you think I should cover from v4.3, please mention it in the comments. I'm sure I haven't caught everything. :) A note on timing and context: the momentum for starting the Kernel Self Protection Project got rolling well before it was officially announced on November 5th last year. To that end, I included stuff from v4.3 (which was developed in the months leading up to November) under the umbrella of the project, since the goals of KSPP aren't unique to the project nor must the goals be met by people that are explicitly participating in it. Additionally, not everything I think worth mentioning here technically falls under the kernel self-protection ideal anyway; some things are just really interesting userspace-facing features. So, to that end, here are things I found interesting in v4.3: CONFIG_CPU_SW_DOMAIN_PAN Russell King implemented this feature for ARM which provides emulated segregation of user-space memory when running in kernel mode, by using the ARM Domain access control feature. This is similar to a combination of Privileged eXecute Never (PXN, in later ARMv7 CPUs) and Privileged Access Never (PAN, coming in future ARMv8.1 CPUs): the kernel cannot execute user-space memory, and cannot read/write user-space memory unless it was explicitly prepared to do so. This stops a huge set of common kernel exploitation methods, where either a malicious executable payload has been built in user-space memory and the kernel was redirected to run it, or where malicious data structures have been built in user-space memory and the kernel was tricked into dereferencing the memory, ultimately leading to a redirection of execution flow. This raises the bar for attackers since they can no longer trivially build code or structures in user-space where they control the memory layout, locations, etc. Instead, an attacker must find areas in kernel memory that are writable (and in the case of code, executable), where they can discover the location as well. For an attacker, there are vastly fewer places where this is possible in kernel memory as opposed to user-space memory. And as we continue to reduce the attack surface of the kernel, these opportunities will continue to shrink. While hardware support for this kind of segregation exists in s390 (natively separate memory spaces), ARM (PXN and PAN as mentioned above), and very recent x86 (SMEP since Ivy-Bridge, SMAP since Skylake), ARM is the first upstream architecture to provide this emulation for existing hardware. Everyone running ARMv7 CPUs with this kernel feature enabled suddenly gains the protection. Similar emulation protections (PAX_MEMORY_UDEREF) have been available in PaX/Grsecurity for a while, and I'm delighted to see a form of this land in upstream finally. To test this kernel protection, the ACCESS_USERSPACE and EXEC_USERSPACE triggers for lkdtm have existed since Linux v3.13, when they were introduced in anticipation of the x86 SMEP and SMAP features.
Ambient Capabilities Andy Lutomirski (with Christoph Lameter and Serge Hallyn) implemented a way for processes to pass capabilities across exec() in a sensible manner. Until Ambient Capabilities, any capabilities available to a process would only be passed to a child process if the new executable was correctly marked with filesystem capability bits. This turns out to be a real headache for anyone trying to build an even marginally complex least-privilege execution environment. The case that Chrome OS ran into was having a network service daemon responsible for calling out to helper tools that would perform various networking operations. Keeping the daemon not running as root and retaining the needed capabilities in children required conflicting or crazy filesystem capabilities organized across all the binaries in the expected tree of privileged processes. (For example you may need to set filesystem capabilities on bash!) By being able to explicitly pass capabilities at runtime (instead of based on filesystem markings), this becomes much easier. For more details, the commit message is well-written, almost twice as long as the code changes, and contains a test case. If that isn't enough, there is a self-test available in tools/testing/selftests/capabilities/ too. PowerPC and Tile support for seccomp filter Michael Ellerman added support for seccomp to PowerPC, and Chris Metcalf added support to Tile. As the seccomp maintainer, I get excited when an architecture adds support, so here we are with two. Also included were updates to the seccomp self-tests (in tools/testing/selftests/seccomp), to help make sure everything continues working correctly. That's it for v4.3. If I missed stuff you found interesting, please let me know! I'm going to try to get more per-version posts out in time to catch up to v4.8, which appears to be tentatively scheduled for release this coming weekend.

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

19 September 2016

Mike Gabriel: Rocrail changed License to some dodgy non-free non-License

The Background Story A year ago, or so, I took some time to search the internet for Free Software that can be used for controlling model railways via a computer. I was happy to find Rocrail [1] being one of only a few applications available on the market. And even more, I was very happy when I saw that it had been licensed under a Free Software license: GPL-3(+). A month ago, or so, I collected my old Märklin (Digital) stuff from my parents' place and started looking into it again after 15+ years, together with my little son. Some weeks ago, I remembered Rocrail and thought... Hey, this software was GPLed code and absolutely suitable for uploading to Debian and/or Ubuntu. I searched for the Rocrail source code and figured out that it got hidden from the web some time in 2015 and that the license obviously had been changed to some non-free license (I could not figure out which license, though). This made me very sad! I thought I had found a piece of software that might be interesting for testing with my model railway. Whenever I stumble over some nice piece of Free Software that I plan to use (or even only play with), I upload this to Debian as one of the first steps. However, I try hard to stay away from non-free software, so Rocrail had become a non-option for me back in 2015. I should have moved on from here on... Instead... Proactively, I signed up with the Rocrail forum and asked the author(s) if they see any chance of re-licensing the Rocrail code under GPL (or any other FLOSS license) again [2]. When I encounter situations like this, I normally offer my expertise and help with such licensing stuff for free. My impression by then already was that something strange must have happened in the past, so that the software developers chose GPL and later on stepped back from that decision and from then on have been hiding the source code from the web entirely. Going deeper... The Rocrail project's wiki states that anyone can request GitBlit access via the forum and obtain the source code via Git for local build purposes only. Nice! So, I asked for access to the project's Git repository, which I was granted. Thanks for that. Trivial Source Code Investigation... So far so good. I investigated the source code (well, only the license meta stuff shipped with the source code...) and found that the main COPYING files (found at various locations in the source tree, containing a full version of the GPL-3 license) had been replaced by this text:
Copyright (c) 2002 Robert Jan Versluis, Rocrail.net
All rights reserved.
Commercial usage needs permission.
The replacement happened with these Git commits:
commit cfee35f3ae5973e97a3d4b178f20eb69a916203e
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Fri Jul 17 16:09:45 2015 +0200
    update copyrights
commit df399d9d4be05799d4ae27984746c8b600adb20b
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Wed Jul 8 14:49:12 2015 +0200
    update licence
commit 0daffa4b8d3dc13df95ef47e0bdd52e1c2c58443
Author: Rob Versluis <r.j.versluis@rocrail.net>
Date:   Wed Jul 8 10:17:13 2015 +0200
    update
Getting in touch again, still being really interested and wanting to help... As I consider such a non-license as really dangerous when distributing any sort of software, be it Free or non-free Software, I posted the below text on the Rocrail forum:
Hi Rob,
I just stumbled over this post [3] (link reference adapted for this
blog post), which probably is the one you have referred to above.
It seems that Rocrail contains features that require a key or such
for permanent activation.  Basically, this is allowed and possible
even with the GPL-3+ (although Free Software activists will  not
appreciate that). As the GPL states that people can share the source
code, programmers can  easily deactivate license key checks (and
such) in the code and re-distribute that patchset as they  like.
Furthermore, the current COPYING file is really non-protective at
all. It does not really protect   you as copyright holder of the
code. Meaning, if people crash their trains with your software, you  
could actually be legally prosecuted for that. In theory. Or in the
U.S. ( ;-) ). Main reason for  having a long long license text is to
protect you as the author in case your software causes trouble to
other people. You do not have any warranty disclaimer in your COPYING
file or elsewhere. Really not a good idea.
In that referenced post above, someone also writes about the nuisance
of license discussions in  this forum. I have seen various cases
where people produced software and did not really care for 
licensing. Some ended with a letter from a lawyer, some with some BIG
company using their code  under their copyright holdership and their
own commercial licensing scheme. This is not paranoia,  this is what
happens in the Free Software world from time to time.
A model that might be much more appropriate (and more protective to
you as the author), maybe, is a  dual release scheme for the code. A
possible approach could be to split Rocrail into two editions:  
Community Edition and Professional/Commercial Edition. The Community
Edition must be licensed in a  way that it allows re-using the code
in a closed-source, non-free version of Rocrail (e.g.   MIT/Expat
License or Apache2.0 License). Thus, the code base belonging to the
community edition  would be licensed, say..., as Apache-2.0 and for
the extra features in the Commercial Edition, you  may use any
non-free license you want (but please not that COPYING file you have
now, it really  does not protect your copyright holdership).
The reason for releasing (a reduced set of features of a) software as
Free Software is to extend  the user base. The honey jar effect, as
practised by many huge FLOSS projects (e.g. Owncloud, GitLab, etc.).
If people could install Rocrail from the Debian / Ubuntu archives
directly, I am  sure that the user base of Rocrail will increase.
There may also be developers popping up showing  an interest in
Rocrail (e.g. like me). However, I know many FLOSS developers (e.g.
like me) that  won't waste their free time on working for a non-free
piece of software (without being paid).
If you follow (or want to follow) a business model with Rocrail, then
keep some interesting  features in the Commercial Edition and don't
ship that source code. People with deep interest may  opt for that.
Furthermore, another option could be dual licensing the code. As the
copyright holder of Rocrail  you are free to juggle with licenses and
apply any license to a release you want. For example, this  can be
interesting for a free-again Rocrail being shipped via Apple's iStore.
Last but not least, as you ship the complete source code with all
previous changes as a Git project  to those who request GitBlit
access, it is possible to obtain all earlier versions of Rocrail. In 
the mail I received with my GitBlit credentials, there was some text
that  prohibits publishing the  code. Fine. But: (in theory) it is
not forbidden to share the code with a friend, for local usage.  This
friend finds the COPYING file, frowns and rewinds back to 2015 where
the license was still  GPL-3+. GPL-3+ code can be shared with anyone
and also published, so this friend could upload the  2015-version of
Rocrail to Github or such and start to work on a free fork. You also
may not want  this.
Thanks for working on this piece of software! It is highly
interesting, and I am still sad, that it does not come with a free
license anymore. I won't continue this discussion and move on, unless
you  are interested in any of the above information and ask for more
expertise. Ping me here or directly  via mail, if needed. If the
expertise leads to parts of Rocrail becoming Free Software again, the 
expertise is offered free of charge ;-).
light+love
Mike
Wow, the first time I got moderated somewhere... What an experience! This experience was really new. My post got immediately removed from the forum by the main author of Rocrail (with the forum moderator's hat on). The new experience was: I got really angry when I discovered having been moderated. Wow! Really a powerful emotion. No harassment in my words, no secrets disclosed, and still... my free speech got suppressed by someone. That feels intense! And it only occurred in the virtual realm, not face to face. Wow!!! I did not expect such intensity... The reason for wiping my post without any other communication was given as below, and it is quite a statement to frown upon (this post has also been "moderately" removed from the forum thread [2] a bit later today):
Mike,
I think its not a good idea to point out a way to get the sources back to the GPL periode.
Therefore I deleted your posting.
(The phpBB forum software also allows moderators to edit posts, so the critical passage could have been removed instead, but immediately wiping the full message, well...). Also, just wiping my post to suppress my words, without replying otherwise or offering some apology, really is a no-go. And the reason given for wiping the rest of the text does not hold, either: any Git user can easily figure out how to get a FLOSS version of Rocrail and continue to work on that from then on. Really. Now the political part of this blog post... Fortunately, I still live in an area of the world where the right of free speech is still present. I found out: I really don't like being moderated!!! Especially if what I share / propose is really noooo secret at all. Anyone who knows how to use Git can come to the same conclusion as I have come to this morning. [Off-topic, not at all related to Rocrail: The last votes here in Germany indicate that some really stupid folks here yearn for another, this time highly idiotic, wind of change, where free speech may end up as a precious good.] To other (Debian) Package Maintainers and Railroad Enthusiasts... With this blog post I probably close the last option for Rocrail going FLOSS again. Personally, I think that gate was already closed before I got in touch. Now really moving on... Probably the best approach for my new train conductor hobby (as already recommended by the woman at my side some weeks back) is to leave the laptop lid closed when switching on the train control units. I should have listened to her much earlier. I have finally removed the Rocrail source code from my computer again, without building and testing the application. I have not shared the source code with anyone, nor have I shared the Git URL with anyone. I really think that FLOSS enthusiasts should stay away from this software for now. For my part, I have lost my interest in this completely... References light+love,
Mike

19 August 2016

Norbert Preining: Debian/TeX Live 2016.20160819-1

A new and unplanned release in quick succession. I had uploaded testing packages to experimental which incorporate tex4ht into the TeX Live packages, but somehow the tex4ht transitional update slipped into sid and made many packages uninstallable. Well, so after a bit more testing let's ship the beast to sid, meaning that tex4ht will finally be updated from the last 2009 version to the current status in TeX Live. From the list of new packages I want to pick out the group of phf* packages, which from a quick reading of the package documentation seem very interesting. But most important is the incorporation of tex4ht into the TeX Live packages, so please report bugs and shortcomings to the BTS. Thanks. New packages aurl, bxjalipsum, cormorantgaramond, notespages, phffullpagefigure, phfnote, phfparen, phfqit, phfquotetext, phfsvnwatermark, phfthm, table-fct, tocdata. Updated packages acmart, acro, biblatex-abnt, biblatex-publist, bxdpx-beamer, bxjscls, bxnewfont, bxpdfver, dccpaper, etex-pkg, europasscv, exsheets, glossaries-extra, graphics-def, graphics-pln, guitarchordschemes, ijsra, kpathsea, latexpand, latex-veryshortguide, ledmac, libertinust1math, markdown, mcf2graph, menukeys, mfirstuc, mhchem, mweights, newpx, newtx, optidef, paralist, parnotes, pdflatexpicscale, pgfplots, philosophersimprint, pstricks-add, showexpl, tasks, tetex, tex4ht, texlive-docindex, udesoftec, xcolor-solarized.

Next.

Previous.