Search Results: "kyle"

6 June 2023

Russell Coker: PinePhonePro First Impression

Hardware

I received my PinePhone Pro [1] on Thursday; in many ways it seems better than the Purism Librem 5 [2] that I have previously written about. The PinePhone Pro is thinner, lighter, and yet has a much longer battery life. A friend described the Librem 5 as "the CyberTruck phone", and not in a good way.

In a test I had my PinePhone Pro and my Librem 5 fully charged, left them for 4.5 hours without doing anything much with them, and at the end the PinePhone Pro was at 85% and the Librem 5 was at 57%. So the Librem 5 will run out of battery after about 10 hours of not being used, while a PinePhone Pro can be expected to last about 30 hours. The PinePhone Pro isn't as good as some of the recent Android phones in this regard, but it shows the potential to be quite usable. For this test both phones were connected to a 2.4GHz Wifi network (which uses less power than 5GHz) and doing nothing much with an out-of-the-box configuration. A phone that is checking email, social networking, and a couple of IM services will use the battery faster. But even if the PinePhone Pro has its battery used twice as fast in a more realistic test, it will still be usable.

Here are the Passmark results from the PinePhone Pro [3], which got a CPU score of 888 compared to 507 for the Librem 5 and 678 for one of the slower laptops I've used. The results are excluded from the Passmark averages because they identified the CPU as only having 4 cores (expecting just 4*A72) while the PinePhone Pro has 6 cores (2*A72+4*A53). This phone definitely has the CPU power for convergence [4]!

Default OS

By default the PinePhone Pro has a KDE based GUI and the Librem 5 has a GNOME based GUI. I don't like any iteration of GNOME (I have tried them all and disliked them all) and I like KDE, so I will tend to like anything that is KDE based more than anything GNOME based. But in addition to that, the PinePhone Pro has an interface that looks a lot like Android, with the three on-screen buttons at the bottom of the display and the slide-up tray for installed apps. Android is the most popular phone OS, and looking like the most common option is often a good idea for a new and different product, so this seems like an objective criterion for deciding that the default GUI on the PinePhone Pro is a better choice (at least as a default).

When I first booted it and connected it to Wifi, the updates app said that there were 633 updates to apply, but it never applied them (I tried clicking on the update button but to no avail) and didn't give any error message. For me not being Debian is enough reason to dislike Manjaro, but if that wasn't enough then the failure to update would be a good start. When I ran pacman in a terminal window it said that each package was corrupt and asked if I wanted to delete it. According to tar tvJf the packages weren't corrupt. After downloading them again it said that they were corrupt again, so it seemed that pacman wasn't working correctly.

When the screen is locked and a call comes in, it gives a window with Accept and Reject buttons, but neither of them works. The default country code for Spacebar (the SMS app) is +1 (US) even though I specified Australia at the initial login. It also doesn't get the APN, unlike Android phones which seem to have some sort of list of APNs.

Upgrading to Debian

The Debian Wiki page about installing on the PinePhone Pro has the basic information [5]. The first thing it covers is installing the Tow-Boot boot loader, which is already installed by default in recent PinePhones (such as mine).
You can recognise that Tow-Boot is installed by pressing the volume-up button in the early stages of boot (described as "before and during the second vibration"); the LED will turn blue and the phone will act as a USB mass storage device, which makes it easy to do other install/recovery tasks. The other Tow-Boot option is to press volume-down to boot from a MicroSD card (the default is to boot the OS on the eMMC).

The images linked from the Debian wiki page are designed to be installed with bmaptool from the bmap-tools Debian package. After installing that package and downloading the pre-built Mobian image I installed it with the command "bmaptool copy mobian-pinephonepro-phosh-bookworm-12.0-rc3.img.gz /dev/sdb", where /dev/sdb is the device node for the USB-attached PinePhone storage. That took 6 minutes, and then I rebooted my PinePhone into Mobian! Unfortunately the default GUI for Mobian is GNOME/Phosh. Changing it to KDE is my next task.
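For reference, here is a minimal sketch of that install flow, assuming (as on my machine) that the phone shows up as /dev/sdb; check with lsblk first, as the device name will differ on other machines:

    # put the phone into Tow-Boot USB mass storage mode first:
    # hold volume-up before/during the second vibration at power-on (blue LED)
    sudo apt install bmap-tools     # provides the bmaptool command
    lsblk                           # confirm which device is the phone (assumed /dev/sdb here)
    sudo bmaptool copy mobian-pinephonepro-phosh-bookworm-12.0-rc3.img.gz /dev/sdb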

29 May 2023

Russell Coker: Considering Convergence

What is Convergence

In 2013 Kyle Rankin (a Linux Journal columnist at the time, and later CSO of Purism) wrote a Linux Journal article about Linux convergence [1] (which means using a phone and a dock to replace a desktop), featuring the Nokia N900 smart phone and a chroot environment on the Motorola Droid 4 Android phone. Both of them have very limited hardware even by the standards of the day, and neither was a system I'd consider using all the time. None of the Android phones I used at that time were at all comparable to any sort of desktop system I'd want to use.

Hardware for Convergence: Comparing a Phone to a Laptop

The first hardware issue for convergence is docks and other accessories to attach a small computer to hardware designed for larger computers. Laptop docks have been around for decades, and for decades I haven't been using them because they have all been expensive and specific to a particular model of laptop. Having an expensive dock at home and an expensive dock at the office, and then replacing them both when the laptop is replaced, may work well for some people but wasn't something I wanted to do. The USB-C interface supports data, power, and DisplayPort video over the same cable; now USB-C docks start at about $20 on eBay and dock functionality is built in to many new monitors. I can take a USB-C device to the office of any large company and know there's a good chance that there will be a USB-C dock ready for me to use. The fact that USB-C is a standard feature for phones gives obvious potential for convergence.

The next issue is performance. The Passmark benchmark seems like a reasonable way to compare CPUs [2]. It may not be the best benchmark, but it has an excellent set of published results for Intel and AMD CPUs. I ran that benchmark on my Librem 5 [3] and got a result of 507 for the CPU score. At the end of 2017 I got a Thinkpad X301 [4] which rates 678 on Passmark. So the Librem 5 has 3/4 the CPU power of a laptop that was OK for my use in 2018. Given that the X301 was about the minimum spec for a PC that I can use (for things other than serious compiles, running VMs, etc), the Librem 5 has 3/4 the CPU power, only 3G of RAM compared to 6G, and 32G of storage compared to 64G. Here is the Passmark page for my Librem 5 [5]. As an aside, my Librem 5 is apparently 25% faster than the other results for the same CPU; did the Purism people do something to make their device faster than most? For me the Librem 5 would be at the very low end of what I would consider a usable desktop system.

A friend's N900 (like the one Kyle used) won't complete the Passmark test, apparently due to the Extended Instructions (NEON) test failing. But of the rest of the tests most gave a result that was well below 10% of the result from the Librem 5, and only the Compression and CPU Single Threaded tests managed to exceed 1/4 the speed of the Librem 5.

One thing to note when considering the specs of phones vs desktop systems is that the MicroSD cards designed for use in dashcams and other continuous recording devices have TBW ratings that compare well to SSDs designed for use in PCs, so swap to a MicroSD card should work reasonably well and be significantly faster than the hard disks I was using for swap in 2013!

In 2013 I was using a Thinkpad T420 as my main system [6]; it had 8G of RAM (the same as my current laptop), although I noted at the time that 4G was slow but usable. Basically it seems that the Librem 5 was about the sort of hardware I could have used for convergence in 2013.
But by today's standards, and with the need to drive 4K monitors etc, it's not that great. The N900 hardware specs seem very similar to the Thinkpads I was using from 1998 to about 2003. However a device for convergence will usually do more things than a laptop (IE phone and camera functionality), and software became significantly more bloated in the 1998 to 2013 period. A Linux desktop system performed reasonably with 32MB of RAM in 1998, but by 2013 even 2G was limiting.

Software Issues for Convergence

Jeremiah Foster (Director PureOS at Purism) wrote an interesting overview of some of the software issues of convergence [7]. One of the most obvious is that the best app design for a small screen is often very different from that for a large screen. Phone apps usually have a single window that shows a view of only one part of the data that is being worked on (EG an email program that shows a list of messages or the contents of a single message, but not both). Desktop apps of any complexity will either have support for multiple windows for different data (EG two messages displayed in different windows) or a single window with multiple different types of data (EG message list and a single message). What we ideally want is for all the important apps to support changing modes when the active display is changed to one of a different size/resolution. The Purism people are doing some really good work in this regard, but it is a large project that needs to involve a huge range of apps.

The next thing that needs to be addressed is the OS interface for managing apps and metadata. On a phone you swipe from one part of the screen to get a list of apps, while on a desktop you will probably have a small section of a large monitor reserved for showing a window list. On a desktop you will typically have an app to manage a list of items copied to the clipboard, while on Android and iOS there is AFAIK no standard way to do that (there is a selection of apps in the Google Play Store to do this sort of thing). Purism has a blog post by Sebastian Krzyszkowiak about some of the development of the OS to make it work better for convergence and the status of getting it in Debian [8].

The limitations in phone hardware force changes to the software. Software needs to use less memory because phone RAM can't be upgraded. The OS needs to be configured for low RAM use, which includes technologies like the zram kernel memory compression feature.

Security

When mobile phones first came out they were used for less secret data. Loss of a phone was annoying and expensive, but not a security problem. Now phone theft for the purpose of gaining access to resources stored on the phone is becoming a known crime; here is a news report about a thief stealing credit cards and phones to receive the SMS notifications from banks [9]. We should expect that trend to continue: stealing mobile devices for ssh keys, management tools for cloud services, etc is something we should expect to happen.

A problem with mobile phones in current use is that they have one login used for all access, from trivial things done in low security environments (EG paying for public transport) to sensitive things done in more secure environments (EG online banking and healthcare). Some applications take extra precautions for this; EG the Android app I use for online banking requires authentication before performing any operations. The Samsung version of Android has a system called Knox for running a separate secured workspace [10].
I don't think that the Knox approach would work well for a full Linux desktop environment, but something that provides some similar features would be a really good idea. Also running apps in containers as much as possible would be a good security feature; this is done by default in Android, and desktop OSs could benefit from it. The Linux desktop security model of logging in to a single account and getting access to everything has been outdated for a long time, probably ever since single-user Linux systems became popular. We need to change this for many reasons, and convergence just makes it more urgent.

Conclusion

I have become convinced that convergence is the way of the future. It has the potential to make transporting computers easier, purchasing cheaper (buy just a phone and not desktop and laptop systems), and access to data more convenient. The Librem 5 doesn't seem up to the task for my use due to being slow and having short battery life; the PinePhone Pro has more powerful hardware and allegedly has better battery life [11], so it might work for my needs. The PinePhone Pro probably won't meet the desktop computing needs of most people, but hardware keeps getting faster and cheaper, so eventually most people could have their computing needs satisfied with a phone.

The current state of software for convergence and for Linux desktop security needs some improvement. I have some experience with Linux security, so this is something I can help work on. To work on improving this I asked Linux Australia for a grant for me and a friend to get PinePhone Pro devices and a selection of accessories to go with them. Having both a Librem 5 and a PinePhone Pro means that I can test software in different configurations, which will make developing software easier. Also having a friend who's working on similar things will help a lot, especially as he has some low level hardware skills that I lack. Linux Australia awarded the grant and now the PinePhones are in transit. Hopefully I will have a PinePhone in a couple of weeks to start work on this.
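As an illustration of the zram feature mentioned in the software section above, here is a minimal sketch of manually setting up compressed swap in RAM. The sysfs paths are the standard kernel interface, but note that most distributions ship a service (such as zram-tools on Debian) that does this for you:

    sudo modprobe zram                                     # load the zram module
    echo lz4 | sudo tee /sys/block/zram0/comp_algorithm    # choose a fast compressor
    echo 1G | sudo tee /sys/block/zram0/disksize           # size of the compressed device
    sudo mkswap /dev/zram0
    sudo swapon -p 100 /dev/zram0                          # prefer zram over any disk/MicroSD swap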

12 November 2022

Wouter Verhelst: Day 3 of the Debian Videoteam Sprint in Cape Town

The Debian Videoteam has been sprinting in Cape Town, South Africa -- mostly because with Stefano here for a few months, four of us (Jonathan, Kyle, Stefano, and myself) are actually in the country on a regular basis. In addition to that, two more members of the team (Nicolas and Louis-Philippe) are joining the sprint remotely (from Paris and Montreal).

(Photo: Kyle and Stefano working on things, with me behind the camera and Jonathan busy elsewhere.)

We've made loads of progress! Some highlights:

The sprint isn't over yet (we're continuing until Sunday), but loads of things have already happened. Stay tuned!

5 February 2021

Thorsten Alteholz: My Debian Activities in January 2021

FTP master

This month I could increase my activities in NEW again and accepted 132 packages. Unfortunately I also had to reject 12 packages. The overall number of packages that got accepted was 374.

Debian LTS

This was the seventy-ninth month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my overall workload was 26h. During that time I did LTS and normal security uploads of:

With the buster upload of highlight.js I could finish fixing CVE-2020-26237 in all releases. I also tried to fix one or the other CVE for golang packages, to be exact: golang-github-russellhaering-goxmldsig, golang-github-tidwall-match, golang-github-tidwall-gjson and golang-github-antchfx-xmlquery. The version in unstable is easily done by uploading a new upstream version, after checking with ratt that all reverse-build-dependencies are still working (see the sketch below). The next step will be to really upload all reverse-build-dependencies that need a new build. As the number of reverse-build-dependencies might be rather large, this needs to be done automatically somehow. The problem I am struggling with at the moment is packages that need to be rebuilt while the version in git has already increased.

Another problem with golang packages is packages that are referenced by a Built-Using: line but whose sources are not yet available on security-master. If this happens, the uploaded package will be automatically rejected. Unfortunately the rejection email only contains the first missing package. So in order to reduce the hassle with such uploads, please send me the Built-Using: line before the upload and I will import everything. In December/January this affected the uploads of influxdb and snapd.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the thirty-first ELTS month. During my allocated time I uploaded:

Last but not least I did some days of frontdesk duties.

Other stuff

This month I uploaded new upstream versions of:

I improved packaging of:

The golang packages here are basically ones with a missing source upload. For whatever reason maintainers tend to forget about this.
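For anyone unfamiliar with ratt, the check described above looks roughly like this (the package name is just an example):

    # build the updated package, then rebuild everything that build-depends on it
    sbuild golang-github-tidwall-gjson_*.dsc
    ratt golang-github-tidwall-gjson_*_amd64.changes    # rebuilds all reverse build dependencies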

16 July 2020

Louis-Philippe Véronneau: DebConf Videoteam Sprint Report -- DebConf20@Home

DebConf20 starts in about 5 weeks, and as always, the DebConf Videoteam is working hard to make sure it'll be a success. As such, we held a sprint from July 9th to 13th to work on our new infrastructure. A remote sprint certainly ain't as fun as an in-person one, but we nonetheless managed to enjoy ourselves. Many thanks to those who participated, namely:

We also wish to extend our thanks to Thomas Goirand and Infomaniak for providing us with virtual machines to experiment on and host the video infrastructure for DebConf20.

Advice for presenters

For DebConf20, we strongly encourage presenters to record their talks in advance and send us the resulting video. We understand this is more work, but we think it'll make for a more agreeable conference for everyone. Video conferencing is still pretty wonky and there is nothing worse than a talk ruined by a flaky internet connection or hardware failures. As such, if you are giving a talk at DebConf this year, we are asking you to read and follow our guide on how to record your presentation. Fear not: we are not getting rid of the Q&A period at the end of talks. Attendees will ask their questions either on IRC or on a collaborative pad, and the Talkmeister will relay them to the speaker once the pre-recorded video has finished playing.

New infrastructure, who dis?

Organising a virtual DebConf implies migrating from our battle-tested on-premise workflow to a completely new remote one. One of the major changes this means for us is the addition of Jitsi Meet to our infrastructure. We normally have 3 different video sources in a room: two cameras and a slides grabber. With the new online workflow, directors will be able to play pre-recorded videos as a source, will get a feed from a Jitsi room, and will see the audience questions as a third source. This might seem simple at first, but is in fact a very major change to our workflow and required a lot of work to implement.
                == On-premise ==                                 == Online ==

              Camera 1                                          Jitsi
                 |                                                |
                 v                +--> Frontend                   v                +--> Frontend
                 |                |                               |                |
    Slides -> Voctomix -> Backend -+--> Frontend    Questions -> Voctomix -> Backend -+--> Frontend
                 |                |                               |                |
                 ^                +--> Frontend                   ^                +--> Frontend
                 |                                                |
              Camera 2                                   Pre-recorded video
In our tests, playing back pre-recorded videos to voctomix worked well, but was sometimes unreliable due to inconsistent encoding settings. Presenters will thus upload their pre-recorded talks to SReview so we can make sure there aren't any obvious errors. Videos will then be re-encoded to ensure a consistent encoding and to normalise audio levels. This process will also let us stitch the Q&As at the end of the pre-recorded videos more easily prior to publication.

Reducing the stream latency

One of the pitfalls of the streaming infrastructure we have been using since 2016 is high video latency. In a worst-case scenario, remote attendees could get up to 45 seconds of latency, making participation in events like BoFs arduous. In preparation for DebConf20, we added a new way to stream our talks: RTMP. Attendees will thus have the option of using either an HLS stream with higher latency or an RTMP stream with lower latency. Here is a comparative table that can help you decide between the two protocols:
    HLS
      Pros:
        • Can be watched from a browser
        • Auto-selects a stream encoding
        • Single URL to remember
      Cons:
        • Higher latency (up to 45s)

    RTMP
      Pros:
        • Lower latency (~5s)
      Cons:
        • Requires a dedicated video player (VLC, mpv)
        • Specific URLs for each encoding setting
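As a hypothetical example (the URLs here are made up for illustration), watching the two kinds of stream from the command line could look like this:

    # HLS: also works in a browser; the player auto-selects an encoding
    mpv https://live.debconf.org/stream.m3u8
    # RTMP: lower latency, but needs a dedicated player and a per-encoding URL
    mpv rtmp://live.debconf.org/live/main_hd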
Live mixing from home with VoctoWeb

Since DebConf16, we have been using voctomix, a live video mixer developed by the CCC VOC. voctomix is conveniently divided in two: voctocore is the backend server, while voctogui is a GTK+ UI frontend directors can use to live-mix. Although voctogui can connect to a remote server, it was primarily designed to run either on the same machine as voctocore or on the same LAN. Trying to use voctogui from a machine at home to connect to a voctocore running in a datacenter proved unreliable, especially for high-latency and low-bandwidth connections. Inspired by the setup FOSDEM uses, we instead decided to go with a web frontend for voctocore. We initially used FOSDEM's code as a proof of concept, but quickly reimplemented it in Python, a language we are more familiar with as a team. Compared to the FOSDEM PHP implementation, voctoweb implements A/B source selection (akin to voctogui) as well as audio control, two very useful features. In the following screen captures, you can see the old PHP UI on the left and the new shiny Python one on the right.

(Screenshots: the old PHP voctoweb and the new Python 3 voctoweb.)

Voctoweb is still under development and is likely to change quite a bit until DebConf20. Still, the current version seems to work well enough to be used in production if you ever need to.

Python GeoIP redirector

We run multiple geographically-distributed streaming frontend servers to minimize the load on our streaming backend and to reduce overall latency. Although users can connect to the frontends directly, we typically point them to live.debconf.org and redirect connections to the nearest server. Sadly, 6 months ago MaxMind decided to change the licence on their GeoLite2 database and left us scrambling. To fix this annoying issue, Stefano Rivera wrote a Python program that uses the new database (a command-line sketch of the lookup appears at the end of this post) and reworked our ansible frontend server role. Since the new database cannot be redistributed freely, you'll have to get a (free) license key from MaxMind if you want to use this role.

Ansible & CI improvements

Infrastructure as code is a living process and needs constant care to fix bugs, follow changes in DSL and implement new features. All that to say, a large part of the sprint was spent making our ansible roles and continuous integration setup more reliable, less buggy and more featureful. All in all, we merged 26 separate ansible-related merge requests during the sprint! As always, if you are good with ansible and wish to help, we accept merge requests on our ansible repository :)
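As a rough command-line equivalent of the lookup the GeoIP redirector performs (the database path and IP address are examples), the mmdblookup tool from libmaxminddb can query a GeoLite2 database directly:

    # which country is this attendee connecting from?
    mmdblookup --file /var/lib/GeoIP/GeoLite2-City.mmdb \
               --ip 203.0.113.42 country iso_code
    # the redirector uses the answer to pick the nearest frontend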

2 March 2020

Jonathan Carter: Free Software activities for 2020-02

Belgians

This month started off in Belgium for FOSDEM on 1-2 February. I attended FOSDEM in Brussels and wrote a separate blog entry for that. The month ended with Belgians at Tammy and Wouter's wedding. On Thursday we had Wouter's bachelors and then over the weekend I stayed over at their wedding venue. I thought that other Debianites might be interested, so I'm sharing some photos here with permission from Wouter. It was the only wedding I've been at where nearly everyone had questions about Debian!

I first met Wouter on the bus during the day trip at DebConf12 in Nicaragua. Back then I had been eagerly following the Debianites on Planet Debian for a while, so it was like meeting someone famous. Little did I know that 8 years later, I'd be at his wedding back in my part of the world. If you went to DebConf16 in South Africa, you might remember Tammy, who did a lot of work for DC16 including most of the artwork, a bunch of website work, design of the badges, bags, etc, and also a lot of organisation for the day trips. Tammy and Wouter met while Tammy was reviewing the artwork in the video loops for the DebConf videos, and things developed from there.

Wouter's Bachelors

Wouter was blindfolded and kidnapped and taken to the city center, where we prepared to go on a bike tour of Cape Town, stopping for beer at a few places along the way. Wouter was given a list of tasks that he had to complete, or the wedding wouldn't be allowed to continue:

Wouter's tasks
Wouter's props, needed to complete his tasks
Bike tour leg at Cape Town Stadium.
Seeking out 29 year olds.
Wouter finishing his lemon and actually seemingly enjoying it.
Reciting South African national anthem notes and lyrics.
The national anthem, as performed by Wouter (I was actually impressed by how good his pitch was).
The Wedding

Friday afternoon we arrived at the lodge for the weekend. I had some work to finish, but at least this was nicer than where I was going to work if it wasn't for the wedding.
Accommodation at the lodge
When the wedding co-ordinators started setting up, I noticed that there were all these swirls that almost looked like Debian logos. I asked Wouter if that was on purpose or just a happy accident. He said "Hmm! I haven't even noticed that yet!". I didn't get a chance to ask Tammy yet, so it could still be her touch.
Debian swirls everywhere
I took a canoe ride on the river and look what I found, a paddatrapper!
Kyle and I weren't the only ones out on the river that day. When the wedding ceremony started, Tammy made a dramatic entrance, coming in on a boat and standing at the front with the breeze blowing her dress like a valkyrie.
A bit of digital zoomage of previous image.
Time to say the vows.
Just married. Thanks to Sue Fuller-Good for the photo.
Except for one character being out of place, this was a perfect fairy tale wedding, but I pointed Wouter to https://jonathancarter.org/how-to-spell-jonathan/ for future reference, so it's all good.
Congratulations again to both Tammy and Wouter. It was a great experience meeting both their families and friends and all the love that was swirling around all weekend.

Debian Package Uploads

2020-02-07: Upload package calamares (3.2.18-1) to Debian unstable.
2020-02-07: Upload package python-flask-restful (0.3.8-1) to Debian unstable.
2020-02-10: Upload package kpmcore (4.1.0-1) to Debian unstable.
2020-02-16: Upload package fracplanet (0.5.1-5.1) to Debian unstable (Closes: #946028).
2020-02-20: Upload package kpmcore (4.1.0-2) to Debian unstable.
2020-02-20: Upload package bluefish (2.2.11) to Debian unstable.
2020-02-20: Upload package gdisk (1.0.5-1) to Debian unstable.
2020-02-20: Accept MR#6 for gamemode.
2020-02-23: Upload package tanglet (1.5.5-1) to Debian unstable.
2020-02-23: Upload package gamemode (1.5-1) to Debian unstable.
2020-02-24: Upload package calamares (3.2.19-1) to Debian unstable.
2020-02-24: Upload package partitionmanager (4.1.0-1) to Debian unstable.
2020-02-24: Accept MR#7 for gamemode.
2020-02-24: Merge MR#1 for calcoo.
2020-02-24: Upload package calcoo (1.3.18-8) to Debian unstable.
2020-02-24: Merge MR#1 for flask-api.
2020-02-25: Upload package calamares (3.2.19.1-1) to Debian unstable.
2020-02-25: Upload package gnome-shell-extension-impatience (0.4.5-4) to Debian unstable.
2020-02-25: Upload package gnome-shell-extension-harddisk-led (19-2) to Debian unstable.
2020-02-25: Upload package gnome-shell-extension-no-annoyance (0+20170928-f21d09a-2) to Debian unstable.
2020-02-25: Upload package gnome-shell-extension-system-monitor (38-2) to Debian unstable.
2020-02-25: Upload package tuxpaint (0.9.24~git20190922-f7d30d-1~exp3) to Debian experimental.

Debian Mentoring

2020-02-10: Sponsor package python-marshmallow-polyfield (5.8-1) for Debian unstable (Python team request).
2020-02-10: Sponsor package geoalchemy2 (0.6.3-2) for Debian unstable (Python team request).
2020-02-13: Sponsor package python-tempura (2.2.1-1) for Debian unstable (Python team request).
2020-02-13: Sponsor package python-babel (2.8.0+dfsg.1-1) for Debian unstable (Python team request).
2020-02-13: Sponsor package python-pynvim (0.4.1-1) for Debian unstable (Python team request).
2020-02-13: Review package ledmon (0.94-1) (needs some more work) (mentors.debian.net request).
2020-02-14: Sponsor package citeproc-py (0.3.0-6) for Debian unstable (Python team request).
2020-02-24: Review package python-suntime (1.2.5-1) (needs some more work) (Python team request).
2020-02-24: Sponsor package python-babel (2.8.0+dfsg.1-2) for Debian unstable (Python team request).
2020-02-24: Sponsor package 2048 (0.0.0-1~exp1) for Debian experimental (mentors.debian.net request).
2020-02-24: Review package notcurses (1.1.8-1) (needs some more work) (mentors.debian.net request).
2020-02-25: Sponsor package cloudpickle (1.3.0-1) for Debian unstable (Python team request).

Debian Misc

2020-02-12: Apply Planet Debian request and close MR#21.
2020-02-23: Accept MR#6 for ToeTally (DebConf Video team upstream).
2020-02-23: Accept MR#7 for ToeTally (DebConf Video team upstream).

24 July 2017

Jonathan Carter: Plans for DebCamp17

In a few days, I'll be attending DebCamp17 in Montréal, Canada. Here are a few things I plan to pay attention to:

8 March 2017

Antoine Beaupré: An update to GitHub's terms of service

On February 28th, GitHub published a brand new version of its Terms of Service (ToS). While the first draft announced earlier in February didn't generate much reaction, the new ToS raised concerns that they may break at least the spirit, if not the letter, of certain free-software licenses. Digging in further reveals that the situation is probably not as dire as some had feared. The first person to raise the alarm was probably Thorsten Glaser, a Debian developer, who stated that the "new GitHub Terms of Service require removing many Open Source works from it". His concerns are mainly about section D of the document, in particular section D.4 which states:
You grant us and our legal successors the right to store and display your Content and make incidental copies as necessary to render the Website and provide the Service.
Section D.5 then goes on to say:
[...] You grant each User of GitHub a nonexclusive, worldwide license to access your Content through the GitHub Service, and to use, display and perform your Content, and to reproduce your Content solely on GitHub as permitted through GitHub's functionality

ToS versus GPL

The concern here is that the ToS bypass the normal provisions of licenses like the GPL. Indeed, copyleft licenses are based on copyright law, which forbids users from doing anything with the content unless they comply with the license, which forces, among other things, "share alike" properties. By granting GitHub and its users rights to reproduce content without explicitly respecting the original license, the ToS may allow users to bypass the copyleft nature of the license. Indeed, as Joey Hess, author of git-annex, explained:
The new TOS is potentially very bad for copylefted Free Software. It potentially neuters it entirely, so GPL licensed software hosted on Github has an implicit BSD-like license
Hess has since removed all his content (mostly mirrors) from GitHub. Others disagree. In a well-reasoned blog post, Debian developer Jonathan McDowell explained the rationale behind the changes:
My reading of the GitHub changes is that they are driven by a desire to ensure that GitHub are legally covered for the things they need to do with your code in order to run their service.
This seems like a fair point to make: GitHub needs to protect its own rights to operate the service. McDowell then goes on to do a detailed rebuttal of the arguments made by Glaser, arguing specifically that section D.5 "does not grant [...] additional rights to reproduce outside of GitHub". However, specific problems arise when we consider that GitHub is a private corporation that users have no control over. The "Services" defined in the ToS explicitly "refers to the applications, software, products, and services provided by GitHub". The term "Services" is therefore not limited to the current set of services. This loophole may actually give GitHub the right to bypass certain provisions of licenses used on GitHub. As Hess detailed in a later blog post:
If Github tomorrow starts providing say, an App Store service, that necessarily involves distribution of software to others, and they put my software in it, would that be allowed by this or not? If that hypothetical Github App Store doesn't sell apps, but licenses access to them for money, would that be allowed under this license that they want to [apply to] my software?
However, when asked on IRC, Bradley M. Kuhn of the Software Freedom Conservancy explained that "ultimately, failure to comply with a copyleft license is a copyright infringement" and that the ToS do outline a process to deal with such infringement. Some lawyers have also publicly expressed their disagreement with Glaser's assessment, with Richard Fontana from Red Hat saying that the analysis is "basically wrong". It all comes down to the intent of the ToS, as Kuhn (who is not a lawyer) explained:
any license can be abused or misused for an intent other than its original intent. It's why it matters to get every little detail right, and I hope Github will do that.
He went even further and said that "we should assume the ambiguity in their ToS as it stands is favorable to Free Software". The ToS have been in effect since February 28th; users "can accept them by clicking the broadcast announcement on your dashboard or by continuing to use GitHub". The immediacy of the change is one of the reasons why certain people are rushing to remove content from GitHub: there are concerns that continuing to use the service may be interpreted as consent to bypass those licenses. Hess even hosted a separate copy of the ToS [PDF] for people to be able to read the document without implicitly consenting. It is, however, unclear how a user should remove their content from the GitHub servers without actually agreeing to the new ToS.

CLAs

When I read the first draft, I initially thought there would be concerns about the mandatory Contributor License Agreement (CLA) in section D.5 of the draft:
[...] unless there is a Contributor License Agreement to the contrary, whenever you make a contribution to a repository containing notice of a license, you license your contribution under the same terms, and agree that you have the right to license your contribution under those terms.
I was concerned this would establish the controversial practice of forcing CLAs on every GitHub user. I managed to find a post from a lawyer, Kyle E. Mitchell, who commented on the draft and, specifically, on the CLA. He outlined issues with wording and definition problems in that section of the draft. In particular, he noted that "contributor license agreement is not a legal term of art, but an industry term" and "is a bit fuzzy". This was clarified in the final draft, in section D.6, by removing the use of the CLA term and by explicitly mentioning the widely accepted norm for licenses: "inbound=outbound". So it seems that section D.6 is not really a problem: contributors do not need to necessarily delegate copyright ownership (as some CLAs require) when they make a contribution, unless otherwise noted by a repository-specific CLA. An interesting concern he raised, however, was with how GitHub conducted the drafting process. A blog post announced the change on February 7th with a link to a form to provide feedback until the 21st, with a publishing deadline of February 28th. This gave little time for lawyers and developers to review the document and comment on it. Users then had to basically accept whatever came out of the process as-is. Unlike every software project hosted on GitHub, the ToS document is not part of a Git repository people can propose changes to or even collaboratively discuss. While Mitchell acknowledges that "GitHub are within their rights to update their terms, within very broad limits, more or less however they like, whenever they like", he sets higher standards for GitHub than for other corporations, considering the community it serves and the spirit it represents. He described the process as:
[...] consistent with the value of CYA, which is real, but not with the output-improving virtues of open process, which is also real, and a great deal more pleasant.
Mitchell also explained that, because of its position, GitHub can have a major impact on the free-software world.
And as the current forum of preference for a great many developers, the knock-on effects of their decisions throw big weight. While GitHub have the wheel, and they've certainly earned it for now, they can do real damage.
In particular, there have been some concerns that the ToS change may be an attempt to further the already diminishing adoption of the GPL for free-software projects; on GitHub, the GPL has been surpassed by the MIT license. But Kuhn believes that attitudes at GitHub have begun changing:
GitHub historically had an anti-copyleft culture, which was created in large part by their former and now ousted CEO, Preston-Werner. However, recently, I've seen people at GitHub truly reach out to me and others in the copyleft community to learn more and open their minds. I thus have a hard time believing that there was some anti-copyleft conspiracy in this ToS change.

GitHub response

However, it seems that GitHub has actually been proactive in reaching out to the free software community. Kuhn noted that GitHub contacted the Conservancy to get its advice on the ToS changes. While he still thinks GitHub should fix the ambiguities quickly, he also noted that those issues "impact pretty much any non-trivial Open Source and Free Software license", not just copylefted material. When reached for comments, a GitHub spokesperson said:
While we are confident that these Terms serve the best needs of the community, we take our users' feedback very seriously and we are looking closely at ways to address their concerns.
Regardless, free-software enthusiasts have other concerns than the new ToS if they wish to use GitHub. First and foremost, most of the software running GitHub is proprietary, including the JavaScript served to your web browser. GitHub also created a centralized service out of a decentralized tool (Git). It has become the largest code hosting service in the world after only a few years and may well have become a single point of failure for free software collaboration in a way we have never seen before. Outages and policy changes at GitHub can have a major impact on not only the free-software world, but also the larger computing world that relies on its services for daily operation. There are now free-software alternatives to GitHub. GitLab.com, for example, does not seem to have similar licensing issues in its ToS and GitLab itself is free software, although based on the controversial open core business model. The GitLab hosting service still needs to get better than its grade of "C" in the GNU Ethical Repository Criteria Evaluations (and it is being worked on); other services like GitHub and SourceForge score an "F". In the end, all this controversy might have been avoided if GitHub had been generally more open about the ToS development process and had given more time for feedback and reviews by the community. Terms of service are notorious for being confusing and something of a legal gray area, especially for end users who generally click through without reading them. We should probably applaud the efforts made by GitHub to make its own ToS document more readable and hope that, with time, it will address the community's concerns.
Note: this article first appeared in the Linux Weekly News.

15 January 2016

Lars Wirzenius: Obnam 1.19 released (backup software)

I have just released version 1.19 of Obnam, the backup program. See the website at http://obnam.org for details on what the program does. The new version is available from git (see http://git.liw.fi) and as Debian packages from http://code.liw.fi/debian; it has been uploaded to Debian and will soon be in unstable. The NEWS file extract below gives the highlights of what's new in this version.

NOTE: Obnam has an EXPERIMENTAL repository format under development, called green-albatross. It is NOT meant for real use. It is likely to change in incompatible ways without warning. Do not use it unless you're willing to lose your backup.

Version 1.19, released 2016-01-15

Bug fixes:

Improvements to the manual:

Improvements to functionality:

29 December 2013

Russ Allbery: Review: The Incrementalists

Review: The Incrementalists, by Steven Brust & Skyler White
Publisher: Tor
Copyright: September 2013
ISBN: 0-7653-3422-4
Format: Hardcover
Pages: 304
The Incrementalists are a secret society that has been in continuous existence for forty thousand years. The members are immortal... sort of (more on that in a moment). They are extremely good at determining what signals, triggers, and actions have the most impact on people, and adept at using those triggers to influence people's behavior. And they're making the world better. Not in any huge, noticeable way, just a little bit, here and there. Incrementally. Brust and White completely had me with this concept. There are several SF novels about immortals, but (apart from the vampire subgenre) they mostly stay underground and find ways to survive without trying to change the surrounding human society. I love the idea of small, incremental changes, and I was looking forward to reading a book about how that would work. How do they choose what to do? How do they find the right place to push? What long-term goals would such an organization pursue, and what governance structure would they use? I'm still wondering about most of those things, sadly, since that isn't what this novel is about at all. I should be fair: a few of those questions hover in the background. There are some political arguments (that parallel arguments on Brust's blog), and a tiny bit about governance. But mostly this is a sort of romance, a fight over identity, and an extensive exploration of the mental "garden" that the Incrementalists use and build to hold their memories and share information with each other. The story revolves around two Incrementalists: Phil, who is one of the leaders and has one of the longest continuous identities of any of the group, and Renee, a new recruit by Phil who picks up the memories and capabilities (sort of) of the recently-deceased Incrementalist Celeste. Phil is the long-term insider, although a fairly laid-back one. Renee is the outsider, the one to which the story is happening at the beginning, who is taught how the Incrementalists' system works. The Incrementalists is told in alternating first-person viewpoint sections by Phil and Renee (making me wonder if the characters were each written by one of the authors). Once I got past my disappointment over this book not being quite what I wanted, the identity puzzles that Brust and White play with here caught my attention. The underlying "magic" of the Incrementalists is based on the "memory palace" concept of storing memories via visualizations, but takes it into something akin to personal alternate worlds. Their immortality isn't through physical immortality, but rather the ability to store their memories and personality in their garden, which is their term for the memory palace, and then have it imposed on someone else: a combination of reincarnation, mental takeover, and mental merging. This makes for complex interpersonal relationships and complex interactions between memory, belief, and identity, not to mention some major ethical issues, which come to a head in Phil and Renee's relationship and Celeste's lingering meddling. I think Brust and White are digging into some interesting questions of character here. There's a lot of emphasis in The Incrementalists on the ease with which people can be manipulated, and the Incrementalists themselves are far from immune. Choice and personal definition are both questionable concepts given how much influence other people have, how many vulnerabilities everyone carries with them, and how much one's opinions are governed by one's life history. 
That makes identity complex, and raises the question of whether one can truly define oneself. But, while all these ideas are raised, I think The Incrementalists dances around them more than engages them directly. It's similar to the background hook: yes, these people are slowly improving the world, and we see a little bit of that (and a little bit of them manipulating people for their own convenience), but the story doesn't fully engage with the idea. There's a romance, a few arguments, some tension, some world-building, but I don't feel like this book ever fully committed to any of them. One of the occasional failure modes of Brust for me is insufficient explanation and insufficient clarity, and I hit that here. My final impression of The Incrementalists is interesting, enjoyable, but vaguely disappointing. It's still a good story with some interesting characters and nice use of the memory palace concept, and I liked Renee throughout (although I think the development of the love story is disturbingly easy and a little weird). But I can't strongly recommend it, and I'm not sure if it's worth seeking out. Rating: 7 out of 10

10 October 2013

Russ Allbery: One more haul

Just a few more recently-released books that I had to pick up.

Sheila Bair: Bull by the Horns (non-fiction)
Elizabeth Bear: Book of Iron (sff)
Steven Brust & Skyler White: The Incrementalists (sff)
J.M. Coetzee: The Childhood of Jesus (mainstream)
Elizabeth Wein: Rose Under Fire (mainstream)
Walter Jon Williams: Knight Moves (sff)

Time to get back into the reading habit. That's the plan for the next couple of weeks.

2 July 2013

Ondřej Čertík: My impressions from the SciPy 2013 conference

I have attended the SciPy 2013 conference in Austin, Texas. Here are my impressions.

Number one is the fact that the IPython notebook was used by pretty much everyone. I use it a lot myself, but I didn't realize how ubiquitous it has become. It is quickly becoming the standard now. The IPython notebook uses Markdown, which in fact is better than ReST. The way to remember the "[]()" syntax for links is that in regular text you put links in () parentheses, so you do the same in Markdown and prepend [] for the text of the link, as in [SciPy](https://scipy.org). The other way to remember it is that [] feels more serious and thus is used for the text of the link. I stressed several times to +Fernando Perez and +Brian Granger how awesome it would be to have interactive widgets in the notebook. Fortunately that was pretty much preaching to the choir, as that's one of the first things they plan to implement good foundations for, and I just can't wait to use it.

It is now clear that the IPython notebook is the way to store computations that I want to share with other people, or to use as a "lab notebook" for myself, so that I can remember what exactly I did to obtain the results (for example, how exactly I obtained some figures from raw data). In other words --- instead of having sets of scripts and manual bash commands that have to be executed in a particular order to do what I want, I can just use an IPython notebook and put everything in there.

Number two is how big the conference has become since the last time I attended (a couple of years ago), yet it still has a friendly feeling. Unfortunately, I had to miss a lot of talks due to scheduling conflicts (there were three parallel sessions), so I look forward to seeing them on video.

+Aaron Meurer and I gave the SymPy tutorial (see the link for videos and other tutorial materials). It was nice to finally meet +Matthew Rocklin (a very active SymPy contributor) in person. He also had an interesting presentation
about symbolic matrices + Lapack code generation. +Jason Moore presented PyDy.
It's been a great pleasure for us to invite +David Li (still a high school student) to attend the conference and give a presentation about his work on sympygamma.com and live.sympy.org.

It was nice to meet the Julia guys, +Jeff Bezanson and +Stefan Karpinski. I contributed the Fortran benchmarks on Julia's website some time ago, but I had the feeling that a lot of them were quite artificial and not very meaningful. I think Jeff and Stefan confirmed my feeling. Julia seems to have a quite interesting type system and multiple dispatch, which SymPy should learn from.

I met the VTK guys +Matthew McCormick and +Pat Marion. One of the keynotes was given by +Will Schroeder from Kitware, about publishing. I remember him stressing the importance of managing dependencies well and of using a BSD-like license (as opposed to viral licenses like the GPL or LGPL), and noting that open source has pretty much won (i.e. it is now clear that that is the way to go).

I had great discussions with +Francesc Alted, +Andy Terrel, +Brett Murphy, +Jonathan Rocher, +Eric Jones, +Travis Oliphant, +Mark Wiebe, +Ilan Schnell, +Stéfan van der Walt, +David Cournapeau, +Anthony Scopatz, +Paul Ivanov, +Michael Droettboom, +Wes McKinney, +Jake Vanderplas, +Kurt Smith, +Aron Ahmadia, +Kyle Mandli, +Benjamin Root and others.


It's also been nice to have a chat with +Jason Vertrees and other guys from Schrödinger.

One other thing that I realized last week at the conference is that pretty much everyone agreed that NumPy should act as the default way to represent memory (no matter if the array was created in Fortran or other code) and allow manipulations on it. Faster libraries like Blaze or ODIN should then hook themselves into NumPy using multiple dispatch. Also SymPy would then hook itself in so that it can be used with array operations natively. Currently SymPy does work with NumPy (see our tests for some examples of what works), but the solution is a bit fragile (it is not possible to override NumPy behavior, but because NumPy supports general objects, we simply give it SymPy objects and things mostly work).

Similar to this, I would like to create multiple dispatch in the SymPy core itself, so that other (faster) libraries for symbolic manipulation can hook themselves in and have their own (faster) multiplication, expansion or series expansion called instead of the SymPy defaults implemented in pure Python.

Other blog posts from the conference:

18 February 2013

John Sullivan: SCALE

I will be speaking at the Southern California Linux Expo (and yes, given the topics covered, it's missing a GNU). My talk, "Four Freedoms for Freedom," is on Sunday, February 24, 2013 from 16:30 to 17:30.
The most obvious people affected by all four of the freedoms that define free software are the programmers. They are the ones who will likely want to -- and are able to -- modify software running on their computers. But free software is a movement to advance and defend freedom for anyone and everyone using any computing device, not just programmers. In many countries now, given the ubiquity of tablets, phones, laptops and desktops, "anyone and everyone using any computing device" means nearly all citizens. But new technological innovations in these areas keep coming with new restrictions, frustrating and controlling users even while creating a perception of empowerment. The Free Software Foundation wants to gain the support and protect the interests of everyone, not just programmers. How do we reach people who have no intention of ever modifying a program, and how do we help them?
Other presentations on my list to check out (in chronological order, some conflicting):

If you will be there and want to meet up, drop me a line.

31 October 2011

Russell Coker: SE Linux Status in Debian 2011-10

Debian/Unstable Development

deb http://www.coker.com.au wheezy selinux

The above APT sources.list line has my repository for SE Linux packages that have been uploaded to Unstable and which will eventually go to testing and then the Wheezy release (if they aren't obsoleted first). I have created that repository for people who want to track SE Linux development without waiting for an Unstable mirror to update. In that repository I've included a new version of policycoreutils that now includes mcstrans and also has support for newer policy, such that the latest selinux-policy-default package can be installed. The version that is currently in Testing supports upgrading policy on a running system but doesn't support installing the policy on a system that previously didn't run SE Linux.

I have also uploaded SE Linux policy packages from upstream release 20110726, compared to the previous packages which were from upstream release 20100524. As the numbers imply, there is 14 months of upstream policy development, which changes many things. Many of the patches from my Squeeze policy packages are not yet incorporated in the policy I have uploaded to Unstable. I won't guarantee that an Unstable system in Enforcing mode will do anything other than boot up and allow you to login via ssh. It's definitely not ready for production, but it's also very suitable for development (10 years ago I did a lot of development on SE Linux systems that often denied login access, it wasn't fun).

Kyle Moffett submitted a patch for libselinux which dramatically changed the build process. As Manoj (who wrote the previous build scripts) was not contactable, I accepted Kyle's patch as provided. Thanks for the patch Kyle, and thanks for all your work over the years Manoj. Anyway the result of these changes should mean that it's easier to bootstrap Debian on a new architecture and easier to support multi-arch, but I haven't tested either of these.

Squeeze

The policy packages from Squeeze can't be compiled on Unstable. The newer policy compilation tool chain is more strict about how some things can be declared and used, thus some policy which was fairly dubious but usable is now invalid. While it wouldn't be difficult to fix those problems, I don't plan to do so. There is no good reason for compiling Squeeze policy on Unstable now that I've uploaded a new upstream release.

deb http://www.coker.com.au squeeze selinux

I am still developing Squeeze policy and releasing it in the above APT repository. I will also get another policy release in a Squeeze update if possible to smooth the transition to Wheezy; the goal is that Squeeze policy will be usable on Wheezy even if it can't be compiled. Also note that the compilation failures only affect the Debian package; it should still be possible to make modules for local use on a Wheezy system with Squeeze policy.

MLS

On Wednesday I'm giving a lecture at my local LUG about MLS on SE Linux. I hope to have a MLS demonstration system available to LUG members by then. Ideally I will have a MLS system running on a virtual server somewhere that's accessible, as well as a Xen/KVM image on a USB stick that can be copied by anyone at the meeting. I don't expect to spend much time on any aspect of SE Linux unrelated to MLS for the rest of the week.

Version Control

I need to change the way that I develop SE Linux packages, particularly the refpolicy source package (the source of selinux-policy-default among others). A 20,000 line single patch is difficult to work with!
I will have to switch to using quilt; once I get it working well it should save me time on my own development as well as making it easier to send patches upstream (see the sketch below). Also I need to set up a public version control system so I can access the source from my workstation, laptop, and netbook. While doing that I might as well make it public so any interested people can help out. Suggestions on what type of VCS to use are welcome.

How You Can Help

Sorting out the mess that is the refpolicy package, sending patches upstream, and migrating to a VCS is a fair bit of work. But there are lots of small parts. Sending patches upstream is a job that could be done in small pieces. Writing new policy is not something to do yet; there's not much point in doing that while I still haven't merged all the patches from Squeeze (maybe next week). However I can provide the missing patches to anyone who wants to review them and assist with the merging.

I have a virtual server that has some spare capacity. One thing I would like to do is to have some virtual machines running Unstable with various configurations of server software. Then we could track Unstable on those images and use automated testing to ensure that nothing breaks. If anyone wants root access on a virtual server to install their favorite software then let me know. But such software needs to be maintained and tested!
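For the record, a minimal sketch of the quilt workflow mentioned above (the patch and file names are made up for illustration):

    export QUILT_PATCHES=debian/patches
    quilt new split-out-mls-fixes.patch          # start a new patch on top of the stack
    quilt add policy/modules/system/init.te      # register the file before editing it
    $EDITOR policy/modules/system/init.te
    quilt refresh                                # record the edits into the patch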

16 September 2009

Benjamin Mako Hill: Ubuntu Books

[image: ubuntu_books_2009.png]

As I am attempting to focus on writing projects that are more scholarly and academic on the one hand (i.e., work for my day job at MIT) and more geared toward communicating free software principles to wider audiences on the other (e.g., Revealing Errors), I have little choice but to back away from technical writing. However, this last month has seen the culmination of a bunch of work that I've done previously: two book projects that have been ongoing for the last couple of years or more have finally hit the shelves! The first is the fourth edition (!) of the bestselling Official Ubuntu Book. Much to my happiness, the book continues to do extremely well and continues to receive solid reviews. This edition brings the book up to date for Jaunty/9.04 and adds a few pieces here and there. Although I was less active in this update than I have been in the past, Corey Burger continued his great work and Matthew Helmke of Ubuntu Forums fame stepped up to take a leading role. As I plan to retreat into a more purely advisory role for the next edition of this book, I'm thrilled to know that the project will remain in such capable hands. I'm also thrilled that this edition of the book, like all previous editions, is released as a free cultural work under CC BY-SA.

For years, I have heard people say that although they like the Official Ubuntu Book, it was a little too focused on desktops and on beginners for their tastes. The newly released Official Ubuntu Server Book is designed to address those concerns by providing an expanded guide to more "advanced" topics, targeted at system administrators trying to get to know Ubuntu. Kyle Rankin planned and produced most of this book, but I was thrilled to help poke it in places, chime in during the planning process, and contribute a few chapters. Kyle is a knowledgeable sysadmin and has done a wonderful job with the book. I only wish I could take more credit. The publisher has promised me that, at the very least, my chapters will be distributed under CC BY-SA.

Many barriers to the adoption of free software are technical, and a good book can, and often does, make a big difference. I enjoy being able to help address that problem. I also truly enjoy technical writing: I find it satisfying to share something I know well with others, and it is great to know that I've helped someone with their problems. While I'm sure I'll be able to do things here and there, I'll miss technical writing as I attempt to "cut back."

23 April 2009

Adrian von Bidder: Let's kill KHTML

Reading Kyle's view on Konqueror and KHTML's current status: I couldn't agree more. I use konqueror instead of Firefox because I quite like its GUI, and its integration into KDE is obviously better than Firefox's. Still, issues with various websites prompt me to have an Iceweasel window open as well for quite a large part of the time. Let's just switch to WebKit, so the market only has to care about Gecko and WebKit and can ignore one more marginal rendering engine. I see libqt-webkit 4.5 is in experimental, and a Google query on "debian konqueror webkit" at least shows an Ubuntu packaging effort for the Konqueror WebKit KPart, so the days of khtml on my desktop are certainly nearing their end. At this point: kudos to the KDE folks (Debian and upstream). KDE 4.2 is really, really usable; the remaining issues are really small. And, as long as I don't try to interfere manually like I did in my first attempt, migrating the KDE settings from ~/.kde4 to ~/.kde actually worked just fine on my netbook.

25 January 2009

Kyle McMartin: pcspkr? that's like flickr, right?

two words^W^Wone command.

# echo blacklist pcspkr > /etc/modprobe.d/blacklist-pcspkr

That is all.
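Well, one caveat, in case it isn't obvious: the blacklist entry only stops the module from being auto-loaded from now on. If pcspkr is already loaded, unload it on the spot:

# rmmod pcspkr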

22 January 2009

Kyle McMartin: fudcon f11 thumbnails

[thumbnails: tired-looking spot; pfrields intro (three shots); pjones; notting and dcbw; davej representing; davej; ajax hacks; gregdek] More coming when I can find the time to convert them to jpg.

21 January 2009

Kyle McMartin: fudcon f11

I was in Boston last week for FUDCon F11. As usual, despite bringing several tonnes of camera gear, I couldn't be bothered to pull it out aside from a few (terrible) casual shots. But hey, I'm not big on being spontaneous. Anyway, the two hackfest days were really productive (as were the days I managed to make it out to the Red Hat office), although sadly we didn't make it into MIT for the Sunday hackfest day, due to the weather and the Sunday bus schedule in Arlington.

[photo: paul's intro session]

Instead of DaveJ doing his usual "what's going on in the kernel for F$next" talk, I pitched that session, and Dave held a session on dracut, his and Jeremy's new framework for building initramfs images. My session was well attended and drew a lot of questions about various things like the new drivers/staging policy (though that's a post for another time).

[photo: dcbw and notting]

The rest of the time was productively spent poking at kernel updates for F9 and F10, rebasing them to 2.6.28, and poking at 2.6.29-rc1 for rawhide. This meant btrfs, while not ready for your critical data, was in mainline! After pushing out the new build, I spent most of the weekend doing some btrfs testing and pushing out a new btrfsprogs build. Once they were composed into rawhide, I believe Fedora became the first distro to ship working mainline btrfs.

[photo: paul's closing session]

All said, it was a fairly productive weekend, and FUDPub on Saturday night was a great chance to catch up with a bunch of my coworkers who I rarely get to see. I kind of wish I had bothered to set up an umbrella and get some decent shots of my friends, colleagues and coworkers having a raucous good time, but oh well, there's always next release.

Off-topic
I absolutely hate being political, and desperately hate posting about life or whatever, since I don't care about yours and don't see why you should care about mine, but the callous arrogance of the striking transit workers here in Ottawa is beginning to irritate me. These are people who've threatened to picket students attempting to get to school if the universities accepted funding from the municipal government to continue running replacement shuttles, and (though they've reached a side agreement now) had threatened to picket over attempts to hire more drivers to increase the ParaTranspo service, which provides shuttles for the elderly and less-able-bodied residents of Ottawa. My point is, one of the shops I walked past on the high street yesterday said it best: [photo of a shop sign; possibly NSFW depending on your point of view]. Given that I live in a (fairly) suburban part of Ottawa, the strike means I get to enjoy a forty minute walk to the nearest (competent) grocery store or coffee shop, so I haven't bothered much. But hey, wonderful North American urban planning strikes again. Huzzah.

7 September 2008

Kyle McMartin: CONFIG_PRINTK_TIME, what is time?

GMsoft reported a few weeks ago that kernels on his A500 were hanging on startup with CONFIG_PRINTK_TIME enabled. Knowing that all this option does is prefix the kernel messages with a timestamp, I was interested to find out how it could possibly be causing a hard hang. Obviously, the first thing to do is to try and reproduce the problem. But I was completely unable to reproduce it on my RP3440… How very strange. Ok, well, let’s poke at the A500 and see if it will happen there. It does! Spooky…[1] Well, now I was really interested. Let’s see what CONFIG_PRINTK_TIME actually does… In kernel/printk.c::vprintk, around the if (printk_time) section, we see that after printing the priority level tag, such as KERN_INFO, etc., we attempt to print a timestamp obtained with cpu_clock. The value returned from cpu_clock is in nanoseconds, so before it’s printed it is munged into two smaller integers: the whole-seconds portion and the decimal portion. Ok, this gives us a great place to start looking for ways PA-RISC could be tripping up on these codepaths. The first attempt was a fairly fruitless search of the cpu_clock call chain, which turned up nothing suspicious aside from a maze of CONFIG_ options. It turned out that on non-x86 this code reduces to some fairly trivial stuff, none of which could really have been causing a hang. However, we now had the basis for a fairly good hunch: if it wasn’t cpu_clock going awry, then the do_div routine or sprintf must have been causing it. A quick boot-test to comment out the do_div call and replace it with a fixed value resulted in a working system. Hooray. Then it hit me like a freight train, when I saw what was being printed. This was the banner line… the very first thing printed in init/main.c, right after jumping into the kernel in virtual mode at start_kernel.
Linux version 2.6.27-rc5-00283-g70bb089 (kyle@shortfin) (gcc version 4.3.1 (GCC) ) #2 SMP Sat Sep 6 19:45:05 PDT 2008
FP[0] enabled: Rev 1 Model 20
The 64-bit Kernel has started…
console [ttyB0] enabled
Initialized PDC Console for debugging.
Seeing the “FP[0] enabled” line immediately caused me to smack my head at the obviousness of the problem. We were attempting to do a division which, because of how the kernel and libgcc are compiled, tries to use the fpu. However, this was faulting on the very first printk in the kernel, well before any of the architecture-specific initialization is done. A quick hack removing any printks before we initialized the fpu fixed the problem as well. But dirty hacks are not appropriate for mainline. I thought long and hard about a nice way to fix this, but, really, open coding firmware calls in assembly didn’t strike my fancy. There is, however, another easy way to solve it: I ended up replacing the jump to start_kernel in head.S with my own function that turns on the fpu and then calls start_kernel. Kind of ugly, but at least the fix is entirely contained to arch/parisc instead of leaking all over the tree. The patch is available, but I’ve been too busy to push it this last release-cycle (and I didn’t really want to tempt fate by pushing a not-quite-actually-serious fix outside of -rc1 time). This has been another post brought to you by the maintainer of an inconsequential architecture. We do hope you enjoy it.

1. This ended up being due to either 1) the fpu being enabled by firmware on the PA8800, or 2) the fact that I was doing warm resets instead of cold starts.
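An aside for anyone who wants to experiment with this option: on kernels of roughly this vintage, printk_time is also exposed as a module parameter, so, assuming CONFIG_PRINTK_TIME was built in (and that the parameter is writable on your build), the timestamps can be toggled without recompiling. Append printk.time=1 to the kernel command line at boot, or flip the sysfs knob for the same flag at runtime:

# echo 1 > /sys/module/printk/parameters/time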
