Last month, I traveled to Kenya to attend a conference called State of the Map 2024 (SotM for short), which is an annual meetup of OpenStreetMap contributors from all over the world. It was held at the University of Nairobi Towers in Nairobi, from the 6th to the 8th of September.
University of Nairobi.
I have been contributing to OpenStreetMap for the last three years, and this conference seemed like a great opportunity to network with others in the community. As soon as I came across the travel grant announcement, I jumped in and filled out the form immediately. I was elated when I was selected for the grant and couldn't wait to attend. The grant had an upper limit of 1200 and covered food, accommodation, travel and miscellaneous expenses such as the visa fee.
Pre-travel tasks included obtaining Kenya's eTA and getting a yellow fever vaccine. Before the conference, Mikko from the Humanitarian OpenStreetMap Team introduced me to Rabina and Pragya from Nepal, Ibtehal from Bangladesh, and Sajeevini from Sri Lanka. We all booked the Nairobi Transit Hotel, which was within walking distance of the conference venue. Pragya, Rabina, and I traveled together from Delhi to Nairobi, while Ibtehal was my roommate in the hotel.
Our group at the conference.
The venue, University of Nairobi Towers, was a tall building and the conference was held on the fourth, fifth and sixth floors. The open area on the fifth floor of the building had a nice view of Nairobi's skyline and was a perfect spot for taking pictures. Interestingly, the university had a wing dedicated to Mahatma Gandhi, who is regarded in India as the Father of the Nation.
View of Nairobi's skyline from the open area on the fifth floor.
A library in Mahatma Gandhi wing of the University of Nairobi.
The diversity of the participants was mind-blowing, with people coming from a whopping 54 countries. I was surprised to notice that I was the only participant traveling from India, despite India having a large OpenStreetMap community. That said, there were two other Indian participants who traveled from other countries. I finally got to meet Arnalie (from the Philippines) and Letwin (from Zimbabwe), both of whom I had only met online before. I had met Anisa (from Albania) earlier during DebConf 2023. But I missed Mikko and Honey from the Humanitarian OpenStreetMap Team, whom I knew from the Open Mapping Guru program.
I learned about the extent of OSM use through Pragya and Rabina's talk; about the logistics of running the OSM Board in the OSMF (OpenStreetMap Foundation) session; about the Youth Mappers from Sajeevini; about the OSM activities in Malawi from Priscilla Kapolo; and about mapping in Zimbabwe from Letwin. However, I missed Ibtehal's lightning session. The ratio of women speakers and participants at the conference was impressive, and I hope we can get such gender representation in our Delhi/NCR mapping parties.
One of the conference halls where talks took place.
Outside of talks, the conference also had lunch and snack breaks, giving ample time for networking with others. In the food department, there were many options for a lacto-ovo vegetarian like myself, including potatoes, rice, beans, chips, etc. I found out that the milk tea in Kenya (referred to as "white tea") is usually not as strong as in India, so I switched to coffee (which is also called "white coffee" when taken with milk). The food wasn't spicy, but I can't complain :) Fruit juices served as a nice addition to lunch.
One of the lunch meals served during the conference.
At the end of the second day of the conference, there was a surprise in store for us: a bus ride to the Bao Box restaurant. The ride gave us the experience of a typical Kenyan matatu (privately-owned minibuses used as share taxis), complete with loud rap music. I remember one of the songs being Kraff's Nursery Rhymes. That day, I was wearing an original Kenyan cricket jersey, one that belonged to Dominic Wesonga, who represented Kenya in four ODIs. This confused Priscilla Kapolo, who asked if I was from Kenya! Anyway, while it served as a good conversation starter, it didn't attract as much attention as I expected :) I had some pizza and chips there, and later some drinks with Ibtehal. After the party, Piyush went with us to our hotel and we played a few games of UNO.
Minibus which took us from the university to Bao Box restaurant.
This minibus in the picture gave a sense of a real matatu.
I am grateful to the organizers Laura and Dorothea for introducing me to Nikhil when I was searching for a companion for my post-conference trip. Nikhil was one of the aforementioned Indian participants, and a wildlife lover. We had some nice conversations; he wanted to go to the Maasai Mara National Reserve, but it was too expensive for me. In addition, all the safaris were multi-day affairs, and I wasn't keen on being around wildlife for that long. Eventually I chose to go my own way, exploring the coastal side and visiting Mombasa.
While most of the work regarding the conference was done using free software (including the reimbursement form and Mastodon announcements), I was disappointed by the use of WhatsApp for coordination with the participants. I don't use WhatsApp and so was left out. WhatsApp is proprietary software (they do not provide the source code) and users don't control it. It is common to highlight that OpenStreetMap is controlled by users and the community, rather than a company; this should apply to WhatsApp as well.
My suggestion is to use XMPP, which shares similar principles with OpenStreetMap, as it is privacy-respecting, controlled by users, and powered by free software. I understand the concern that there might not be many participants already using XMPP. Although it is a good idea to onboard people to free software like XMPP, we can also create a Matrix group and bridge it with both the XMPP group and the Telegram group. In fact, using Matrix and bridging it with Telegram is how I communicated with the South Asian participants. It's not ideal, as Telegram's servers are proprietary and centralized, but it's certainly much better than creating a WhatsApp-only group. The setup can be bridged with IRC as well. Self-hosted mailing lists for participants are also a good idea.
Finally, I would like to thank SotM for the generous grant, enabling me to attend this conference, meet the diverse community behind OSM and visit the beautiful country of Kenya. Stay tuned for the blog post on my Kenya trip.
Thanks to Sahilister, Contrapunctus, Snehal and Badri for reviewing the draft of this blog post before publishing.
Review: The Library of Broken Worlds, by Alaya Dawn Johnson
Publisher: Scholastic Press
Copyright: June 2023
ISBN: 1-338-29064-9
Format: Kindle
Pages: 446
The Library of Broken Worlds is a young-adult far-future science
fantasy. So far as I can tell, it's stand-alone, although more on that
later in the review.
Freida is the adopted daughter of Nadi, the Head Librarian, and her
greatest wish is to become a librarian herself. When the book opens,
she's a teenager in highly competitive training. Freida is low-wetware,
without the advanced and expensive enhancements of many of the other
students competing for rare and prized librarian positions, which she
makes up for by being the most audacious. She doesn't need wetware to
commune with the library material gods. If one ventures deep into their
tunnels and consumes their crystals, direct physical communion is
possible.
The library tunnels are Freida's second home, in part because that's where
she was born. She was created by the Library, and specifically by Iemaja,
the youngest of the material gods. Precisely why is a mystery. To Nadi,
Freida is her daughter. To Quinn, Nadi's main political rival within the
library, Freida is a thing, a piece of the library, a secondary and
possibly rogue AI. A disruptive annoyance.
The Library of Broken Worlds is the sort of science fiction where
figuring out what is going on is an integral part of the reading
experience. It opens with a frame story of an unnamed girl (clearly
Freida) waking the god Nameren and identifying herself as designed for
deicide. She provokes Nameren's curiosity and offers an Arabian Nights
bargain: if he wants to hear her story, he has to refrain from killing her
for long enough for her to tell it. As one might expect, the main
narrative doesn't catch up to the frame story until the very end of the
book.
The Library is indeed some type of library that librarians can search for
knowledge that isn't available from more mundane sources, but Freida's
personal experience of it is almost wholly religious and oracular. The
library's material gods are identified as AIs, but good luck making sense
of the story through a science fiction frame, even with a healthy
allowance for sufficiently advanced technology being indistinguishable
from magic. The symbolism and tone is entirely fantasy, and late in the
book it becomes clear that whatever the material gods are, they're not
simple technological AIs in the vein of, say, Banks's Ship Minds.
Also, the Library is not solely a repository of knowledge. It is the
keeper of an interstellar peace. The Library was founded after the Great
War, to prevent a recurrence. It functions as a sort of legal system and
grand tribunal in ways that are never fully explained.
As you might expect, that peace is based more on stability than fairness.
Five of the players in this far future of humanity are the Awilu, the most
advanced society and the first to leave Earth (or Tierra as it's called
here); the Mahām, who possess the material war god Nameren of the frame
story; the Lunars and Martians, who dominate the Sol system; and the
surviving Tierrans, residents of a polluted and struggling planet that is
ruthlessly exploited by the Lunars. The problem facing Freida and her
friends at the start of the book is a petition brought by a young Tierran
against Lunar exploitation of his homeland. His name is Joshua, and
Freida is more than half in love with him. Joshua's legal argument
involves interpretation of the freedom node of the treaty that ended the
Great War, a node that precedent says gives the Lunars the freedom to
exploit Tierra, but which Joshua claims has a still-valid originalist
meaning granting Tierrans freedom from exploitation.
There is, in short, a lot going on in this book, and "never fully
explained" is something of a theme. Freida is telling a story to Nameren
and only explains things Nameren may not already know. The reader has to
puzzle out the rest from the occasional hint. This is made more difficult
by the tendency of the material gods to communicate only in visions or
guided hallucinations, full of symbolism that the characters only partly
explain to the reader.
Nonetheless, this did mostly work, at least for me. I started this book
very confused, but by about the midpoint it felt like the background was
coming together. I'm still not sure I understand the aurochs, baobab, and
cicada symbolism that's so central to the framing story, but it's the
pleasant sort of stretchy confusion that gives my brain a good workout. I
wish Johnson had explained a few more things plainly, particularly near
the end of the book, but my remaining level of confusion was within my
tolerances.
Unfortunately, the ending did not work for me. The first time I read it,
I had no idea what it meant. Lots of baffling, symbolic things happened
and then the book just stopped. After re-reading the last 10%, I think
all the pieces of an ending and a bit of an explanation are there, but
it's absurdly abbreviated. This is another book where the author appears
to have been finished with the story before I was.
This keeps happening to me, so this probably says something more about me
than it says about books, but I want books to have an ending. If the
characters have fought and suffered through the plot, I want them to have
some space to be happy and to see how their sacrifices play out, with more
detail than just a few vague promises. If much of the book has been
puzzling out the nature of the world, I would like some concrete
confirmation of at least some of my guesswork. And if you're going to end
the book on radical transformation, I want to see the results of that
transformation. Johnson does an excellent job showing how brutal the
peace of the powerful can be, and is willing to light more things on fire
over the course of this book than most authors would, but then doesn't
offer the reader much in the way of payoff.
For once, I wish this stand-alone turned out to be a series. I think an
additional book could be written in the aftermath of this ending, and I
would definitely read that novel. Johnson has me caring deeply about
these characters and fascinated by the world background, and I'd happily
spend another 450 pages finding out what happens next. But,
frustratingly, I think this ending was indeed intended to wrap up the
story.
I think this book may fall between a few stools. Science fiction readers
who want mysterious future worlds to be explained by the end of the book
are going to be frustrated by the amount of symbolism, allusion, and
poetic description. Literary fantasy readers, who have a higher tolerance
for that style, are going to wish for more focused and polished writing.
A lot of the story is firmly YA: trying and failing to fit in, developing
one's identity, coming into power, relationship drama, great betrayals and
regrets, overcoming trauma and abuse, and unraveling lies that adults tell
you. But this is definitely not a straight-forward YA plot or world
background. It demands a lot from the reader, and while I am confident
many teenage readers would rise to that challenge, it seems like an
awkward fit for the YA marketing category.
About 75% of the way in, I would have told you this book was great and you
should read it. The ending was a let-down and I'm still grumpy about it.
I still think it's worth your attention if you're in the mood for a
sink-or-swim type of reading experience. Just be warned that when the ride
ended, I felt unceremoniously dumped on the pavement.
Content warnings: Rape, torture, genocide.
Rating: 7 out of 10
This is the second part of how I build a read-only root setup for my router. You might want to read part 1 first, which covers the initial boot and general overview of how I tie the pieces together. This post will describe how I build the squashfs image that forms the main filesystem.
Most of the build is driven from a script, make-router, which I'll dissect below. It's highly tailored to my needs, and this is a fairly lengthy post, but hopefully the steps I describe prove useful to anyone trying to do something similar.
Breakdown of make-router
#!/bin/bash
# Either rb3011 (arm) or rb5009 (arm64)
#HOSTNAME="rb3011"
HOSTNAME="rb5009"
if [ "x${HOSTNAME}" == "xrb3011" ]; then
	ARCH=armhf
elif [ "x${HOSTNAME}" == "xrb5009" ]; then
	ARCH=arm64
else
	echo "Unknown host: ${HOSTNAME}"
	exit 1
fi
It's a bash script, and I allow building for either my RB3011 or RB5009, which means a different architecture (32 vs 64 bit). I run this script on my Pi 4, which means I don't have to mess about with QemuUserEmulation.
BASE_DIR=$(dirname $0)
IMAGE_FILE=$(mktemp --tmpdir router.${ARCH}.XXXXXXXXXX.img)
MOUNT_POINT=$(mktemp -p /mnt -d router.${ARCH}.XXXXXXXXXX)
# Build and mount an ext4 image file to put the root file system in
dd if=/dev/zero bs=1 count=0 seek=1G of=${IMAGE_FILE}
mkfs -t ext4 ${IMAGE_FILE}
mount -o loop ${IMAGE_FILE} ${MOUNT_POINT}
I build the image in a loopback ext4 file on tmpfs (my Pi4 is the 8G model), which makes things a bit faster.
# Add dpkg excludes
mkdir -p ${MOUNT_POINT}/etc/dpkg/dpkg.cfg.d/
cat <<EOF > ${MOUNT_POINT}/etc/dpkg/dpkg.cfg.d/path-excludes
# Exclude docs
path-exclude=/usr/share/doc/*
# Only locale we want is English
path-exclude=/usr/share/locale/*
path-include=/usr/share/locale/en*/*
path-include=/usr/share/locale/locale.alias
# No man pages
path-exclude=/usr/share/man/*
EOF
Create a dpkg excludes config to drop docs, man pages and most locales before we even start the bootstrap.
Actually do the debootstrap step, including a bunch of extra packages that we want.
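(The debootstrap command itself isn't shown in this excerpt; the following is a minimal sketch of what the step looks like. The suite and the --include package list are illustrative assumptions, not the author's actual choices.)
# Bootstrap a minimal Debian install into the image, pulling in extra packages
debootstrap --arch=${ARCH} --include=openssh-server,mosquitto,watchdog bullseye ${MOUNT_POINT} https://deb.debian.org/debian/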
# Install mqtt-arp
cp ${BASE_DIR}/debs/mqtt-arp_1_${ARCH}.deb ${MOUNT_POINT}/tmp
chroot ${MOUNT_POINT} dpkg -i /tmp/mqtt-arp_1_${ARCH}.deb
rm ${MOUNT_POINT}/tmp/mqtt-arp_1_${ARCH}.deb
# Frob the mqtt-arp config so it starts after mosquitto
sed -i -e 's/After=.*/After=mosquitto.service/' ${MOUNT_POINT}/lib/systemd/system/mqtt-arp.service
I haven't uploaded mqtt-arp to Debian, so I install a locally built package, and ensure it starts after mosquitto (the MQTT broker), given they're running on the same host.
# Frob watchdog so it starts earlier than multi-user
sed -i -e 's/After=.*/After=basic.target/' ${MOUNT_POINT}/lib/systemd/system/watchdog.service
# Make sure the watchdog is poking the device file
sed -i -e 's/^#watchdog-device/watchdog-device/' ${MOUNT_POINT}/etc/watchdog.conf
Watchdog timeouts were particularly an issue on the RB3011, where the default timeout didn't give enough time to reach multi-user mode before the watchdog would reset the router. Not helpful, so alter the config to start it earlier (and make sure it's configured to actually kick the device file).
# Clean up docs + locales
rm -r ${MOUNT_POINT}/usr/share/doc/*
rm -r ${MOUNT_POINT}/usr/share/man/*
for dir in ${MOUNT_POINT}/usr/share/locale/*/; do
	if [ "${dir}" != "${MOUNT_POINT}/usr/share/locale/en/" ]; then
		rm -r ${dir}
	fi
done
Clean up any docs etc that ended up installed.
# Set root password to root
echo "root:root" | chroot ${MOUNT_POINT} chpasswd
The only login method is an SSH key to the root account, though I suppose this allows someone to execute a privilege escalation from a daemon user, so I should probably randomise it. The password does need to be known, though, so it's possible to log in via the serial console for debugging.
There are config files that are easier to replace wholesale, some of which are specific to the hardware (e.g. related to network interfaces). See below for some more details.
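(The copy itself isn't shown in this excerpt; a sketch of the sort of step involved, using a hypothetical per-host directory layout, might be:)
# Hypothetical layout: shared and per-host config trees kept alongside the script
cp -r ${BASE_DIR}/etc-common/. ${MOUNT_POINT}/etc/
cp -r ${BASE_DIR}/etc-${HOSTNAME}/. ${MOUNT_POINT}/etc/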
# Build symlinks into flash for boot / modules
ln -s /mnt/flash/lib/modules ${MOUNT_POINT}/lib/modules
rmdir ${MOUNT_POINT}/boot
ln -s /mnt/flash/boot ${MOUNT_POINT}/boot
The kernel + its modules live outside the squashfs image, on the USB flash drive that the image lives on. That makes for easier kernel upgrades.
# Put our git revision into os-release
echo -n "GIT_VERSION=" >> ${MOUNT_POINT}/etc/os-release
(cd ${BASE_DIR} ; git describe --tags) >> ${MOUNT_POINT}/etc/os-release
Always helpful to be able to check the image itself for what it was built from.
# Add some stuff to root's .bashrc
cat <<EOF >> ${MOUNT_POINT}/root/.bashrc
alias ls='ls -F --color=auto'
eval "\$(dircolors)"
case "\$TERM" in
xterm*|rxvt*)
PS1="\\[\\e]0;\\u@\\h: \\w\a\\]\$PS1"
;;
*)
;;
esac
EOF
Just some niceties for when I do end up logging in.
# Save the installed package list off
chroot ${MOUNT_POINT} dpkg --get-selections > /tmp/wip-installed-packages
Save off the installed package list. This was particularly useful when trying to replicate the existing router setup and making sure I had all the important packages installed. It doesn't really serve a purpose now.
In terms of the config files I copy into /etc, the following are shared across both routers:
Breakdown of shared config
Most of us carry cell phones with us almost everywhere we go. So much so that we often forget not just the usefulness, but even the joy, of having our own radios. For instance:
When traveling to national parks or other wilderness areas, family and friends can keep in touch even where there is no cell coverage.
It is a lot faster to just push a button and start talking than it is to unlock a phone, open the phone app, select a person, wait for the call to connect, wait for the other person to answer, etc. "I'm heading back." "OK." Boom, 5 seconds, done. A phone user wouldn't have even dialed in that time.
A whole group of people can be on the same channel.
You can often buy a radio for less than the monthly cost of a cell plan.
From my own experience, as a person and a family that enjoys visiting wilderness areas, having radio communication is great. I have also heard from others that they're also very useful on cruise ships (I've never been on one so I can't attest to that).
There is also a sheer satisfaction in not needing anybody else's infrastructure, not paying any sort of monthly fee, and setting up the radios ourselves.
How these services fit in
This article is primarily about handheld radios that can be used by anybody. I laid out some of their advantages above. Before continuing, I should point out some of the other services you may consider:
Cell phones, obviously. Due to the impressive infrastructure you pay for each month (many towers in high locations), in areas of cell coverage you have the ability to connect to so many other phones around the world. With radios like those discussed here, your range will likely be a few miles.
Amateur Radio has often been a decade or more ahead of what you see in these easy personal radio devices. You can unquestionably get amateur radio devices with many more features and better performance. However, generally speaking, each person that transmits on an amateur radio band must be licensed. Getting an amateur radio license isn't difficult, but it does involve passing a test and some time studying for the exam, so it isn't something you can count on random friends or family members being able to do. That said, I have resources on Getting Started With Amateur Radio and it's not as hard as you might think! There are also a lot of reasons to use amateur radio if you want to go down that path.
Satellite messengers such as the Garmin inReach or Zoleo can send SMS-like messages from anywhere on the globe with a clear view of the sky. They also often have SOS features. While these are useful safety equipment, it can take many minutes for a message to be sent and received; it's not like an interactive SMS conversation, and there are places where local radios will have better signal. Notably, satellite messengers are almost useless indoors and can have trouble in areas without a clear view of the sky, such as dense forests, valleys, etc.
For very short-range service, Briar can form a mesh over Bluetooth from cell phones or over Tor, if Internet access is available.
Dedicated short-message services (mesh networks like Meshtastic or Beartooth) have no voice capability, but share GPS locations and short text messages over their own local mesh. Generally they need to pair to a cell phone (even if that phone has no cell service) for most functionality.
Yggdrasil can do something similar over ad-hoc WiFi, but it is a lower-level protocol and you'd need some sort of messaging to run atop it.
This article is primarily about the USA, though these concepts, if not the specific implementation, apply in many other areas as well.
The landscape of easy personal radios
The oldest personal radio service in the US is Citizens Band (CB). Because it uses a lower frequency band than the others, handheld radios are larger, heavier, and less efficient. It is mostly used in vehicles or other installations where size isn't an issue.
The FRS/GMRS services mostly share a set of frequencies. The Family Radio Service is unlicensed (you don't have to get a license to use it) and radios are plentiful and cheap. When you get a blister pack of little radios, for maybe $50 a pair or less, they're probably FRS. FRS was expanded by the FCC in 2017, and now most FRS channels can run up to 2 watts of power (with channels 8-14 still limited to 0.5W). FRS radios are pretty much always handheld.
GMRS runs on mostly the same frequencies as FRS. GMRS lets you run up to 5W on some channels, up to 50W on others, and operate repeaters. GMRS also permits limited occasional digital data bursts; three manufacturers currently use this to exchange GPS data or text messages. To use GMRS, you must purchase a GMRS license; it costs $35 for a person and their immediate family and is good for 10 years. No exam is required. GMRS radios can transmit on FRS frequencies using the GMRS authorization.
The extra power of GMRS gets you extra distance. While only the best handheld GMRS radios can put out 5W of power, some mobile (car) or home radios can put out the full 50W, and use more capable exterior antennas too.
There is also the MURS band, which offers very few channels and also very few devices. It is not in wide use, probably for good reason.
Finally, some radios use some other unlicensed bands. The Motorola DTR and DLR series I will talk about operate in the 900MHz ISM band. Regulations there limit them to a maximum power of 1W, but as you will see, due to some other optimizations, their range is often quite similar to a 5W GMRS handheld.
All of these radios share something in common: your radio can either transmit, or receive, but not both simultaneously. They all have a PTT (push-to-talk) button that you push and hold while you are transmitting, and at all other times, they act as receivers.
You'll learn that "doubling" is a thing, where two or more people attempt to transmit at the same time. To listeners, the result is often garbled. The transmitters may not even be aware they did it since, after all, they were transmitting. Usually it will become clear pretty quickly, as people don't get responses or the responses say it was garbled. Only the digital Motorola DLR/DTR series detects and prevents this situation.
FRS and GMRS radios
As mentioned, the FRS/GMRS radios are generally the most popular, and quite inexpensive. Those that can emit 2W will have pretty decent range; 5W even better (assuming a decent antenna), though the 5W ones will require a GMRS license. For the most part, there isn't much that differentiates one FRS radio from another, or (with a few more exceptions) one GMRS handheld from another. Do not believe the manufacturers' claims of "50 mile range" or whatever; more on range below.
FRS and GMRS radios use FM. GMRS radios are permitted to use a wider bandwidth than FRS radios, but in general, FRS and GMRS users can communicate with each other from any brand of radio to any other brand of radio, assuming they are using basic voice services.
Some FRS and GMRS radios can receive the NOAA weather radio. That's nice for wilderness use. Nicer ones can monitor it for alert tones, even when you're tuned to a different channel. The very nicest (as far as I know, only the Garmin Rino series) will receive and process SAME codes to trigger alerts only for your specific location.
GMRS (but not FRS) also permits 1-second digital data bursts at periodic intervals. There are now three radio series that take advantage of this: the Garmin Rino, the Motorola T800, and the BTech GMRS-PRO. Garmin's radios are among the priciest GMRS handhelds out there; the top-of-the-line Rino will set you back $650. The cheapest is $350, but does not contain a replaceable battery, which should be an instant rejection of a device like this. So, for $550, you can get the middle-of-the-road Rino. It features a sophisticated GPS system with Garmin trail maps and such, plus a 5W GMRS radio with GPS data sharing and a very limited (13-character) text messaging system. It does have a Bluetooth link to a cell phone, which can provide a link to trail maps and the like, and limited functionality for the radio. The Rino is also large and heavy (due to its large map-capable screen). Many consider it to be somewhat dated technology; for instance, other ways to have offline maps now exist (such as my Garmin Fenix 6 Pro, which has those maps on a watch!). It is bulky enough to likely be left at home in many situations.
The Motorola T800 doesn't have much to talk about compared to the other two.
Both of those platforms are a number of years old. The newest entrant in this space, from budget radio maker Baofeng, is the BTech GMRS-PRO, which came out just a couple of weeks ago. Its screen, though lacking built-in maps, does still have a GPS digital link similar to Garmin's, and it can show you a heading and distance to other GMRS-PRO users. It too is a 5W unit, and has a ton of advanced features that are rare in GMRS: the ability to pair a Bluetooth headset to it directly (though the Garmin Rino supports Bluetooth, it doesn't support this), the ability to use the phone app as a speaker/mic for the radio, longer text messages than the Garmin Rino, etc. The GMRS-PRO sold out within a few days of its announcement, and I am presently waiting for mine to arrive to review. At $140 and with a more modern radio implementation, for people that don't need the trail maps and the like, it makes a compelling alternative to Garmin for outdoor use.
Garmin documents when GPS beacons are sent out: generally, when you begin a transmission, or when another radio asks for your position. I couldn't find similar documentation from Motorola or BTech, but I believe FCC regulations mean that the picture would be similar with them. In other words, none of these devices is continuously, automatically transmitting position updates. However, you can request a position update from another radio.
It should be noted that, while voice communication is compatible across FRS/GMRS, data communication is not. Garmin, Motorola, and BTech all have different data protocols that are incompatible with radios from other manufacturers.
FRS/GMRS radios often advertise "privacy codes". These do nothing to protect your privacy; see more under the privacy section below.
Motorola DLR and DTR series
Although they can be used for similar purposes, and I do, these radios are unique from the others in this article in several ways:
Their sales and marketing is targeted at businesses rather than consumers
They use digital encoding of audio, rather than analog FM or AM
They use FHSS (Frequency-Hopping Spread Spectrum) rather than a set frequency
They operate on the 900MHz ISM band, rather than a 460MHz UHF band (or a lower band yet for MURS and CB)
The DLR series is quite small, smaller than many GMRS radios.
I don't have space to go into a lot of radio theory in this article, but I'll briefly expand on some of this.
First, FHSS. A FHSS radio hops from frequency to frequency many times per second, following some preset hopping algorithm that is part of the radio. Although it complicates the radio design, it has some advantages; it tends to allow more users to share a band, and if one particular frequency has a conflict with something else, it will be for a brief fraction of a second and may not even be noticeable.
Digital encoding generally increases the quality of the audio, and keeps the quality high even in degraded signal conditions where analog radios would experience static or a quieter voice. However, you also lose that sort of audible feedback that your signal is getting weak. When you get too far away, the digital signal "drops off a cliff". Often, either you have a crystal-clear signal or you have no signal at all.
Motorola's radios leverage these features to build a unique radio. Not only can you talk to a group, but you can select a particular person to talk to with a private conversation, and so forth. DTR radios can send text messages to each other (but only preset canned ones, not arbitrary ones). Channels are more like configurations; they can include various arbitrary groupings of radios. Deconfliction with other users is established via hopsets rather than frequencies; that is, the algorithm that it uses to hop from frequency to frequency. There is a 4-digit PIN in the DLR radios, and newer DTR radios, that makes privacy very easy to set up and maintain.
As far as I am aware, no scanner can monitor DLR/DTR signals. Though they technically aren't encrypted, cracking a DLR/DTR conversation would require cracking Motorola's firmware, and the chances of this happening in your geographical proximity seem vanishingly small.
I will write more below on comparing the range of these to GMRS radios, but in a nutshell, it compares well, despite the fact that the 900MHz band restrictions allow Motorola only 1W of power output with these radios.
There are three current lines of Motorola DLR/DTR radios:
The Motorola DLR1020 and DLR1060 radios. These have no screen; the 1020 has two channels (configurations) while the 1060 supports 6. They are small and compact, and great pocketable "just work" radios.
The Motorola DTR600 and DTR700 radios. These are larger, with a larger antenna (that should theoretically provide greater range) and a small color screen. They support more channels and more features (e.g., short messages).
The Motorola Curve (aka DLR110). Compared to the DLR1060, it adds limited WiFi capabilities that are primarily useful in certain business environments. See this thread for more. These features are unlikely to be useful in the environments we're talking about here.
These radios are fairly expensive new, but DLRs can be readily found at around $60 on eBay. (DTRs for about $250) They are quite rugged. Be aware when purchasing that some radios sold on eBay may not include a correct battery and charger. (Not necessarily a problem; Motorola batteries are easy to find online, and as with any used battery, the life of a used one may not be great.) For more advanced configuration, the Motorola CPS cable works with both radios (plugs into the charging cradle) and is used with the programming software to configure them in more detail.
The older Motorola DTR650, DTR550, and older radios are compatible with the newer DLR and DTR series, if you program the newer ones carefully. The older ones don't support PINs and have a less friendly way of providing privacy, but they do work also. However, for most, I think the newer ones will be friendlier; but if you find a deal on the older ones, hey, why not?
This thread on the MyGMRS forums has tons of useful information on the DLR/DTR radios. Check it out for a lot more detail.
One interesting feature of these radios is that they are aware if there are conflicting users on the channel, and even if anybody is hearing your transmission. If your transmission is not being heard by at least one radio, you will get an audible (and visual, on the DTR) indication that your transmission failed.
One thing that pleasantly surprised me is just how tiny the Motorola DLR is. The whole thing with antenna is like a small candy bar, and thinner. My phone is slightly taller, much wider, and only a little thinner than the Motorola DLR. Seriously, it's more pocketable than most smartphones. The DTR is of a size more commonly associated with radios, though still on the smaller side. Some of the most low-power FRS radios might get down to that size, but to get equivalent range, you need a 5W GMRS unit, which will be much bulkier.
Being targeted at business users, the DLR/DTR don't include NOAA weather radio or GPS.
Power
These radios tend to be powered by:
NiMH rechargeable battery packs
AA/AAA batteries
Lithium Ion batteries
Most of the cheap FRS/GMRS radios have a NiMH rechargeable battery pack and a terrible charge controller that will tend to overcharge, and thus prematurely destroy, the NiMH packs. This long ago happened in my GMRS radios, and now I use Eneloop NiMH AAs in them (charged separately by a proper charger).
The BTech, Garmin, and Motorola DLR/DTR radios all use Li-Ion batteries. These have the advantage of being more efficient batteries, though you can't necessarily just swap in AAs in a pinch. Pay attention to your charging options; if you are backpacking, for instance, you may want something that can charge from solar-powered USB or battery banks. The Motorola DLR/DTR radios need to sit in a charging cradle, but the cradle is powered by a Micro USB cable. The BTech GMRS-PRO is charged via USB-C. I don't know about the Garmin Rino or others.
Garmin offers an optional AA battery pack for the Rino. BTech doesn't (yet) for the GMRS-PRO, but they do for some other models, and have stated that accessories for the GMRS-PRO are coming. I don't have information about the T800. This is not an option for the DLR/DTR.
Meshtastic
I'll briefly mention Meshtastic. It uses a low-power LoRa system. It can't handle voice transmissions, only data. On its own, it can transmit and receive automatic GPS updates from other Meshtastic devices, which you can view on its small screen. It forms a mesh, so each node can relay messages for others. It is also the only unit in this roundup that uses true encryption, and its battery lasts about a week, far longer than the solid day you can expect out of the best of the others here.
When paired with a cell phone, Meshtastic can also send and receive short text messages.
Meshtastic uses much less power than even the cheapest of the FRS radios discussed here. It can still achieve respectable range because it uses LoRa, which can trade bandwidth for power or range. It can take a second or two to transmit a 50-character text message. Still, the GMRS or Motorola radios discussed here will have more than double the point-to-point range of a Meshtastic device. And, if you intend to take advantage of the text messaging features, keep in mind that you must now take two electronic devices with you and maintain a charge for them both.
Privacy
The privacy picture on these is interesting.
Cell phone privacy
Cell phones are difficult for individuals to eavesdrop, but a sophisticated adversary probably could, as could an unsophisticated adversary with any manner of malware. Privacy on modern smartphones is a huge area of trouble, and it is safe to say that data brokers and many apps probably know at least your location and contact list, if not also the content of your messages, though end-to-end encrypted apps such as Signal can certainly help. See Tools for Communicating Offline and in Difficult Circumstances for more details.
GMRS privacy
GMRS radios are unencrypted and public. Anyone in range with another GMRS radio, or a scanner, can listen to your conversations even if you have a privacy code set. The privacy code does not actually protect your privacy; rather, it keeps your radio from playing conversations from others using the same channel, for your convenience.
However, note the "in range" limitation. An eavesdropper would generally need to be within a few miles of you.
Motorola DLR/DTR privacy
As touched on above, while these also aren't encrypted, as far as I am aware, no tools exist to eavesdrop on DLR/DTR conversations. Change the PIN away from the default 0000, ideally to something that doesn't end in 0 (to pick a different hopset), and you have pretty decent privacy right there.
"Decent" doesn't mean "perfect"; it is certainly possible that sophisticated adversaries or state agencies could decode DLR/DTR traffic, since it is unencrypted. As a practical matter, though, the lack of consumer equipment that can decode this makes it, as I say, "pretty decent".
Meshtastic
Meshtastic uses strong AES encryption. But as messaging features require a paired phone, the privacy implications of a phone also apply here.
Range
I tested my best 5W GMRS radios, as well as a Motorola DTR600 talking to a DLR1060. (I also tried two DLR1060s talking to each other; there was no change in range.) I took a radio with me in the car, and had another sitting on my table indoors. Those of you familiar with radios will probably recognize that being in a car and being indoors both attenuate (reduce the strength of) the signal significantly. I drove around in a part of Kansas with gentle rolling hills.
Both the GMRS and the DLR/DTR had a range of about 2-3 miles. There were times when each was able to pull out a signal when the other was not. The DLR/DTR series was significantly better while the vehicle was in motion. In weaker signal conditions, the GMRS radios were susceptible to significant picket fencing (static caused by variation in the signal strength when passing things like trees), to the point of being inaudible or losing the signal entirely. The DLR/DTR remained perfectly clear there. I was able to find some spots where, while parked, the GMRS radios had a weak but audible signal but the DLR/DTR had none. However, in all those cases, the distance until GMRS dropped out as well was small. Basically, no radios penetrate the ground, and the valleys were a problem for them all.
Differences may play out in other ways in other environments as well: for instance, dense urban environments, heavy woods, indoor buildings, etc.
GMRS radios can be used with repeaters, or have a rooftop antenna mounted on a car, both of which could significantly extend range and both of which are rare.
The DLR/DTR series are said to be exceptionally good at indoor environments; Motorola rates them for penetrating 20 floors, for instance. Reports on MyGMRS forums state that they are able to cover an entire cruise ship, while the metal and concrete in them poses a big problem for GMRS radios. Different outdoor landscapes may favor one or the other also.
Some of the cheapest FRS radios max out at about 0.5W or even less. This is probably only a little better than yelling distance in many cases. A lot of manufacturers obscure transmit power and use outlandish claims of range instead; don't believe those. Find the power output. A 2W FRS transmitter will be more credible range-wise, and the 5W GMRS transmitters I tested better yet. Note that even GMRS radios are restricted to 0.5W on channels 8-14.
The Motorola DLR/DTR radio gets about the same range with 1W as a GMRS radio does with 5W. The lower power output allows the DLR to be much smaller and lighter than a 5W GMRS radio for similar performance.
Overall conclusions
Of course, what you use may depend on your needs. I'd generally say:
For basic use, the high quality, good range, reasonable used price, and very small size of the Motorola DLR would make it a good all-arounder. Give one to each person (or kid) for use at the mall or amusement park, take them with you to concerts and festivals, etc.
Between vehicles, the Motorola DLR/DTR have a clear range advantage over the GMRS radios for vehicles in motion, though the GPS features of the more advanced GMRS radios may be more useful here.
For wilderness hiking and the like, GMRS radios that have GPS, maps, and NOAA weather radio reception may prove compelling and worth the extra bulk. More flexible power options may also be useful.
Low-end FRS radios can be found very cheap; around $20-$30 new for the lowest end, though their low power output and questionable charging circuits may limit their utility where it really counts.
If you just can t move away from cell phones, try the Zoleo app, which can provide some radio-like features.
A satellite communicator is still good backup safety gear for the wilderness.
Postscript: A final plug for amateur radio
My 10-year-old Kenwood TH-D71A already had features none of these others have. For instance, its support for APRS and ability to act as a digipeater for APRS means that TH-D71As can form an automatic mesh between them, each one repeating new GPS positions or text messages to the others. Traditional APRS doesn't perform well in weak signal situations; however, more modern digital systems like D-Star and DMR also support APRS over more modern codecs and provide all sorts of other advantages as well (though not FHSS).
My conclusions above assume a person is not going to go the amateur radio route for whatever reason. If you can get those in your group to get their license (the Technician license is all you need), a whole world of excellent options opens up to you.
Appendix: The Trisquare eXRS
Prior to 2012, a small company named Trisquare made an FHSS radio they called the eXRS that operated on the 900MHz band like Motorola's DLR/DTR does. Trisquare aimed at consumers and their radios were cheaper than the Motorola DLR/DTR. However, that is where the similarities end.
Trisquare used analog voice transmission, even though it used FHSS. Also, there is a problem that can arise with FHSS systems: synchronization. The receiver must hop frequencies in exactly the same order at exactly the same time as the sender. Motorola has clearly done a lot of engineering around this, and I have never encountered a synchronization problem in my DLR/DTR testing, not even once. eXRS, on the other hand, had frequent synchronization problems, which manifested themselves in weak signal conditions and sometimes with doubling. When it happened, everyone would have to be quiet for a minute or two to give all the radios a chance to time out and reset to the start of the hop sequence. In addition, the eXRS hardware wasn't great, and was susceptible to hardware failure.
There are some that still view eXRS as a legendary device and hoard them. You can still find them used on eBay. When eXRS came out in 2007, it was indeed nice technology for the day, ahead of its time in some ways. I used and loved the eXRS radios back then; powerful GMRS wasn't all that common. But compared to today's technology, eXRS has inferior range to both GMRS and Motorola DLR/DTR (from my recollection, about a third to half of what I get with today's GMRS and DLR/DTR), is prone to finicky synchronization issues when signals are weak, and isn't made very robustly. I therefore don't recommend the eBay eXRS units.
Don't assume that the eXRS weaknesses extend to Motorola DLR/DTR. The DLR/DTR radios are done well and don't suffer from the same problems.
Note: This article has a long-term home on my website, where it may be updated from time to time.
Welcome to the March 2022 report from the Reproducible Builds project! In our monthly reports we outline the most important things that we have been up to over the past month.
The in-toto project was accepted as an incubating project within the Cloud Native Computing Foundation (CNCF). in-toto is a framework that protects the software supply chain by collecting and verifying relevant data. It does so by enabling libraries to collect information about software supply chain actions and then allowing software users and/or project managers to publish policies about software supply chain practices that can be verified before deploying or installing software. The CNCF hosts a number of critical components of the global technology infrastructure under the auspices of the Linux Foundation. (View full announcement.)
Hervé Boutemy posted to our mailing list with an announcement that the Java Reproducible Central has hit the milestone of "500 fully reproduced builds of upstream projects". Indeed, at the time of writing, according to the nightly rebuild results, 530 releases were found to be fully reproducible, with 100% reproducible artifacts.
GitBOM is a relatively new project to enable build tools to trace every source file that is incorporated into build artifacts. As an experiment and/or proof-of-concept, the GitBOM developers are rebuilding Debian to generate side-channel build metadata for versions of Debian that have already been released. This only works because Debian is (partially) reproducible, so one can be sure that, in the case where build artifacts are identical, any metadata generated during these instrumented builds applies to the binaries that were built and released in the past. More information on their approach is available in the README file in the bomsh repository.
Ludovic Courtès has published an academic paper discussing how the performance requirements of high-performance computing are not (as usually assumed) at odds with reproducible builds. The received wisdom is that vendor-specific libraries and platform-specific CPU extensions have resulted in a culture of local recompilation to ensure the best performance, rendering the property of reproducibility unobtainable or even meaningless. In his paper, Ludovic explains how Guix has:
[…] implemented what we call "package multi-versioning" for C/C++ software that lacks function multi-versioning and run-time dispatch […]. It is another way to ensure that users do not have to trade reproducibility for performance. (full PDF)
Kit Martin posted to the FOSSA blog a post titled The Three Pillars of Reproducible Builds. Inspired by "the shock of infiltrated or intentionally broken NPM packages, supply chain attacks, long-unnoticed backdoors", the post goes on to outline the high-level steps that lead to a reproducible build:
It is one thing to talk about reproducible builds and how they strengthen software supply chain security, but it's quite another to effectively configure a reproducible build. Concrete steps for specific languages are a far larger topic than can be covered in a single blog post, but today we'll be talking about some guiding principles when designing reproducible builds. […]
Events
There will be an in-person Debian Reunion in Hamburg, Germany later this year, taking place from 23–30 May. Although this is a Debian event, there will be some folks from the broader Reproducible Builds community and, of course, everyone is welcome. Please see the event page on the Debian wiki for more information.
Bernhard M. Wiedemann posted to our mailing list about a meetup for Reproducible Builds folks at the openSUSE conference in Nuremberg, Germany.
It was also recently announced that DebConf22 will take place this year as an in-person conference in Prizren, Kosovo. The pre-conference meeting (or "DebCamp") will take place from 10–16 July, and the main talks, workshops, etc. will take place from 17–24 July.
Johannes Schauer Marin Rodrigues posted to the debian-devel list mentioning that he exploited the property of reproducibility within Debian to demonstrate that automatically converting a large number of packages to a new internal source version did not change the resulting packages. The proposed change could therefore be applied without causing breakage:
So now we have 364 source packages for which we have a patch and for which we can show that this patch does not change the build output. Do you agree that with those two properties, the advantages of the 3.0 (quilt) format are sufficient such that the change shall be implemented at least for those 364? []
Tooling
diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 207, 208 and 209 to Debian unstable, as well as made the following changes to the code itself:
Update minimum version of Black to prevent test failure on Ubuntu jammy. []
Brent Spillner also worked on adding graceful handling for UNIX sockets and named pipes to diffoscope. [][][]. Vagrant Cascadian also updated the diffoscope package in GNU Guix. [][]
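(For anyone who hasn't used it, a typical diffoscope invocation looks something like the following; the package filenames are hypothetical, and --html is one of several output options.)
# Compare two builds of the same package and write an HTML report of the differences
diffoscope --html report.html foo_1.0-1_amd64.deb foo_1.0-1_amd64.rebuild.deb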
reprotest is the Reproducible Builds project's end-user tool to build the same source code twice in widely differing environments and check whether the binaries produced by the builds have any differences. This month, Santiago Ruano Rincón added a new --append-build-command option [], which was subsequently uploaded to Debian unstable by Holger Levsen.
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Testing framework
The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
Holger Levsen:
Replace a local copy of the dsa-check-running-kernel script with a packaged version. []
Don t hide the status of offline hosts in the Jenkins shell monitor. []
Detect undefined service problems in the node health check. []
Update the sources.lst file for our mail server as it's still running Debian buster. []
Add our mail server to our node inventory so it is included in the Jenkins maintenance processes. []
Remove the debsecan package everywhere; it got installed accidentally via the Recommends relation. []
Document the usage of the osuosl174 host. []
Regular node maintenance was also performed by Holger Levsen [], Vagrant Cascadian [][][] and Mattia Rizzolo.
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
Because posting private keys on the Internet is a bad idea, some
people like to redact their private keys, so that it looks kinda-sorta like a private key,
but it isn't actually giving away anything secret. Unfortunately, due to the way that
private keys are represented, it is easy to redact a key in such a way that it
doesn't actually redact anything at all. RSA private keys are particularly bad at this,
but the problem can (potentially) apply to other keys as well.
I'll show you a bit of "inside baseball" with key formats, and then demonstrate the practical
implications. Finally, we'll go through a practical worked example from an actual not-really-redacted
key I recently stumbled across in my travels.
The Private Lives of Private Keys
Here is what a typical private key looks like, when you come across it:
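(The toy key itself isn't reproduced in this excerpt, but the general shape of a PEM-encoded RSA private key is:)
-----BEGIN RSA PRIVATE KEY-----
(a handful of lines of base64-encoded data)
-----END RSA PRIVATE KEY-----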
Obviously, there's some hidden meaning in there; computers don't encrypt
things by shouting "BEGIN RSA PRIVATE KEY!", after all. What is between the
BEGIN/END lines above is, in fact, a
base64-encoded
DER format
ASN.1 structure representing a PKCS#1 private
key.
In simple terms, it's a list of numbers; very important numbers. The list
of numbers is, in order:
A version number (0);
The "public modulus", commonly referred to as "n";
The "public exponent", or "e" (which is almost always 65,537, for various unimportant reasons);
The "private exponent", or "d";
The two "private primes", or "p" and "q";
Two exponents, which are known as "dmp1" and "dmq1"; and
A coefficient, known as "iqmp".
Why Is This a Problem?
The thing is, only three of those numbers are actually required in a private
key. The rest, whilst useful to allow the RSA encryption and decryption to be
more efficient, aren't necessary. The three absolutely required values are
e, p, and q.
Of the other numbers, most of them are at least about the same size as each
of p and q. So of the total data in an RSA key, less than a quarter of the
data is required. Let me show you with the above toy key, by breaking it
down piece by piece [1]:
MGI                        -- DER for "this is a sequence"
CAQ                        -- version (0)
CxjdTmecltJEz2PLMpS4BX     -- n
AgMBAA                     -- e
ECEDKtuwD17gpagnASq1zQTY   -- d
ECCQDVTYVsjjF7IQ           -- p
IJANUYZsIjRsR3             -- q
AgkAkahDUXL0RS             -- dmp1
ECCB78r2SnsJC9             -- dmq1
AghaOK3FsKoELg==           -- iqmp
Remember that in order to reconstruct all of these values, all I need are
e, p, and q, and e is pretty much always 65,537. So I could redact
almost all of this key, and still give all the important, private bits of this
key. Let me show you:
Now, I doubt that anyone is going to redact a key precisely like
this, but then again, this isn't a typical RSA key. They usually
look a lot more like this:
People typically redact keys by deleting whole lines, and usually replacing them
with [...] and the like. But only about 345 of those 1588 characters
(excluding the header and footer) are required to construct the entire key.
You can redact about 4/5ths of that giant blob of stuff, and your private parts
(or at least, those of your key) are still left uncomfortably exposed.
But Wait! There's More!
Remember how I said that everything in the key other than e, p,
and q could be derived from those three numbers? Let's talk about one
of those numbers: n.
This is known as the public modulus (because, along with e, it is also
present in the public key). It is very easy to calculate: n = p * q. It
is also very early in the key (the second number, in fact).
Since n = p * q, it follows that q = n / p. Thus, as long
as the key is intact up to p, you can derive q by simple division.
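As a toy illustration with textbook-sized numbers (real keys use primes of 1024+ bits, but the arithmetic is identical), any arbitrary-precision tool will do the division; bc from the shell, for example:
# p = 61 (0x3D) and q = 53 (0x35) give n = 3233 (0xCA1); recover q as n / p
echo 'obase=16; ibase=16; CA1 / 3D' | bc    # prints 35, i.e. q = 53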
Real World Redaction
At this point, I'd like to introduce an acquaintance of mine: Mr. Johan Finn.
He is the proud owner of the GitHub repo johanfinn/scripts.
For a while, his repo contained a script that contained a poorly-redacted private
key. He has since deleted it, by making a new commit, but of course, because
git never really deletes anything, it's
still available.
Of course, Mr. Finn may delete the repo, or force-push a new history without
that commit, so here is the redacted private key, with a bit of the surrounding
shell script, for our illustrative pleasure:
Now, if you try to reconstruct this key by removing the obvious garbage
lines (the ones that are all repeated characters, some of which aren t even valid
base64 characters), it still isn't a key; at least, openssl pkey
doesn't want anything to do with it. The key is very much still in there,
though, as we shall soon see.
Using a gem I wrote and a quick bit of
Ruby, we can extract a complete private key. The irb session looks something
like this:
What I've done, in case you don't speak Ruby, is take the two chunks of plausible-looking base64 data, chuck them together into a variable named b64, unbase64 it into a variable named der, pass that into a new DerParse instance, and then walk the DER value tree until I got all the values I need.
Interestingly, the q value actually traverses the split in the two chunks, which means that there's always the possibility that there are lines missing from the key. However, since p and q are supposed to be prime, we can sanity-check them to see if corruption is likely to have occurred:
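The original session is not reproduced here, but the primality sanity check itself is only a couple of lines with Ruby's standard OpenSSL bindings; something along these lines (the values below are stand-ins, not the recovered ones):

require "openssl"

# Stand-ins for the recovered p and q; a genuine recovery should report
# true for both, while a corrupted extraction almost certainly won't.
p_candidate = ((1 << 61) - 1).to_bn   # prime
q_candidate = ((1 << 61) + 1).to_bn   # not prime (divisible by 3)

puts p_candidate.prime?   # => true
puts q_candidate.prime?   # => false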
Excellent! The chances of a corrupted file producing valid-but-incorrect prime numbers aren't huge, so we can be fairly confident that we've got the real p and q. Now, with the help of another one of my creations, we can use e, p, and q to create a fully-operational battle key:
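That helper is not shown here, so as a stand-in here is a hedged sketch using only Ruby's standard OpenSSL module: given e, p and q, it recomputes the remaining values and assembles a working PKCS#1 key. A freshly generated key supplies the starting values purely so the example is self-contained:

require "openssl"

# Throwaway key, used only as a source of known e, p and q.
original = OpenSSL::PKey::RSA.new(2048)
e, p, q = original.e.to_i, original.p.to_i, original.q.to_i

# Recompute everything else from e, p and q.
n    = p * q
d    = e.to_bn.mod_inverse(((p - 1) * (q - 1)).to_bn).to_i
dmp1 = d % (p - 1)
dmq1 = d % (q - 1)
iqmp = q.to_bn.mod_inverse(p.to_bn).to_i

# Assemble the PKCS#1 RSAPrivateKey sequence and parse it back into a key.
der = OpenSSL::ASN1::Sequence.new(
  [0, n, e, d, p, q, dmp1, dmq1, iqmp].map { |v| OpenSSL::ASN1::Integer.new(v) }
).to_der
rebuilt = OpenSSL::PKey::RSA.new(der)

# The rebuilt key signs; the original key's public half verifies.
signature = rebuilt.sign(OpenSSL::Digest.new("SHA256"), "hello")
puts original.public_key.verify(OpenSSL::Digest.new("SHA256"), signature, "hello")   # => true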
and there you have it. One fairly redacted-looking private key brought back
to life by maths and far too much free time.
Sorry, Mr. Finn, I hope you're not still using that key on anything Internet-facing.
What About Other Key Types?
EC keys are very different beasts, but they have much the same problems as RSA
keys. A typical EC key contains both private and public data, and the public
portion is twice the size, so only about 1/3 of the data in the key is
private material. It is quite plausible that you can redact an EC key and
leave all the actually private bits exposed.
What Do We Do About It?
In short: don't ever try to redact real private keys. For documentation purposes, just put KEY GOES HERE in the appropriate spot, or something like that. Store your secrets somewhere that isn't a public (or even private!) git repo.
Generating a dummy private key and sticking it in there isn't a great idea either, for different reasons: people have this odd habit of reusing demo keys in real life. There's no need to encourage that sort of thing.
[1] Technically the pieces aren't 100% aligned with the underlying DER, because of how base64 works. I felt it was easier to understand if I stuck to chopping up the base64, rather than decoding into DER and then chopping up the DER.
Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you.
DebConf 17 in Montreal
I traveled to DebConf 17 in Montreal/Canada. I arrived on 04. August and met a lot of different people whom I had only known by name so far. I think this is definitely one of the best aspects of real-life meetings: putting names to faces and getting to know someone better. I totally enjoyed my stay and I would like to thank all the people who were involved in organizing this event. You rock! I also gave a talk about "The past, present and future of Debian Games", listened to numerous other talks and got a nice sunburn which luckily turned into a more brownish color when I returned home on 12. August. The only negative experience was with my airline, which was supposed to fly me home to Frankfurt again. They decided to cancel the flight one hour before check-in for unknown reasons and just gave me a telephone number to sort things out. No support whatsoever. Fortunately (probably not for him) another DebConf attendee suffered the same fate, and together we could find another flight with Royal Air Maroc the same day. And so we made a short trip to Casablanca/Morocco and eventually arrived at our final destination in Frankfurt a few hours later. So which airline should you avoid at all costs (they still haven't responded to my refund claims)? It's WoW-Air from Iceland. (just wow)
Debian Games
There were a lot of GCC-7 bugs to fix this month, which claimed most of my games-related time.
I backported the memory leak fix for unknown-horizons and fife to Stretch (#871037).
I investigated some graphical glitches in Neverball which appear to be related to OpenGL and the graphics stack in general, but I couldn't find an immediate solution (#871223).
For jboss-xnio I packaged two new build-dependencies which are wildfly-common and wildfly-client-config and they are currently waiting in the NEW queue.
The last build-dependency for PDFsam was accepted this month and I was able to upload the new version to experimental. Unfortunately, the program is currently not really usable due to a bug in libhibernate-validator-java (#874579).
Debian LTS
This was my eighteenth month as a paid contributor and I have been paid to work 20.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:
From 31. July until 06. August I was in charge of our LTS frontdesk. I triaged bugs in tinyproxy, mantis, sox, timidity, ioquake3, varnish, libao, clamav, binutils, smplayer, libid3tag, mpg123 and shadow.
DLA-1064-1. Issued a security update for freeradius fixing 6 CVE.
DLA-1068-1. Issued a security update for git fixing 1 CVE.
DLA-1077-1. Issued a security update for faad2 fixing 11 CVE.
DLA-1083-1. Issued a security update for openexr fixing 3 CVE.
DLA-1095-1. Issued a security update for freerdp fixing 5 CVE.
Non-maintainer upload
I uploaded a security fix for openexr (#864078) to fix CVE-2017-9110, CVE-2017-9112 and CVE-2017-9116.
Previously: v4.7. Here are a bunch of security things I'm excited about in Linux v4.8:
SLUB freelist ASLR
Thomas Garnier continued his freelist randomization work by adding SLUB support.
x86_64 KASLR text base offset physical/virtual decoupling
On x86_64, to implement the KASLR text base offset, the physical memory location of the kernel was randomized, which resulted in the virtual address being offset as well. Due to how the kernel's -2GB addressing works (gcc's -mcmodel=kernel), it wasn't possible to randomize the physical location beyond the 2GB limit, leaving any additional physical memory unused as a randomization target. In order to decouple the physical and virtual location of the kernel (to make physical address exposures less valuable to attackers), the physical location of the kernel needed to be randomized separately from the virtual location. This required a lot of work for handling very large addresses spanning terabytes of address space. Yinghai Lu, Baoquan He, and I landed a series of patches that ultimately did this (and in the process fixed some other bugs too). This expands the physical offset entropy to roughly $physical_memory_size_of_system / 2MB bits.
x86_64 KASLR memory base offset
Thomas Garnier rolled out KASLR to the kernel's various statically located memory ranges, randomizing their locations with CONFIG_RANDOMIZE_MEMORY. One of the more notable things randomized is the physical memory mapping, which is a known target for attacks. Also randomized is the vmalloc area, which makes attacks against targets vmalloced during boot (which tend to always end up in the same location on a given system) harder to locate. (The vmemmap region randomization accidentally missed the v4.8 window and will appear in v4.9.)
x86_64 KASLR with hibernation
Rafael Wysocki (with Thomas Garnier, Borislav Petkov, Yinghai Lu, Logan Gunthorpe, and myself) worked on a number of fixes to hibernation code that, even without KASLR, were coincidentally exposed by the earlier W^X fix. Once that original problem was fixed, memory KASLR exposed more problems. I'm very grateful everyone was able to help out fixing these, especially Rafael and Thomas. It's a hard place to debug. The bottom line, now, is that hibernation and KASLR are no longer mutually exclusive.
gcc plugin infrastructure
Emese Revfy ported the PaX/Grsecurity gcc plugin infrastructure to upstream. If you want to perform compiler-based magic on kernel builds, now it's much easier with CONFIG_GCC_PLUGINS! The plugins live in scripts/gcc-plugins/. Current plugins are a short example called "Cyclic Complexity" which just emits the complexity of functions as they're compiled, and "Sanitizer Coverage" which provides the same functionality as gcc's recent -fsanitize-coverage=trace-pc but back through gcc 4.5. Another notable detail about this work is that it was the first Linux kernel security work funded by the Linux Foundation's Core Infrastructure Initiative. I'm looking forward to more plugins!
If you're on Debian or Ubuntu, the required gcc plugin headers are available via the gcc-$N-plugin-dev package (and similarly for all cross-compiler packages).
hardened usercopy
Along with work from Rik van Riel, Laura Abbott, Casey Schaufler, and many other folks doing testing on the KSPP mailing list, I ported part of PAX_USERCOPY (the basic runtime bounds checking) to upstream as CONFIG_HARDENED_USERCOPY. One of the interface boundaries between the kernel and user-space are the copy_to_user()/copy_from_user() family of functions. Frequently, the size of a copy is known at compile-time ("built-in constant"), so there's not much benefit in checking those sizes (hardened usercopy avoids these cases). In the case of dynamic sizes, hardened usercopy checks for 3 areas of memory: slab allocations, stack allocations, and kernel text. Direct kernel text copying is simply disallowed. Stack copying is allowed as long as it is entirely contained by the current stack memory range (and on x86, only if it does not include the saved stack frame and instruction pointers). For slab allocations (e.g. those allocated through kmem_cache_alloc() and the kmalloc()-family of functions), the copy size is compared against the size of the object being copied. For example, if copy_from_user() is writing to a structure that was allocated as size 64, but the copy gets tricked into trying to write 65 bytes, hardened usercopy will catch it and kill the process.
For testing hardened usercopy, lkdtm gained several new tests: USERCOPY_HEAP_SIZE_TO, USERCOPY_HEAP_SIZE_FROM, USERCOPY_STACK_FRAME_TO,
USERCOPY_STACK_FRAME_FROM, USERCOPY_STACK_BEYOND, and USERCOPY_KERNEL. Additionally, USERCOPY_HEAP_FLAG_TO and USERCOPY_HEAP_FLAG_FROM were added to test what will be coming next for hardened usercopy: flagging slab memory as "safe for copy to/from user-space", effectively whitelisting certain slab caches, as done by PAX_USERCOPY. This further reduces the scope of what's allowed to be copied to/from, since most kernel memory is not intended to ever be exposed to user-space. Adding this logic will require some reorganization of usercopy code to add some new APIs, as PAX_USERCOPY's approach to handling special-cases is to add bounce-copies (copy from slab to stack, then copy to userspace) as needed, which is unlikely to be acceptable upstream.
seccomp reordered after ptrace
By its original design, seccomp filtering happened before ptrace so that seccomp-based ptracers (i.e. SECCOMP_RET_TRACE) could explicitly bypass seccomp filtering and force a desired syscall. Nothing actually used this feature, and as it turns out, it's not compatible with process launchers that install seccomp filters (e.g. systemd, lxc) since as long as the ptrace and fork syscalls are allowed (and fork is needed for any sensible container environment), a process could spawn a tracer to help bypass a filter by injecting syscalls. After Andy Lutomirski convinced me that ordering ptrace first does not change the attack surface of a running process (unless all syscalls are blacklisted, the entire ptrace attack surface will always be exposed), I rearranged things. Now there is no (expected) way to bypass seccomp filters, and containers with seccomp filters can allow ptrace again.
That's it for v4.8! The merge window is open for v4.9.
I've been thinking about home automation (automating lights, switches, thermostats, etc.) for years. Literally decades, in fact. When I was a child, my parents had a RadioShack X10 control module and one or two target devices. I think I had fun giving people a light show, turning one switch and one outlet on or off remotely.
But I was stuck: there are a daunting number of standards for home automation these days. Zigbee, UPB, Z-Wave, Insteon, and all sorts of Wifi-enabled things that aren't really compatible with each other (hellooooo, Nest) or have their own ecosystem that isn't all that open (helloooo, Apple). Frankly I don't think that Wifi is a great home automation protocol; its power drain completely prohibits it being used in a lot of ways.
Earlier this month, my awesome employer had our annual meeting and as part of that our technical teams had some time for anyone to talk about anything geeky. I used my time to talk about flying quadcopters, but two of my colleagues talked about home automation. I had enough to have a place to start, and was hooked.
People use these systems to do all sorts of things: intelligently turn off lights when rooms aren't occupied, provide electronic door locks (unlockable via keypad, remote, or software), remote control lighting and heating/cooling via smartphone apps, detect water leakage, control switches with awkward wiring environments, buttons to instantly set multiple switches to certain levels for TV watching, turning off lights left on, etc. I even heard examples of monitoring a swamp cooler to make sure it is being used correctly. The possibilities are endless, and my geeky side was intrigued.
Insteon and Z-Wave
Based on what I heard from my colleagues, I decided to adopt a hybrid network consisting of Insteon and Z-Wave.
Both are reliable protocols (with ACKs and retransmit), so they work far better than X10 did. Both have all sorts of controls and sensors available (browse around on smarthome.com for some ideas).
Insteon is a particularly interesting system: an integrated dual-mesh network. It has both powerline and RF signaling, and most hardwired Insteon devices act as repeaters for both the wired and RF network simultaneously. Insteon packets contain a maximum hop count that is decremented after each relay, and the packets repeat in such a way that they collide and strengthen one another. There is no need to maintain routing tables or anything like that; it simply scales nicely.
This system addresses all sorts of potential complexities. It addresses the split-phase problem of powerline-only systems by using an RF bridge. It addresses long distances and outbuildings by using the powerline signaling. I found it to work quite well.
The downside to Insteon is that all the equipment comes from one vendor: Insteon. If you don t like their thermostat or motion sensor, you don t have any choice.
Insteon devices can be used entirely without a central controller. Light switches can talk to each other directly, and you can even set them up so that one switch controls dozens of others, if you have enough patience to go around your house pressing tiny set buttons.
Enter Z-Wave. Z-Wave is RF-only, and while it is also a mesh network, it is source-routed, meaning that if you move devices around, you have to heal your network as all your nodes have to re-learn the paths to each other. It also doesn't have the easy distance traversal of Insteon, of course. On the other hand, hundreds of vendors make Z-Wave products, and they mostly interoperate well. Z-Wave is said to scale practically to maybe two or three dozen devices, which would have been an issue for me, but with Insteon doing the heavy lifting and Z-Wave filling in the gaps, it's worked out well.
Controlling it all
While both Insteon (and, to a certain extent, Z-Wave) devices can control each other, to really spread your wings, you need more centralized control. This lets you have programs that do things like "if there's motion in the room on a weekday and it's dark outside, then turn on a light, and turn it back off 5 minutes later."
Insteon has several options. One, you can buy their power line modem (PLM). This can be hooked up to a PC to run either Insteon's proprietary software, or something open-source like MisterHouse, written in Perl. Or you can hook it up to a controller I'll mention in a minute. Those looking for a fairly simple controller might get the Insteon 2242-222 Hub, which has the obligatory smartphone app and basic schedules.
For more sophisticated control, my friend recommended the ISY-994i controller. Not only does it have a much more capable programming language (though still frustrating), it supports both Insteon and Z-Wave in an integrated box, and has a comprehensive REST API for integrating with other things. I went this route.
First step: LED lighting
I began my project by replacing my light bulbs with LEDs. I found that I could get Cree 4-Flow 60W equivs for $10 at Home Depot. They are dimmable, a key advantage over CFL, and also maintain their brightness throughout their life far better. As I wanted to install dimmer switches, I got a combination of Cree 60W bulbs, Cree TW bulbs (which have a better color spectrum coverage for more true colors), and Cree 100W equiv bulbs for places I needed more coverage. I even put in a few LED flood lights to replace my can lights.
Overall I was happy with the LEDs. They are a significant improvement over the CFLs I had been using, and use even less power to boot. I have had issues with three Cree bulbs, though: one arrived broken, and two others have had issues such as being quite dim. They have a good warranty, but it seems their QA could be better. Also, they can have a tendency to flicker when dimmed, though this plagues virtually all LED bulbs.
I had done quite a bit of research. CNET has some helpful brightness guides, and Insteon has a color temperature chart. CNET also had a nifty in-depth test of LED bulbs.
Second step: switches
Once the LED bulbs were in place, I was then able to start installing smart switches. I picked up Insteon's basic switch, the SwitchLinc 2477D, at Menard's. This is a dimmable switch and requires a neutral wire in the box, but acts as a dual-band repeater for the system as well.
The way Insteon switches work, they can be standalone, or controllers, responders, or both in a "scene". A scene is where multiple devices act together. You can create virtual 3-way switches in a scene, or more complicated things where different lights are turned on at different levels.
Anyhow, these switches worked out quite well. I have a few boxes where there is no neutral wire, so I had to use the Insteon SwitchLinc 2474D in them. That switch is RF-only and is supposed to have a minimum load of 20W, though they seemed to work OK albeit with limited range and the occasional glitch with my LEDs. There is also the relay-based SwitchLinc 2477S for use with non-dimmable lights, fans, etc. You can also get plug-in modules for controlling lamps and such.
I found the Insteon devices mostly lived up to their billing. Occasionally I could provoke a glitch by changing from dimming to brightening in rapid succession on a remote switch controlling a load on a distant one. Twice I had to power cycle an Insteon switch that got confused (rather annoying when they're hardwired). Other than that, though, it's been solid. From what I gather, this stuff isn't ever quite as reliable as a 1950s mechanical switch, but at least in this price range, this is about as good as it gets these days.
Well, this post got quite long, so I will have to follow up with part 2 in a little while. I intend to write about sensors and the Z-Wave network (which didn t work quite as easily as Insteon), as well as programming the ISY and my lessons learned along the way.
I promised to write about this a long time ago, oops... :-)
Another ARM port in Debian - yay!
arm64 is officially a release architecture for Jessie, aka Debian
version 8. That's taken a lot of manual porting and development effort
over the last couple of years, and it's also taken a lot of CPU time -
there are ~21,000 source packages in Debian Jessie! As is often the
case for a brand new architecture like arm64 (or AArch64, to use ARM's
own terminology), hardware can be really difficult to get hold of. In
time this will cease to be an issue as hardware becomes more
commoditised, but in Debian we really struggled to get hold of
equipment for a very long time during the early part of the port.
First bring-up in Debian Ports
To start with, we could use ARM's own AArch64 software models to
build the first few packages. This worked, but only very slowly. Then
Chen Baozi and the folks running
the Tianhe-2
supercomputer project in Guangzhou, China contacted us to offer access
to some arm64 hardware, and this is what Wookey used for bootstrapping
the new port in the
unofficial Debian Ports
archive. This has now become the normal way for new architectures to
get into Debian. We got most of the archive built in debian-ports this
way, and we could then use those results to seed the initial core set
of packages in the main Debian archive.
Second bring-up - moving into the main Debian archive
By the time that first Debian bring-up was
done, ARM was starting to produce its
own "Juno" development boards, and with the help of my boss^4 James
McNiven we managed to acquire a couple of those machines for use as
official Debian build machines. The existing machines in China were
faster, but for various reasons quite difficult to maintain as
official Debian machines. So I set up the Junos as buildds just before
going to DebConf in August 2014. They ran very well, and (for dev
boards!) were very fast and stable. They built a large chunk of the
Debian archive, but as the release freeze for Jessie grew close we
weren't quite there. There was a small but persistent backlog of
un-built packages that were causing us issues, plus the Juno machines
are/were not quite suitable as porter boxes for Debian developers all
over the world to use for debugging their packages on the new
architecture.
More horsepower - Linaro machines
This is where Linaro came to
our aid. Linaro's goal is to help improve Free and Open Source
Software on ARM, and one of the more recent projects in Linaro is a
cluster of
servers that are made available for software developers to use to
get early access to ARMv8 (arm64) hardware. It's a great way for
people who are interested in this new architecture to try things out,
port their software or indeed just help with the general porting
effort.
As Debian is seen as such an important part of the FLOSS ecosystem,
we managed to negotiate dedicated access to three of the machines in
that cluster for Debian's use and we set those up in October, shortly
before the freeze for Jessie. Andy Doan spent a lot of his time
getting these machines going for us, and then I set up two of them as
build machines and one as the porter box we were still needing.
With these extra machines available, we quickly caught up with the
ever-busy "Needs-Build" queue and we've got sufficient build power now
to keep things going for the Jessie release. We were officially added
to the list of release architectures at the Cambridge mini-Debconf in
November, and all is looking good now!
And in the future?
I've organised the loan of another arm64 machine from AMD for
Debian to use for further porting and/or building. We're also
expecting that more and more machines will be coming out soon as
vendors move on from prototyping to producing real customer
equipment. Once that's happened, more kit will be available and
everybody will be able to have arm64-powered computers in the server
room, on their desk and even inside their laptop! Mine will be running
Debian Jessie... :-)
Thanks!
There's been a lot of people involved in the Debian arm64
bootstrapping at various stages, so many that I couldn't possibly
credit them all! I'll highlight some, though. :-)
First of all, Wookey's life has revolved around this port for the
last few years, tirelessly porting, fixing and hacking out package
builds to get us going. We've had loads of help from other teams in
Debian, particularly the massive patience of the DSA folks with
getting early machines up and running and the prodding of the
ftpmaster, buildd and release teams when we've been grinding our way
through ever more package builds and dependency loops. We've also had
really good support from toolchain folks in Debian and ARM, fixing
bugs as we've found them by stressing new code and new machines. We've
had a number of other people helping by filing bugs and posting
patches to help us get things built and working. And (last but not
least!) thanks to all the folks who've helped us beg and borrow the
hardware to make the Debian arm64 port a reality.
Rumours of even more ARM ports coming soon are entirely
scurrilous... *grin*
Wow. 3G delta. I haven't booted this laptop for a while. I think I'm finally ready to make the move from gnome2 to gnome3. There are bits that still annoy me, but I think it's off to a good start. Upgrading perl from 5.10 to 5.14.
If you have ever used filelight or baobab, you probably know how useful they are. If you didn't, then you have been missing out on an easy way to spot (and fix) where your disk space is being wasted.
With my recent attempt to upgrade to GNOME 3 (which, because of its innate property of being useless and counter-productive, actually made me use Xfce4 with a mix of GNOME applications, since Xfce lacks a few functionalities here and there), I ran into all sorts of problems.
As a side note, Xfce4 is quite decent, but if you like some icons on your panels to be left aligned and some right aligned, you should know that you can add a Separator item to the panel, right click on it -> Properties and tick the Expand* check box. And if you also set the Transparent style, it will look nice, too.
Back to the topic. With my mix of Xfce and Gnome apps, I configured my top panel to contain a Free Space Checker for my /home file system and today it alerted me that I was low on disk space, so I started baobab to check what I can clean up.
When I found a possible suspect, I wanted to open the directory with a file browser but, instead, Totem started up and began queueing all the files in the offending directory. The problem is that, one way or another, Totem (or VLC) was configured to be the default handler for directories instead of the file manager.
The solution is simple: open the file ~/.local/share/applications/mimeapps.list in an editor and search for the line starting with inode/directory=; you'll see something like this:
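The exact handlers listed vary from system to system; purely as an illustration, the broken line might read:

inode/directory=vlc-usercustom.desktop;nautilus.desktop;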
Remove the offending part, vlc-usercustom.desktop;, save the file and try again to open that directory from baobab. If you are double-lucky :P and now it opens with Totem, you will have to remove a reference to a "totem-usercustom.desktop;" or something of that sort. Now, on my system, that line looks like this:
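Again purely as an illustration (see footnote ** below about the preferred file manager), the cleaned-up line ends up reduced to something like:

inode/directory=nautilus.desktop;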
* I suppose it's called like that in English; I have my desktop in Romanian.
** Except that I would like it to start my desktop's preferred file manager, not Nautilus, but that's another issue.
In light of this, I have packaged and uploaded RDKit and Cinfony over the last weeks and also updated the Debichem task pages, introducing a Cheminformatics Task at the same time. I feel we still need at least tasks for Chemical Education (to expose e.g. Kalzium more prominently), and possibly Protein Docking and Crystallography. So if you have experience/opinions in these fields (or want to propose other fields), drop me a mail or contact us at debichem-devel@lists.alioth.debian.org.
Google Summer of Code 2011 gave a big boost to the development of the shogun machine learning toolbox. In case you have never heard of shogun or machine learning: machine learning involves algorithms that do "intelligent" and even automatic data processing, and is nowadays used everywhere, e.g. to do face detection in your camera, compress the speech in your mobile phone, power the recommendations in your favourite online shop, or predict the solubility of molecules in water and the location of genes in humans, to name just a few examples.
Interested? Then you should give it a try. Some very simple examples stemming from a sub-branch of machine learning called supervised learning illustrate how objects represented by two-dimensional vectors can be classified as good or bad by learning a so-called support vector machine. I would suggest installing the python_modular interface of shogun and running the example interactive_svm_demo.py, also included in the source tarball. Two images illustrating the training of a support vector machine follow (click to enlarge):
Now back to Google Summer of Code: Google sponsored 5 talented students who were working hard on various subjects. As a result we now have a new core developer and various new features implemented in shogun: interfaces to new languages like Java, C#, Ruby and Lua written by Baozeng; a model selection framework written by Heiko Strathmann; many dimension reduction techniques written by Sergey Lisitsyn; Gaussian Mixture Model estimation written by Alesis Novik; and a full-fledged online learning framework developed by Shashwat Lal Das. All of this work has already been integrated in the newly released shogun 1.0.0. In case you want to know more about the students' projects, continue reading below, but before going into more detail I would like to summarize my experience with GSoC 2011.
My Experience with Google Summer of Code
We were a first-time organization, i.e. taking part for the first time in GSoC. Having received many, many student applications, we were very happy to hear that we got 5 very talented students accepted, but we still had to reject about 60 students (only a 7% acceptance rate!). Doing this was an extremely tough decision for us. Each of us ended up scoring students, and even then we had many ties. So in the end we raised the bar by requiring contributions even before the actual GSoC started. This way we already got many improvements like more complete I/O functions, nicely polished ROC and other evaluation routines, new machine learning algorithms like Gaussian naive Bayes and averaged perceptron, and many bugfixes. The quality of the contributions and the independence of the students aided us in coming up with the selection of the final five.
I personally played the role of the administrator and (co-)mentor and scheduled regular (usually monthly) IRC meetings with mentors and students. For other org admins or mentors wanting to get into GSoC, here are my lessons learned:
Set up the infrastructure for your project before GSoC: We transitioned from svn to git (on github) just before GSoC started. While it was a bit tough to work with git in the beginning, it quickly paid off (patch reviewing and discussions on github were really much easier). We did not have proper regression tests running daily during most of GSoC, leaving a number of issues undetected for quite some time. Now that we have buildbots running I keep wondering how we could survive for so long without them :-)
Even though all of our students worked very independently, you want to mentor them very closely in the beginning such that they write code that you like to see in your project, following coding style, utilizing already existing helper routines. We did this and it simplified our lives later - we could mostly accept patches as is.
Expect contributions from external parties: We had contributions to shogun's ruby and csharp interfaces/examples. Ensure that you have some spare manpower to review such additional patches.
Expect critical code review by your students and be open to restructuring the code. As a long-term contributor you probably no longer realize whether your class design / code structure is hard to digest. Freshmen like GSoC students immediately will when they stumble upon inconsistencies. When they discover such issues, discuss with them how to resolve them and don't be afraid of doing even bigger changes in the early GSoC phase (not too big to hinder the work of all of your students though). We had quite some structural improvement in shogun due to several suggestions by our students. Overall the project improved drastically - not just w.r.t. additions.
As a mentor, work with your student on the project. Yes, get your hands dirty too. This way you are much more of a help to the student when things get stuck, and it will be much easier for you to answer difficult questions.
As a mentor, try to answer the questions your students have within a few hours. This keeps the students motivated and you excited that they are doing a great job.
Now please read on to learn about the newly implemented features:
Dimension Reduction Techniques
Sergey Lisitsyn (Mentor: Christian Widmer)
Dimensionality reduction is the process of finding a low-dimensional representation of a high-dimensional one while maintaining the core essence of the data. It is one of the most important practical issues of applied machine learning, and it is widely used for preprocessing real data. With a strong focus on memory requirements and speed, Sergey implemented the following dimension reduction techniques:
See below for some nice illustrations of dimension reduction/embedding techniques (click to enlarge).
Cross-Validation Framework
Heiko Strathmann (Mentor: Soeren Sonnenburg)
Nearly every learning machine has parameters which have to be determined manually. Before Heiko started his project, one had to manually implement cross-validation using (nested) for-loops. In his highly involved project Heiko extended shogun's core to register parameters and ultimately made cross-validation possible. He implemented different model selection schemes (train/validation/test split, n-fold cross-validation, stratified cross-validation, etc.) and created some examples for illustration. Note that various performance measures are available to measure how "good" a model is. The figure below shows the area under the receiver operator characteristic curve as an example.
Interfaces to the Java, C#, Lua and Ruby Programming Languages
Baozeng (Mentor: Mikio Braun and Soeren Sonnenburg)
Baozeng implemented swig typemaps that enable the transfer of objects native to the language one wants to interface to. In his project, he added support for Java, Ruby, C# and Lua. His knowledge about swig helped us to drastically simplify shogun's typemaps for existing languages like octave and python, resolving other corner-case type issues along the way. The addition of these typemaps brings a high-performance and versatile machine learning toolbox to these languages. It should be noted that shogun objects trained in e.g. python can be serialized to disk and then loaded from any other language like, say, lua or java. We hope this helps users working in multiple-language environments.
Note that the syntax is very similar across all languages used; compare for yourself: various examples for all languages (python, octave, java, lua, ruby, and csharp) are available.
Largescale Learning Framework and Integration of Vowpal Wabbit
Shashwat Lal Das (Mentor: John Langford and Soeren Sonnenburg)
Shashwat introduced support for 'streaming' features into shogun. That is, instead of shogun's traditional way of requiring all data to be in memory, features can now be streamed from e.g. disk, enabling the use of massively big data sets. He implemented support for dense and sparse vector based input streams as well as strings, and converted existing online learning methods to use this framework. He was particularly careful and even made it possible to emulate streaming from in-memory features. He finally integrated (parts of) vowpal wabbit, which is a very fast large-scale online learning algorithm based on SGD.
Expectation Maximization Algorithms for Gaussian Mixture Models
Alesis Novik (Mentor: Vojtech Franc)
The Expectation-Maximization algorithm is well known in the machine learning community. The goal of this project was the robust implementation of the Expectation-Maximization algorithm for Gaussian Mixture Models. Several computational tricks have been applied to address numerical and stability issues, like:
Representing covariance matrices as their SVD;
Doing operations in the log domain to avoid overflow/underflow;
Setting minimum variances to avoid singular Gaussians;
Merging/splitting of Gaussians.
An illustrative example of estimating a one- and two-dimensional Gaussian follows below.
Final Remarks
All in all, this year's GSoC has given the SHOGUN project a great push forward and we hope that this will translate into an increased user base and numerous external contributions. Also, we hope that by providing bindings for many languages, we can provide a neutral ground for Machine Learning implementations and that way bring together communities centered around different programming languages. All that's left to say is that given the great experiences from this year, we'd be more than happy to participate in GSoC 2012.
Ever since I blogged about the status of GNOME 3 in Debian experimental, my web logs show that many people are looking for ways to try out GNOME 3 with Debian Squeeze.
No GNOME 3 for Debian 6.0
Don't hold your breath; it's highly unlikely that anyone from the Debian GNOME team will prepare backports of GNOME 3 for Debian 6.0 Squeeze. It's already difficult enough to do everything right in unstable with a solid upgrade path from the current versions in Squeeze.
But if you are brave enough to want to install GNOME 3 with Debian 6.0 on your machine, then I would suggest that you're the kind of person who should run Debian testing instead (or even Debian unstable, it's not so horrible). That's what most people who like to run recent versions of software do.
How to run Debian testing
You're convinced and want to run Debian testing? It's really easy: just edit your /etc/apt/sources.list and replace "stable" with "testing". A complete file could look like this:
# Main repository
deb http://ftp.debian.org/debian testing main contrib non-free
deb-src http://ftp.debian.org/debian testing main contrib non-free
# Security updates
deb http://security.debian.org/ testing/updates main contrib non-free
Now you should be able to run apt-get dist-upgrade and end up with a testing system.
How to install GNOME 3 on Debian testing aka wheezy
If you want to try GNOME 3 before it has landed in testing, you'll have to add unstable and experimental to your sources.list:
deb http://ftp.debian.org/debian unstable main contrib non-free
deb http://ftp.debian.org/debian experimental main contrib non-free
You should not install GNOME 3 from experimental if you're not ready to deal with some problems and glitches. Beware: once you have upgraded to GNOME 3 it will be next to impossible to go back to GNOME 2.32 (you can try it, but it's not officially supported by Debian).
To avoid upgrading all your packages to unstable, you will tell APT to prefer testing with the APT::Default-Release directive:
# cat >/etc/apt/apt.conf.d/local <<END
APT::Default-Release "testing";
END
To allow APT to upgrade the GNOME packages to unstable/experimental, you will also install the following pinning file as /etc/apt/preferences.d/gnome:
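The original file is not reproduced here; as a hedged reconstruction of its shape (the wildcard patterns are illustrative and the real list was longer), it consisted of stanzas along these lines:

Package: gnome* alacarte* libglib2.0* libgtk-3* gir1.2-* mutter* gjs*
Pin: release a=experimental
Pin-Priority: 990

Package: gnome* alacarte* libglib2.0* libgtk-3* gir1.2-* mutter* gjs*
Pin: release a=unstable
Pin-Priority: 990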
Note that I used Pin-Priority: 990 this time (while I used 500 in the article explaining how to install GNOME 3 on top of unstable); that's because you want these packages to have the same priority as those of testing, which have a priority of 990 instead of 500 due to the APT::Default-Release setting.
You're done: your next dist-upgrade should install GNOME 3. It will pull a bunch of packages from unstable too, but that's expected since the packages required by GNOME 3 are spread between unstable and experimental.
Last week's post generated a lot of interest, so I will make a small update to keep you posted on the status of GNOME 3 in Debian experimental.
Experimental is not for everybody
But first let me reiterate this: GNOME 3 is in Debian experimental because it's a work in progress. You should not install it if you can't live with problems and glitches. Beware: once you have upgraded to GNOME 3 it will be next to impossible to go back to GNOME 2.32 (you can try it, but it's not officially supported by Debian). Even with the fallback mode, you won't get the same experience as you had with GNOME 2.32. Many applets are not yet ported to the newest gnome-panel API.
So do not upgrade to it if you're not ready to deal with the consequences. It will come to Debian unstable and to Debian testing over time, and it should be in better shape by that point.
Good progress made
Most of the important modules have been updated to 3.0. You can see the progress here.
The exception is gdm: it still needs to be updated, and the login screen looks quite ugly right now when using GNOME 3.
Frequently Asked Questions and Common Problems
Why do links always open in epiphany instead of iceweasel? You need to upgrade to the latest version of libglib2.0-0, gvfs and gnome-control-center in experimental. Then you can customize the default application used in the control center (under "System Information > Default applications").
You might need to switch to iceweasel 4.0 in experimental to have iceweasel appear in the list of browsers. Or you can edit ~/.local/share/applications/mimeapps.list and put x-scheme-handler/http=iceweasel.desktop;epiphany.desktop; in the Added Associations section (replace the corresponding line if it already exists and lists epiphany only).
The theme looks ugly, and various icons are missing. Ensure that you have installed the latest version of gnome-themes-standard, gnome-icon-theme and gnome-icon-theme-symbolic.
The network icon in the Shell does not work. Ensure you have upgraded both network-manager-gnome and network-manager to the experimental version.
Some applications do not start at all. If an application loads GTK2 and GTK3, it exits immediately with a clear message on the standard error output (Gtk-ERROR **: GTK+ 2.x symbols detected. Using GTK+ 2.x and GTK+ 3 in the same process is not supported.). It usually means that one of the libraries used by that application uses a different version of GTK+ than the application itself. You should report those problems to the Debian bug tracking system if you find any.
Some people also reported failures of all GTK+ applications while using the Oxygen themes. Switching to another theme should help. BTW, the default theme in GNOME 3 is called Adwaita.
Where are my icons on the desktop? They are gone, it's by design. But you can re-enable them with gsettings set org.gnome.desktop.background show-desktop-icons true and by starting nautilus (if it's not already running). (Thanks to bronte for the information.)
Why do I see all applications twice in the shell? The package menu-xdg generates desktop files from the Debian menu information; those end up in a menu category that is hidden by default in the old GNOME menu. GNOME Shell doesn't respect those settings and displays all .desktop files. Remove menu-xdg and you will get a cleaner list of applications.
APT pinning file for the brave
Since last week, we got APT 0.8.14 in unstable, and it supports pattern matching for package names in pinning files. So I can give you a shorter and more complete pinning file thanks to this:
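The file itself is not reproduced here; with the new wildcard support, a hedged reconstruction of its shape is a single short stanza like this (the patterns are illustrative, not the exact originals):

Package: gnome* alacarte* libglib2.0* libgtk-3* gir1.2-* mutter* gjs*
Pin: release a=experimental
Pin-Priority: 500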
Putting the file above in /etc/apt/preferences.d/gnome and having experimental enabled in /etc/apt/sources.list should be enough to enable apt-get dist-upgrade to upgrade to GNOME 3 in experimental.
But if you have packages depending on libimobiledevice1, you might have to wait until #620065 is properly fixed so that libimobiledevice2 is co-installable with libimobiledevice1.
Update: integrated the explanation on how to re-enable the desktop icons, thanks to bronte's comment.
With all the buzz around GNOME 3, I really wanted to try it out for real on my main laptop. It usually runs Debian Unstable, but that's not enough in this case: GNOME 3 is not fully packaged yet and it's only in experimental for now.
I asked Josselin Mouette (of the pkg-gnome team) when he expected it to be available, and he could not really answer because there's lots of work left. Instead, Roland Mas gently answered me "Sooner if you help".
First steps as a GNOME packager
This is pretty common in free software, and for once I followed the advice: I spent most of Sunday helping out with GNOME 3 packaging. I have no prior experience with GNOME packaging, but I'm fairly proficient in Debian packaging in general, so when I showed up on #debian-gnome (irc.debian.org) on Sunday morning, Josselin quickly added me to the team on alioth.debian.org.
Still being a pkg-gnome rookie, I started by reading the documentation on pkg-gnome.alioth.debian.org. This is enough to know where to find the code in the SVN repository, and how to do releases, but it doesn't contain much information about what you need to know to be a good GNOME packager. It would have been great to have some words on introspection and what it changes in terms of packaging, for instance.
Josselin suggested that I start with one of the modules that had not yet been updated at all (most packages have a pre-release version, usually 2.91, in experimental, but some are still at 2.30).
Packages updated and problems encountered
(You can skip this section if you're not into GNOME packaging.)
So I picked up totem. I quickly updated totem-pl-parser as a required build-dependency and made my first mistake by uploading it to unstable (it turns out it's not a problem for this specific package). Totem itself was more complicated, even if some preliminary work was already in the subversion repository. It introduces a new library which required a new package, and I spent a long time debugging why the package would not build in a minimalistic build environment.
Indeed, while the package was building fine in my experimental chroot, I took care to build my test packages like the auto-builders would do, with sbuild (in a sid environment plus the required build-dependencies from experimental), and there it was failing. In fact, it turned out pkg-config was failing because libquvi-dev was missing (it was required by totem-pl-parser.pc), but this did not leave any error message in config.log.
Next, I decided to take care of gnome-screensaver as it was not working for me (I could not unlock the screen once it was activated). When built in my experimental chroot, it was fine, but when built in the minimalistic environment it was failing. It turns out /usr/lib/gnome-screensaver/gnome-screensaver-dialog was loading both libgtk2 and libgtk3 at the same time and was crashing. It's not linked against libgtk2 itself, but it was linked against the unstable version of libgnomekbdui, which still uses libgtk2. Bumping the build-dependency on libgnomekbd-dev fixed the problem.
In the evening, I took care of mutter and gnome-shell, and did some preliminary work on gnome-menus.
Help is still welcome
There's still lots of work to do; you're welcome to do like me and join in to help. Come to #debian-gnome on irc.debian.org, read the documentation and try to update a package (and ask questions when you don't know).
Installation of GNOME 3 from Debian experimental
You can also try GNOME 3 on your Debian machine, but at this point I would advise doing so only if you're ready to invest some time in understanding the remaining problems. It's difficult to cherry-pick just the required packages from experimental; I tried it, and at the start I ended up with a bad user experience (important packages like gnome-themes-standard or gnome-icon-theme not installed/updated, and similar issues).
To help you out with this, here's a file that you can put in /etc/apt/preferences.d/gnome to allow APT to upgrade the most important GNOME 3 packages from experimental:
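That file is not included here; as an incomplete, hedged illustration of its shape (explicit package names, since wildcard pins were not available at the time), it was built from stanzas like:

Package: gnome-shell mutter gnome-session gnome-settings-daemon gnome-control-center gnome-themes-standard gnome-icon-theme gnome-icon-theme-symbolic nautilus
Pin: release a=experimental
Pin-Priority: 500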
The list might not be exhaustive, and sometimes you will have to give supplementary hints to apt for the upgrade to succeed, but it's better than nothing.
I hope you find this useful. I'm enjoying my shiny new GNOME 3 desktop and it's off to a good start. My main complaint is that hamster-applet (the time tracker) has not yet been integrated in the shell.
Tonight was system cleanup day. Baobab showed me where the gigabytes hide. The home directory got rid of huge, old VCS checkouts of various projects. Then it was time to look at the system directories. I cleaned my apt cache with
sudo apt-get clean
and the cache from pbuilder. Then I found something that led to this blog post: /var/log consumed 3.8 GB. The biggest files were
1.8 GB /var/log/kern.log
1.8 GB /var/log/syslog
4.3 MB /var/log/dpkg.log
1.4 MB /var/log/kern.log.1