Search Results: "Noah Meyerhans"

29 December 2021

Noah Meyerhans: When You Could Hear Security Scans

Have you ever wondered what a security probe of a computer sounded like? I'd guess probably not, because on the face of it that doesn't make a whole lot of sense. But there was a time when I could very clearly discern the sound of a computer being scanned. It sounded like a small mechanical heartbeat: Click-click... click-click... click-click...

Prior to 2010, I had a computer under my desk with what at the time were not unheard-of properties: its storage was based on a stack of spinning metal platters (a now-antiquated device known as a "hard drive"), and it had a publicly routable IPv4 address with an unfiltered connection to the Internet. Naturally it ran Linux and an ssh server. As was common in those days, service logging was handled by a syslog daemon. The syslog daemon would sort log messages based on various criteria and record them somewhere. In most simple environments, "somewhere" was simply a file on local storage. When writing to a local file, syslog daemons can optionally be configured to use the fsync() system call to ensure that writes are flushed to disk. Practically speaking, this meant that a page of disk-backed memory would be written to the disk as soon as an event occurred that triggered a log message. Because of the potential performance implications, fsync() was not typically enabled for most log files. However, due to the more sensitive nature of authentication logs, it was often enabled for /var/log/auth.log.

In the first decade of the 2000s, there was a fairly unsophisticated worm loose on the Internet that would probe sshd with some common username/password combinations. The worm would pause for a second or so between login attempts, most likely in an effort to avoid automated security responses. The effect was that a system being probed by this worm would generate a disk write every second, with a very distinct audible signature from the hard drive.

I think this situation is a fun demonstration of a side-channel data leak. It's primitive and doesn't leak very much information, but it was certainly enough to make some inference about the state of the system in question. Of course, side-channel leakage issues have been a concern for ages, but I like this one for its simplicity. It was something that could be explained and demonstrated easily, even to somebody with a relatively limited understanding of how computers work, unlike, for instance, measuring electromagnetic emanations from CPU power management units.

For a different take on the sounds of a computing infrastructure, Peep (The Network Auralizer) won an award at a USENIX conference long, long ago. I'd love to see a modern deployment of such a system; I'm sure you could build something for your cloud deployment fairly easily using something like Amazon EventBridge or Amazon SQS. For more on research into actual real-world side-channel attacks, you can read A Survey of Microarchitectural Side-channel Vulnerabilities, Attacks and Defenses in Cryptography or A Survey of Electromagnetic Side-Channel Attacks and Discussion on their Case-Progressing Potential for Digital Forensics.
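For reference, the sync behavior described above was controlled by the format of the log file's entry in the syslog configuration. Here's a sketch of a classic sysklogd-style /etc/syslog.conf along the lines of what Debian shipped in that era (the exact defaults varied; rsyslog retains the same syntax, though it only honors the sync flag when $ActionFileEnableSync is enabled):

# No leading "-": fsync() after every message, hence the audible heartbeat
auth,authpriv.*         /var/log/auth.log
# Leading "-": skip the sync, trading durability for performance
*.*;auth,authpriv.none  -/var/log/syslog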

27 October 2020

Noah Meyerhans: Debian STS: Short Term Support

In another of my frequent late-night bouts with insomnia, I started thinking about the intersection of a number of different issues facing Debian today, from both a user point of view and a developer point of view. Debian has a reputation for shipping stale software. Versions in the stable branch are often significantly behind the latest upstream development. Debian's policy here has been that this is fine: our goal is to ship something stable, not something bleeding edge. Unofficially, our response to users is: if you need bleeding-edge software, Debian may not be for you. Officially, we have no response to users who want fresher software. Debian also has a problem with a lack of manpower, and I believe that part of why we have a hard time attracting contributors is our reputation for stale software. It might be worth it for us to consider changes to our approach to releases.

What about running testing? People who want newer software often look to Debian's testing branch as a possible solution. It's tempting, as it's a dynamically generated release based on unstable, so it should be quite current. In practice, it's not at all uncommon to find people running testing, and in fact I'm running it right now on the ThinkPad on which this is being typed. However, testing comes with a glaring issue: a lack of timely security support. Security updates must still propagate through unstable, and this can take some time; they can be held up by dependencies, library transitions, or other factors. Nearly every list of best practices for computer security puts keeping software up-to-date at or near the top of the most important steps for safely using a networked computer. Debian's testing branch makes this very difficult, especially when faced with a zero-day with potential for real-world exploitation.

What about stable-backports? Stable-backports is both better and worse than testing. It's better in that it allows you to run a system comprised mainly of packages from the stable branch, which receive updates from the security team in a timely manner. However, it's worse in that packages from the backports repository incur an additional delay. The expectation around backports is that a package migrates naturally from unstable to testing, and then a maintainer uploads a new package, based on the version in testing, specifically targeted at stable-backports. The migration can potentially be bypassed, and we used to have a mechanism for announcing the availability of security updates for the stable-backports archive, but it has gone unused for several years now. The documentation describes a workflow for posting security updates that involves creating a ticket in Debian's RT system, which is going to be quite foreign to most people. News from mid-2019 suggests that this process might change, but nothing appears to have come of it in over a year, and we still haven't seen a proper security advisory for stable-backports in years.
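For context, using stable-backports (a sketch assuming buster is the current stable release; some-package is a placeholder) means adding the repository and then explicitly requesting packages from it, since backports are never installed by default:

# /etc/apt/sources.list.d/backports.list
deb http://deb.debian.org/debian buster-backports main

$ sudo apt update
$ sudo apt install -t buster-backports some-package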

Looking to LTS for ideas The Long Term Support project is an alternative branch of Debian, maintained outside the normal stable release infrastructure. It's stable, and expected to behave that way, but it's not supported by the stable security team or release team. LTS provides a framework for providing security updates via targeted uploads by a team of interested individuals working outside the structure of the existing stable releases. This project seems to be quite active (how much of this is because at least some members are being paid?), and as of this writing has actually published more security advisories in the past month than the stable security team has published for the current stable branch. This is also interesting in that the software in LTS is quite old, having first appeared in a Debian stable release in 2017. LTS is particularly interesting here as it's an example of an initiative within the Debian community taken specifically to address user needs. For some of our users, remaining on an old release is a perfectly valid thing to do, and we recognize this and support them in doing so.

Debian Short-Term Support So, what would it take to create an LTS-like initiative in the other direction? Instead of providing ongoing support for ancient versions of software that previously comprised a stable release, could we build a distribution branch based on something that hasn't yet reached stable? What would that look like? How would it fit into the existing unstable-to-testing migration process? What impact would it have on the size of the archive? Would we want a rolling release, or discrete releases? If the latter, how many would we want between proper stable releases?

The security tracker already tracks outstanding issues in unstable and testing, and can even show issues that have been fixed in unstable but haven't yet propagated to testing. If we want a rolling release, maybe we could just open up the testing-security repository more broadly? There was once a testing security team, which IIRC was chartered to publish updated packages directly to testing-security, along with an associated security advisory. Based on the mailing list history, that effort seems to have shut down around the time of the squeeze (Debian 6.0) release in early 2011. Would it be worth resurrecting it? We've probably got much of the infrastructure required in place already, since it previously existed.

Personally, I'm not really a fan of a pure rolling release. I'd rather see a lightweight release: maybe a snapshot of testing that gets just a date, not a Toy Story codename. Probably skip building a dedicated installer for it; upgrade from stable or use a d-i snapshot from testing if needed. This mini release is supported until the next one comes out, maybe 6 or 8 months later. By "supported", I mean that the Short Term Support team is responsible for it. They can upload security or other critical bug fixes directly to a dedicated repository. When the next STS snapshot is released, packages in the updates repository are either archived, if they're a lower version than the one in the new mini release, or rebuilt against the new mini release and preserved. Using some of the same mechanisms as the LTS release, we'd need
  1. Something to take the place of oldstable, that is, the base release against which updates are released. This could be something that effectively maps to a dated snapshot served by http://snapshot.debian.org/. (Snapshot itself could not currently handle the increased load, as I understand it, but conceptually it's similar.)
  2. Something to take the place of the dist/updates apt repository that hosts the packages that are updated.
In theory, if the infrastructure could support those things, then we could in effect generate a mini release at any time based on a snapshot. I wonder if this could start as something totally unofficial: mirror an arbitrary testing snapshot and provide a place for interested people to publish package updates. A sketch of what a client's apt configuration might look like follows.
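Purely as an illustration of the two pieces above, a client of such a mini release might carry something like this (entirely hypothetical: the STS repository does not exist and the names are invented):

# Frozen base: an arbitrary dated snapshot of testing
deb https://snapshot.debian.org/archive/debian/20201027T000000Z/ testing main
# Hypothetical updates repository maintained by the Short Term Support team
deb https://sts.example.org/debian sts-2020.10-updates main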

Not a proposal, nor a criticism To be clear, I don't really intend this as a proposal; it's really half-baked. Maybe these ideas have already been considered and dismissed. I don't know if people would be interested in working on such a project, and I'm not nearly familiar enough with the Debian archive tooling to even make a guess as to how hard it would be to implement much of it. I'm just posting some ideas that I came up with while pondering something that, from my perspective, is an area where Debian is clearly failing to meet the needs of some of our users.

We know Debian is a popular and respected Linux distribution, and we know people value our stability. However, we also know that people like running Fedora and Ubuntu's non-LTS releases. People like Arch Linux. Not just "end users", but also the people developing the software shipped by the distros themselves. There are a lot of potential contributors to Debian who are kept away by our unwillingness to provide a distro offering both fresh software and security support. I think that we could attract more people to the Debian community if we could provide a solution for these people, and that would ultimately be good for everybody.

Also, please don't interpret this as being critical of the release team, the stable security team, or any other team or individual in Debian. I'm sharing this because I think there are opportunities for Debian to improve how we serve our users, not because I think anybody is doing anything wrong. With all that said, though, let me know if you find the ideas interesting. If you think they're crazy, you can tell me that, too. I'll probably agree with you.

7 July 2020

Noah Meyerhans: Setting environment variables for gnome-session

Am I missing something obvious? When did this get so hard?

In the old days, you configured your desktop session on a Linux system by editing the .xsession file in your home directory. The display manager (login screen) would invoke the system-wide xsession script, which would either defer to your personal .xsession script or set up a standard desktop environment. You could put whatever you wanted in the .xsession script, and it would be executed. If you wanted a specific window manager, you'd run it from .xsession. Start emacs or a browser or an xterm or two? .xsession. It was pretty easy, and super flexible.

For the past 25 years or so, I've used X with an environment started via .xsession. Early on it was fvwm with some programs, then I replaced fvwm with Window Maker (before that was even its name!), then switched to KDE. More recently (OK, like 10 years ago) I gradually replaced KDE with awesome and various custom widgets. Pretty much everything was based on a .xsession script, and that was fine. One particularly nice thing about it was that I could keep .xsession and any related helper programs in a git repository and manage changes over time.

More recently I decided to give Wayland and GNOME an honest look. This has mostly been fine, but everything I've been doing in .xsession is suddenly useless. OK, fine, progress is good. I'll just use whatever new mechanisms exist. How hard can it be?

OK, so here we go. I am running GNOME. This isn't so bad. Alt+F2 brings up the Run Command dialog. It's a different keystroke than what I'm used to, but I can adapt. (Obviously I can reconfigure the key binding, and maybe someday I will, but that's not the point here.) I have some executables in ~/bin. Oops, the run command dialog can't find them. No problem, I just need to update the PATH variable that it sees. Hmmm... So how does one do that, anyway? GNOME has a help system, but searching it doesn't reveal anything. But that's fine, maybe the variable is inherited from the parent process. But there's no xsession script equivalent, since this isn't X anymore at all. The familiar stuff in /etc/X11/Xsession is no longer used. What's the equivalent in Wayland?

Turns out, there isn't a shell script at all anymore, at least not in how Wayland and GNOME interact in Debian's configuration, which seems fairly similar to how anybody else would set this up. The GNOME session runs from a systemd-managed user session. Digging into some web search results suggests that systemd provides a mechanism for setting some environment variables for services started by the user instance of the system. OK, so let's create some files in ~/.config/environment.d and we should be good. Except no, this isn't working. I can set some variables, but something is overriding PATH. I can create this file:
$ cat ~/.config/environment.d/01_path.conf
USER_INITIAL_PATH=${PATH}
PATH=${HOME}/bin:${HOME}/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
USER_CUSTOM_PATH=${PATH}
After logging in, the Run a Command dialog still doesn't see my PATH. So I use Alt+F2 and sh -c "env > /tmp/env" to capture the environment, and this is what I see:
USER_INITIAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PATH=/usr/local/bin:/usr/bin:/bin:/usr/games
USER_CUSTOM_PATH=/home/noahm/bin:/home/noahm/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
So, my environment.d file is there, and it's getting looked at, but something else is clobbering my PATH later in the startup process. But what? Where? Why? The systemd docs don't indicate that there's anything special about PATH, and nothing in /lib/systemd/user-environment-generators/ seems to treat it specially. The string PATH doesn't appear in /lib/systemd/user/ either. Looking for the specific value that's getting assigned to PATH in /etc shows the only occurrence of it being in /etc/zsh/zshenv, so maybe that's where it's coming from? But that should only get set there if PATH is otherwise unset or very minimally set. So I still have no idea where it's coming from.

OK, so ignoring where my custom value is getting overridden, maybe what's configured in /lib/systemd/user will point me in the right direction. systemctl --user status suggests that the interesting part of my session is coming from gnome-shell-wayland.service. Can we use a standard systemd drop-in as documented in systemd.unit(5)? It turns out that we can. This file sets things up the way I want:
$ cat .config/systemd/user/gnome-shell-wayland.service.d/path.conf
[Service]
Environment=PATH=%h/bin:%h/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
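To sanity-check a drop-in like this without logging out, something along these lines should work (the new environment only takes effect when the service is next started, i.e. at your next login):

$ systemctl --user daemon-reload
$ systemctl --user show -p Environment gnome-shell-wayland.service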
Is that right? It really doesn't feel ideal to me. Systemd's Environment directive can't reference existing environment variables, and I can't use conditionals to do things like add a directory to the PATH only if it exists, so it's still a functional regression from what we had before. But at least it's a text file, edited by hand, trackable in git, so that's not too bad.

There are some people out there who hate systemd, and will cite this as an illustration of why. However, I'm not one of those people, and I very much like systemd as an init system. I'd be happy to throw away sysvinit scripts forever, but I'm not quite so happy with the state of .xsession's replacements. Despite the similarities, I don't think .xsession is entirely the same as SysV-style init scripts. The services running on a system are vastly more important than my personal .xsession, and systemd is far better at managing them than the pile of shell scripts used to set things up under sysvinit. Further, systemd the init system maintains compatibility with init scripts, so if you really want to keep using them, you can. As far as I can tell, though, systemd the user session manager does not maintain compatibility with .xsession scripts, and that's unfortunate.

I still haven't figured out what was overriding the ~/.config/environment.d/ setting. Any ideas?

4 March 2020

Noah Meyerhans: Daily VM image builds are available from the cloud team

Did you know that the cloud team generates daily images for buster, bullseye, and sid? They're available for download from cdimage.debian.org and are published to Amazon EC2 and Microsoft Azure. This is done both to exercise our image generation infrastructure and to facilitate testing of the actual images and the distribution in general. I've often found it convenient to have easy access to a clean, up-to-date, disposable virtual machine, and you might too. Please note that these images are intended for testing purposes, and older ones may be removed at any time in order to free up various resources. You should not hardcode references to specific images in any tools or configuration. If you're downloading an image for local use, you'll probably want one of the "nocloud" images. They have an empty root password (the security ramifications of this should be obvious, so please be careful!) and don't rely on any cloud service for configuration. You can use the qcow2 images with QEMU on any Linux system, or retrieve the raw images for use with another VMM. If you want to use the images on Amazon EC2, you can identify the latest nightly build using the AWS CLI as follows:
# Select the most recent bullseye image for arm64 instance types:
$ aws ec2 describe-images --owner 903794441882 \
--region us-east-1 --output json \
--query "Images[?Architecture=='arm64']   [?starts_with(Name, 'debian-11-')]   max_by([], &Name)"
 
"Architecture": "arm64",
"CreationDate": "2020-03-04T05:31:12.000Z",
"ImageId": "ami-056a2fe946ef98607",
"ImageLocation": "903794441882/debian-11-arm64-daily-20200304-189",
"ImageType": "machine",
"Public": true,
"OwnerId": "903794441882",
"State": "available",
"BlockDeviceMappings": [
 
"DeviceName": "/dev/xvda",
"Ebs":  
"Encrypted": false,
"DeleteOnTermination": true,
"SnapshotId": "snap-0d7a569b159964d87",
"VolumeSize": 8,
"VolumeType": "gp2"
 
 
],
"Description": "Debian 11 (daily build 20200304-189)",
"EnaSupport": true,
"Hypervisor": "xen",
"Name": "debian-11-arm64-daily-20200304-189",
"RootDeviceName": "/dev/xvda",
"RootDeviceType": "ebs",
"SriovNetSupport": "simple",
"VirtualizationType": "hvm"
 
# Similarly, select the most recent sid amd64 AMI:
$ aws ec2 describe-images --owner 903794441882 \
--region us-east-1 --output json \
--query "Images[?Architecture=='x86_64']   [?starts_with(Name, 'debian-sid-')]   max_by([], &Name)"
 
"Architecture": "x86_64",
"CreationDate": "2020-03-04T05:13:58.000Z",
"ImageId": "ami-00ec9272298ca9059",
"ImageLocation": "903794441882/debian-sid-amd64-daily-20200304-189",
"ImageType": "machine",
"Public": true,
"OwnerId": "903794441882",
"State": "available",
"BlockDeviceMappings": [
 
"DeviceName": "/dev/xvda",
"Ebs":  
"Encrypted": false,
"DeleteOnTermination": true,
"SnapshotId": "snap-07c3fad3ff835248a",
"VolumeSize": 8,
"VolumeType": "gp2"
 
 
],
"Description": "Debian sid (daily build 20200304-189)",
"EnaSupport": true,
"Hypervisor": "xen",
"Name": "debian-sid-amd64-daily-20200304-189",
"RootDeviceName": "/dev/xvda",
"RootDeviceType": "ebs",
"SriovNetSupport": "simple",
"VirtualizationType": "hvm"
 
If you're using Microsoft Azure images, you can inspect the images with az vm image list and az vm image show, as follows:
$ az vm image list -o table --publisher debian --offer debian-sid-daily --location westeurope --all | sort -k 5 | tail
debian-sid-daily Debian sid-gen2 Debian:debian-sid-daily:sid-gen2:0.20200228.184 0.20200228.184
debian-sid-daily Debian sid Debian:debian-sid-daily:sid:0.20200229.185 0.20200229.185
debian-sid-daily Debian sid-gen2 Debian:debian-sid-daily:sid-gen2:0.20200229.185 0.20200229.185
debian-sid-daily Debian sid Debian:debian-sid-daily:sid:0.20200301.186 0.20200301.186
debian-sid-daily Debian sid-gen2 Debian:debian-sid-daily:sid-gen2:0.20200301.186 0.20200301.186
debian-sid-daily Debian sid Debian:debian-sid-daily:sid:0.20200302.187 0.20200302.187
debian-sid-daily Debian sid-gen2 Debian:debian-sid-daily:sid-gen2:0.20200302.187 0.20200302.187
debian-sid-daily Debian sid Debian:debian-sid-daily:sid:0.20200303.188 0.20200303.188
debian-sid-daily Debian sid-gen2 Debian:debian-sid-daily:sid-gen2:0.20200303.188 0.20200303.188
Offer Publisher Sku Urn Version
$ az vm image show --location westeurope --urn debian:debian-sid-daily:sid:latest
{
    "automaticOsUpgradeProperties": {
        "automaticOsUpgradeSupported": false
    },
    "dataDiskImages": [],
    "hyperVgeneration": "V1",
    "id": "/Subscriptions/428325bd-cc87-41f1-b0d8-8caf8bb80b6b/Providers/Microsoft.Compute/Locations/westeurope/Publishers/debian/ArtifactTypes/VMImage/Offers/debian-sid-daily/Skus/sid/Versions/0.20200303.188",
    "location": "westeurope",
    "name": "0.20200303.188",
    "osDiskImage": {
        "operatingSystem": "Linux",
        "sizeInBytes": 32212255232,
        "sizeInGb": 30
    },
    "plan": null,
    "tags": null
}
More information about cloud computing with Debian is available on the wiki.
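And if you just want to try one of the daily images on Azure, something like the following should work (the resource group and VM name are placeholders):

$ az group create --name debian-test --location westeurope
$ az vm create --resource-group debian-test --name sid-daily \
    --image Debian:debian-sid-daily:sid:latest --generate-ssh-keys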

2 March 2020

Noah Meyerhans: Buster in the AWS Marketplace

When buster was first released back in early July of last year, the cloud team was in the process of setting up some new accounts with AWS to be used for AMI publication. For various reasons, the accounts we used for pre-buster releases were considered unsuitable for long-term use, and the buster release was considered a good logical point to make the switch. Unfortunately, issues within the bureaucracy of both SPI/Debian and AWS delayed the complete switch to the new accounts. We have been publishing buster AMIs using a new account since September of 2019, but we have not been able to list them with the AWS Marketplace. This has reduced the visibility and discoverability of the AMIs and led to numerous questions on the mailing lists and other forums. I'm happy to announce today that the issues blocking Marketplace publication have finally been resolved, and buster is officially available in the AWS Marketplace. Please use it, please leave us ratings and reviews in the Marketplace, and most importantly, please feel welcome to contribute to the Debian cloud team. As always, if you'd rather get the latest details from the Debian wiki, they're available there, or you can query the AWS API directly, e.g. using the awscli command as follows:
$ aws ec2 describe-images --owner 136693071363 \
--region us-east-1 --output json \
--query "Images[?Architecture=='arm64']   [?starts_with(Name, 'debian-10-')]   max_by([], &Name)"
 
"Architecture": "arm64",
"CreationDate": "2020-02-10T19:04:55.000Z",
"ImageId": "ami-031d1abcdcbbfbd8f",
"ImageLocation": "136693071363/debian-10-arm64-20200210-166",
"ImageType": "machine",
"Public": true,
"OwnerId": "136693071363",
"State": "available",
"BlockDeviceMappings": [
 
"DeviceName": "/dev/xvda",
"Ebs":  
"Encrypted": false,
"DeleteOnTermination": true,
"SnapshotId": "snap-0d8459c1e3fe12486",
"VolumeSize": 8,
"VolumeType": "gp2"
 
 
],
"Description": "Debian 10 (20200210-166)",
"EnaSupport": true,
"Hypervisor": "xen",
"Name": "debian-10-arm64-20200210-166",
"RootDeviceName": "/dev/xvda",
"RootDeviceType": "ebs",
"SriovNetSupport": "simple",
"VirtualizationType": "hvm"
 
Hopefully this helps reduce some of the confusion around the availability of the buster AMIs. Next up, GovCloud!

21 April 2017

Noah Meyerhans: Stretch images for Amazon EC2, round 2

Following up on a previous post announcing the availability of a first round of AWS AMIs for stretch, I'm happy to announce the availability of a second round of images. These images address all the feedback we've received about the first round; the notable changes and AMI details are listed on the wiki. As usual, you're encouraged to submit feedback to the cloud team via the cloud.debian.org BTS pseudopackage, the debian-cloud mailing list, or #debian-cloud on irc.

11 February 2017

Noah Meyerhans: Using FAI to customize and build your own cloud images

At this past November's Debian cloud sprint, we classified our image users into three broad buckets in order to help guide our discussions and ensure that we were covering the common use cases. Our users fit generally into one of the following groups:
  1. People who directly launch our image and treat it like a classic VPS. These users most likely will be logging into their instances via ssh and configuring them interactively, though they may also install and use a configuration management system at some point.
  2. People who directly launch our images but configure them automatically via launch-time configuration passed to the cloud-init process on the instance. This automatic configuration may optionally serve to bootstrap the instance into a more complete configuration management system. The user may or may not ever actually log in to the system at all.
  3. People who will not use our images directly at all, but will instead construct their own image based on ours. They may do this by launching an instance of our image, customizing it, and snapshotting it, or they may build a custom image from scratch by reusing and modifying the tools and configuration that we use to generate our images.
This post is intended to help people in the final category get started with building their own cloud images based on our tools and configuration. As I mentioned in my previous post on the subject, we are using the FAI project with configuration from the fai-cloud-images repository. It's probably a good idea to get familiar with FAI and our configs before proceeding, but it's not necessary. You'll need to use FAI version 5.3.4 or greater; 5.3.4 is currently available in stretch and jessie-backports.

Images can be generated locally on your non-cloud host, or on an existing cloud instance. You'll likely find it more convenient to use a cloud instance so you can avoid the overhead of having to copy disk images between hosts. For the most part, I'll assume throughout this document that you're generating your image on a cloud instance, but I'll highlight the steps where it actually matters. I'll also be describing the steps to target AWS, though the general workflow should be similar if you're targeting a different platform.

To get started, install the fai-server package on your instance and clone the fai-cloud-images git repository. (I'll assume the repository is cloned to /srv/fai/config.) In order to generate your own disk image that generally matches what we've been distributing, you'll use a command like:
sudo fai-diskimage --hostname stretch-image --size 8G \
--class DEBIAN,STRETCH,AMD64,GRUB_PC,DEVEL,CLOUD,EC2 \
/tmp/stretch-image.raw
This command will create an 8 GB raw disk image at /tmp/stretch-image.raw, create some partitions and filesystems within it, and install and configure a bunch of packages into it. Exactly what packages it installs and how it configures them is determined by the FAI config tree and the classes provided on the command line. The package_config subdirectory of the FAI configuration contains several files, the names of which are FAI classes. Activating a given class by referencing it on the fai-diskimage command line instructs FAI to process the contents of the matching package_config file, if such a file exists. The files use a simple grammar that lets you request certain packages to be installed or removed. Let's say, for example, that you'd like to build a custom image that looks mostly identical to Debian's images, but that also contains the Apache HTTP server. You might do that by introducing a new file, package_config/HTTPD, as follows:
PACKAGES install
apache2
Then, when running fai-diskimage, you'll add HTTPD to the list of classes:
sudo fai-diskimage --hostname stretch-image --size 8G \
--class DEBIAN,STRETCH,AMD64,GRUB_PC,DEVEL,CLOUD,EC2,HTTPD \
/tmp/stretch-image.raw
Aside from custom package installation, you're likely to also want custom configuration. FAI allows the use of pretty much any scripting language to perform modifications to your image. A common task that these scripts may want to perform is the installation of custom configuration files. FAI provides the fcopy tool to help with this. Fcopy is aware of FAI's class list and is able to select an appropriate file from the FAI config's files subdirectory based on classes. The scripts/EC2/10-apt script provides a basic example of using fcopy to select and install an apt sources.list file. The files/etc/apt/sources.list/ subdirectory contains both an EC2 and a GCE file. Since we've enabled the EC2 class on our command line, fcopy will find and install that file. You'll notice that the sources.list subdirectory also contains a preinst file, which fcopy can use to perform additional actions prior to actually installing the specified file. postinst scripts are also supported.

Beyond package and file installation, FAI also provides mechanisms to support debconf preseeding, as well as hooks that are executed at various stages of the image generation process. I recommend following the examples in the fai-cloud-images repo, as well as the FAI guide, for more details. I do have one caveat regarding the documentation, however: FAI was originally written to help provision bare-metal systems, and much of its documentation is written with that use case in mind. The cloud image generation process is able to ignore a lot of the complexity of those environments (for example, you don't need to worry about pxeboot and tftp!). However, this means that although you get to ignore probably half of the FAI Guide, it's not immediately obvious which half it is that you get to ignore.

Once you've generated your raw image, you can inspect it by telling Linux about the partitions contained within, and then mount and examine the filesystems. For example:
admin@ip-10-0-0-64:~$ sudo partx --show /tmp/stretch-image.raw
NR START      END  SECTORS SIZE NAME UUID
 1  2048 16777215 16775168   8G      ed093314-01
admin@ip-10-0-0-64:~$ sudo partx -a /tmp/stretch-image.raw 
partx: /dev/loop0: error adding partition 1
admin@ip-10-0-0-64:~$ lsblk 
NAME      MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
xvda      202:0    0      8G  0 disk 
├─xvda1   202:1    0 1007.5K  0 part 
└─xvda2   202:2    0      8G  0 part /
loop0       7:0    0      8G  0 loop 
└─loop0p1 259:0    0      8G  0 loop 
admin@ip-10-0-0-64:~$ sudo mount /dev/loop0p1 /mnt/
admin@ip-10-0-0-64:~$ ls /mnt/
bin/   dev/  home/        initrd.img.old@  lib64/       media/  opt/   root/  sbin/  sys/  usr/  vmlinuz@
boot/  etc/  initrd.img@  lib/             lost+found/  mnt/    proc/  run/   srv/   tmp/  var/  vmlinuz.old@
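When you're finished examining the image, unmount it and remove the loop device partition mappings before doing anything else with the file; a short sketch matching the device names above:

admin@ip-10-0-0-64:~$ sudo umount /mnt
admin@ip-10-0-0-64:~$ sudo partx -d /dev/loop0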
In order to actually use your image with your cloud provider, you'll need to register it with them. Strictly speaking, these are the only steps that are provider specific and need to be run on your provider's cloud infrastructure. AWS documents this process in the User Guide for Linux Instances. The basic workflow is:
  1. Attach a secondary EBS volume to your EC2 instance. It must be large enough to hold the raw disk image you created.
  2. Use dd to write your image to the secondary volume, e.g. sudo dd if=/tmp/stretch-image.raw of=/dev/xvdb
  3. Use the volume-to-ami.sh script in the fai-cloud-images repo to snapshot the volume and register the resulting snapshot with AWS as a new AMI. Example: ./volume-to-ami.sh vol-04351c30c46d7dd6e
The volume-to-ami.sh script must be run with access to AWS credentials that grant access to several EC2 API calls: describe-snapshots, create-snapshot, and register-image. It recognizes a --help command-line flag and several options that modify characteristics of the AMI that it registers. When volume-to-ami.sh completes, it will print the AMI ID of your new image. You can now work with this image using standard AWS workflows. As always, we welcome feedback and contributions via the debian-cloud mailing list or #debian-cloud on IRC.
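If you want to scope those credentials down, a minimal IAM policy along these lines should be sufficient (a sketch; the actions correspond to the API calls listed above):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeSnapshots",
                "ec2:CreateSnapshot",
                "ec2:RegisterImage"
            ],
            "Resource": "*"
        }
    ]
}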

28 January 2017

Noah Meyerhans: Call for testing: Stretch cloud images on AWS

Following up on Steve McIntyre's writeup of the Debian Cloud Sprint that took place in Seattle this past November, I'm pleased to announce the availability of preliminary Debian stretch AMIs for Amazon EC2. Pre-generated images are available in all public AWS regions, or you can use FAI with the fai-cloud-images configuration tree to generate your own images. The pre-generated AMIs were created on 25 January, shortly after Linux 4.9 entered stretch, and their details follow:
ami-6d017002 ap-south-1
ami-cc5540a8 eu-west-2
ami-43401925 eu-west-1
ami-870edfe9 ap-northeast-2
ami-812266e6 ap-northeast-1
ami-932e4aff sa-east-1
ami-34ce7350 ca-central-1
ami-9f6dd8fc ap-southeast-1
ami-829295e1 ap-southeast-2
ami-42448a2d eu-central-1
ami-98c9348e us-east-1
ami-57361332 us-east-2
ami-03386563 us-west-1
ami-7a27991a us-west-2
As with the current jessie images, these use a default username of 'admin', with access controlled by the ssh key named in the ec2 run-instances invocation. They're intended to provide a reasonably complete Debian environment without too much bloat. IPv6 addressing should be supported in an appropriately configured VPC environment. These images were built using Thomas Lange's FAI, which has been used for over 15 years for provisioning all sorts of server, workstation, and VM systems, but which only recently was adapted for use generating cloud disk images. It has proven to be well suited to this task though, and image creation is straightforward and flexible. I'll describe in a followup post the steps you can follow to create and customize your own AMIs based on our recipes. In the meantime, please do test these images! You can submit bug reports to the cloud.debian.org metapackage, and feedback is welcome via the debian-cloud mailing list or #debian-cloud on IRC.
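As an example, launching the us-west-2 AMI from the table above might look like this (the instance type and key name are placeholders for your own values):

$ aws ec2 run-instances --region us-west-2 --image-id ami-7a27991a \
    --instance-type t2.micro --key-name my-key
$ ssh admin@<public IP of the new instance>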

14 December 2016

Antoine Beaupré: Debian considering automated upgrades

The Debian project is looking at possibly making automatic minor upgrades to installed packages the default for newly installed systems. While Debian has a reliable and stable package update system that has been an inspiration for multiple operating systems (the venerable APT), upgrades are, usually, a manual process on Debian for most users. The proposal was brought up during the Debian Cloud sprint in November by longtime Debian Developer Steve McIntyre. The rationale was to make sure that users installing Debian in the cloud have a "secure" experience by default, by installing and configuring the unattended-upgrades package within the images. The unattended-upgrades package contains a Python program that automatically performs any pending upgrade and is designed to run unattended. It is roughly the equivalent of doing apt-get update; apt-get upgrade in a cron job, but has special code to handle error conditions, warn about reboots, and selectively upgrade packages. The package was originally written for Ubuntu by Michael Vogt, a longtime Debian developer and Canonical employee. Since there was a concern that Debian cloud images would be different from normal Debian installs, McIntyre suggested installing unattended-upgrades by default on all Debian installs, so that people have a consistent experience inside and outside of the cloud. The discussion that followed was interesting as it brought up key issues one would have when deploying automated upgrade tools, outlining both the benefits and downsides to such systems.

Problems with automated upgrades An issue raised in the following discussion is that automated upgrades may create unscheduled downtime for critical services. For example, certain sites may not be willing to tolerate a master MySQL server rebooting in conditions not controlled by the administrators. The consensus seems to be that experienced administrators will be able to solve this issue on their own, or are already doing so. For example, Noah Meyerhans, a Debian developer, argued that "any reasonably well managed production host is going to be driven by some kind of configuration management system" where competent administrators can override the defaults. Debian, for example, provides the policy-rc.d mechanism to disable service restarts on certain packages out of the box. unattended-upgrades also features a way to disable upgrades on specific packages that administrators would consider too sensitive to restart automatically and will want to schedule during maintenance windows, as sketched at the end of this section. Reboots were another issue discussed: how and when to deploy kernel upgrades? Automating kernel upgrades may mean data loss if the reboot happens during a critical operation. On Debian systems, the kernel upgrade mechanisms already provide a /var/run/reboot-required flag file that tools can monitor to notify users of the required reboot. For example, some desktop environments will pop up a warning prompting users to reboot when the file exists. Debian doesn't currently feature an equivalent warning for command-line operation: Vogt suggested that the warning could be shown along with the usual /etc/motd announcement. The ideal solution here, of course, is reboot-less kernel upgrades, which is also known as "live patching" the kernel. Unfortunately, this area is still in development in the kernel (as was previously discussed here). Canonical deployed the feature for the Ubuntu 16.04 LTS release, but Debian doesn't yet have such capability, since it requires extra infrastructure among other issues. Furthermore, system reboots are only one part of the problem. Currently, upgrading packages only replaces the code and restarts the primary service shipped with a given package. On library upgrades, however, dependent services may not necessarily notice and will keep running with older, possibly vulnerable, libraries. While libc6, in Debian, has special code to restart dependent services, other libraries like libssl do not notify dependent services that they need to restart to benefit from potentially critical security fixes. One solution to this is the needrestart package which inspects all running processes and restarts services as necessary. It also covers interpreted code, specifically Ruby, Python, and Perl. In my experience, however, it can take up to a minute to inspect all processes, which degrades the interactivity of the usually satisfying apt-get install process. Nevertheless, it seems like needrestart is a key component of a properly deployed automated upgrade system.
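As an illustration of that blacklist mechanism, the configuration is a plain APT list (a sketch; mysql-server stands in for whatever package is too sensitive to restart automatically):

// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
Unattended-Upgrade::Package-Blacklist {
    "mysql-server";
};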

Benefits of automated upgrades One thing that was less discussed is the actual benefit of automating upgrades. It is merely described as "secure by default" by McIntyre in the proposal, but no one actually expanded on this much. For me, however, it is now obvious that any out-of-date system will be systematically attacked by automated probes and may be taken over to the detriment of the whole internet community, as we are seeing with Internet of Things devices. As Debian Developer Lars Wirzenius said:
The ecosystem-wide security benefits of having Debian systems keep up to date with security updates by default overweigh any inconvenience of having to tweak system configuration on hosts where the automatic updates are problematic.
One could compare automated upgrades with backups: if they are not automated, they do not exist and you will run into trouble without them. (Wirzenius, coincidentally, also works on the Obnam backup software.) Another benefit that may be less obvious is the acceleration of the feedback loop between developers and users: developers like to know quickly when an update creates a regression. Automation does create the risk of a bad update affecting more users, but this issue is already present, to a lesser extent, with manual updates. And the same solution applies: have a staging area for security upgrades, the same way updates to Debian stable are first proposed before shipping a point release. This doesn't have to be limited to stable security updates either: more adventurous users could follow rolling distributions like Debian testing or unstable with unattended upgrades as well, with all the risks and benefits that implies.

Possible non-issues That there was not a backlash against the proposal surprised me: I expected the privacy-sensitive Debian community to react negatively to another "phone home" system as it did with the Django proposal. This, however, is different than a phone home system: it merely leaks package lists and one has to leak that information to get the updated packages. Furthermore, privacy-sensitive administrators can use APT over Tor to fetch packages. In addition, the diversity of the mirror infrastructure makes it difficult for a single entity to profile users. Automated upgrades do imply a culture change, however: administrators approve changes only a posteriori as opposed to deliberately deciding to upgrade parts they chose. I remember a time when I had to maintain proprietary operating systems and was reluctant to enable automated upgrades: such changes could mean degraded functionality or additional spyware. However, this is the free-software world and upgrades generally come with bug fixes and new features, not additional restrictions.

Automating major upgrades? While automating minor upgrades is one part of the solution to the problem of security maintenance, the other is how to deal with major upgrades. Once a release becomes unsupported, security issues may come up and affect older software. While Debian LTS extends release lifetimes significantly, it merely delays the inevitable major upgrades. In the grand scheme of things, the lifetimes of Linux systems (Debian: 3-5 years, Ubuntu: 1-5 years) are fairly short compared with other operating systems (Solaris: 10-15 years, Windows: 10+ years), which makes major upgrades especially critical. While major upgrades are not currently automated in Debian, they are usually pretty simple: edit sources.list then:
    # apt-get update && apt-get dist-upgrade
But the actual upgrade process is really much more complex. If you run into problems with the above commands, you will quickly learn that you should have followed the release notes, a whopping 20,000-word, ten-section document that outlines all the gory details of the release. This is a real issue for large deployments and for users unfamiliar with the command line. The solution most administrators seem to use right now is to roll their own automated upgrade process. For example, the Debian.org system administrators have their own process for the "jessie" (8.0) upgrade. I have also written a specification of how major upgrades could be automated that attempts to take into account the wide variety of corner cases that occur during major upgrades, but it is currently at the design stage. Therefore, this problem space is generally unaddressed in Debian: Ubuntu does have a do-release-upgrade command, but it is Ubuntu-specific and would need significant changes in order to work in Debian.

Future work Ubuntu currently defaults to "no automation" but, on install, invites users to enable unattended-upgrades or Landscape, a proprietary system-management service from Canonical. According to Vogt, the company supports both projects equally as they differ in scope: unattended-upgrades just upgrades packages while Landscape aims at maintaining thousands of machines and handles user management, release upgrades, statistics, and aggregation. It appears that Debian will enable unattended-upgrades by default on the images built for the cloud. For regular installs, the consensus that has emerged points at having the Debian installer ask users whether they want to disable the feature. One reason why this was not enabled before is that unattended-upgrades had serious bugs in the past that made it less attractive. For example, it would simply fail to follow security updates, a major bug that was fortunately promptly fixed by the maintainer. In any case, it is important to distribute security and major upgrades on Debian machines in a timely manner. In my long experience professionally administering Unix server farms, I have found the upgrade work to be a critical but time-consuming part of my job. During that time, I successfully deployed an automated upgrade system all the way back to Debian woody, using the simpler cron-apt. This approach is, unfortunately, a little brittle and non-standard; it doesn't address the need of automating major upgrades, for which I had to revert to tools like cluster-ssh or more specialized configuration management tools like Puppet. I therefore encourage any effort towards improving that process for the whole community. More information about the configuration of unattended-upgrades can be found in the Ubuntu documentation or the Debian wiki.
Note: this article first appeared in the Linux Weekly News.

5 March 2016

Lunar: Reproducible builds: week 44 in Stretch cycle

What happened in the reproducible builds effort between February 21st and February 27th:

Toolchain fixes Didier Raboud uploaded pyppd/1.0.2-4 which makes PPD generation deterministic. Emmanuel Bourg uploaded plexus-maven-plugin/1.3.8-10 which sorts the components in the components.xml files generated by the plugin. Guillem Jover has implemented stable ordering for members of the control archives in .debs. Chris Lamb submitted another patch to improve reproducibility of files generated by cython.

Packages fixed The following packages have become reproducible due to changes in their build dependencies: dctrl-tools, debian-edu, dvdwizard, dymo-cups-drivers, ekg2, epson-inkjet-printer-escpr, expeyes, fades, foomatic-db, galternatives, gnuradio, gpodder, gutenprint, icewm, invesalius, jodconverter-cli, latex-mk, libiio, libimobiledevice, libmcrypt, libopendbx, lives, lttnganalyses, m2300w, microdc2, navit, po4a, ptouch-driver, pxljr, tasksel, tilda, vdr-plugin-infosatepg, xaos. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues, but not all of them:

tests.reproducible-builds.org The reproducibility tests for Debian now vary the provider of /bin/sh between bash and dash. (Reiner Herrmann)

diffoscope development diffoscope version 50 was released on February 27th. It adds a new comparator for PostScript files, makes the directory tests pass on slower hardware, and no longer hides line-ordering variations in .deb md5sums files. Version 51, uploaded the next day, re-added test data missing from the previous tarball. diffoscope is looking for a new primary maintainer.

Package reviews 87 reviews have been removed, 61 added and 43 updated in the previous week. New issues: captures_shell_variable_in_autofoo_script, varying_ordering_in_data_tar_gz_or_control_tar_gz. 30 new FTBFS bugs have been reported by Chris Lamb, Antonio Terceiro, Aaron M. Ucko, Michael Tautschnig, and Tobias Frost.

Misc. The release team reported on their discussion about the topic of rebuilding all of Stretch to make it self-contained (with respect to reproducibility). Christian Boltz is hoping someone could talk about reproducible builds at the openSUSE conference happening June 22nd-26th in Nürnberg, Germany.

23 April 2015

Noah Meyerhans: We live in strange times

Join Microsoft to celebrate Debian 8 at LinuxFest Northwest

20 March 2015

Noah Meyerhans: Building OpenWRT with Docker

I've run OpenWRT on my home router for a long time, and these days I maintain a couple of packages for the project. In order to make most efficient use of the hardware resources on my router, I run a custom build of the OpenWRT firmware with some default features removed and others added. For example, I install bind and ipsec-tools, while I disable the web UI in order to save space. There are quite a few packages required for the OpenWRT build process. I don't necessarily want all of these packages installed on my main machine, nor do I want to maintain a VM for the build environment. So I investigated using Docker for this. Starting from a base jessie image, which I created using the Docker debootstrap wrapper, the first step was to construct a Dockerfile containing instructions on how to set up the build environment and create a non-root user to perform the build:
FROM jessie:latest
MAINTAINER Noah Meyerhans <frodo@morgul.net>
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install \
asciidoc bash bc binutils bzip2 fastjar flex git-core g++ gcc \
util-linux gawk libgtk2.0-dev intltool jikespg zlib1g-dev make \
genisoimage libncurses5-dev libssl-dev patch perl-modules \
python2.7-dev rsync ruby sdcc unzip wget gettext xsltproc \
libboost1.55-dev libxml-parser-perl libusb-dev bin86 bcc sharutils \
subversion
RUN adduser --disabled-password --uid 1000 --gecos "Docker Builder,,," builder
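From this Dockerfile we generate a Docker image per the docker build documentation; a minimal sketch, assuming the Dockerfile sits in the current directory (the jessie/openwrt tag matches the run commands below):
sudo docker build -t jessie/openwrt .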
At this point, we've got a basic image that does what we want. To initialize the build environment (download package sources, etc), I might run:
docker run -v ~/src/openwrt:/src/openwrt -u builder -t -i jessie/openwrt sh -c "cd /src/openwrt/openwrt && scripts/feeds update -a"
Or configure the system:
docker run -v ~/src/openwrt:/src/openwrt -u builder -t -i jessie/openwrt make -C /src/openwrt/openwrt menuconfig
And finally, build the OpenWRT image itself:
docker run -v ~/src/openwrt:/src/openwrt -u builder -t -i jessie/openwrt make -C /src/openwrt/openwrt -j3

The -v ~/src/openwrt:/src/openwrt flags tell docker to bind mount my ~/src/openwrt directory (which I'd previously cloned using git) to /src/openwrt inside the running container. Without this, one might be tempted to clone the git repo directly into the container at runtime, but changes to non-bind-mount filesystems are lost when the container terminates. That could be suitable for an autobuild environment, in which the sources are cloned at the start of the build and any generated artifacts are archived externally at the end, but it isn't suitable for a dev environment where I might be making and testing small changes at a relatively high frequency.

The -u builder flags tell docker to run the given commands as the builder user inside the container. Recall that builder was created with UID 1000 in the Dockerfile. Since I'm storing the source and artifacts in a bind-mounted directory, all saved files will be created with this UID. Since UID 1000 happens to be my UID on my laptop, this is fine: any files created by builder inside the container will be owned by me outside the container. However, this container should not have to rely on a user with a given UID running it! I'm not sure what the right way to approach this problem is within Docker. It may be that someone using my image should create their own derivative image that creates a user with the appropriate UID (creation of such a derivative image is a cheap operation in Docker). Alternatively, whatever Docker init system is used could start as root, add a new user with a specific UID, and execute the build commands as that new user. Neither of these seems as clean as it could be, though.

In general, Docker seems quite useful for such a build environment. It's easy to set up, and it makes it very easy to generate and share a common collection of packages and configuration. Because images are self-contained, I can reclaim a bunch of disk space by simply executing "docker rmi".

15 January 2015

Noah Meyerhans: Spamassassin updates

If you're running Spamassassin on Debian or Ubuntu, have you enabled automatic rule updates? If not, why not? If possible, you should enable this feature. It should be as simple as setting "CRON=1" in /etc/default/spamassassin. If you choose not to enable this feature, I'd really like to hear why. In particular, I'm thinking about changing the default behavior of the Spamassassin packages such that automatic rule updates are enabled, and I'd like to know if (and why) anybody opposes this.

Spamassassin hasn't been providing rules as part of the upstream package for some time. In Debian, we include a snapshot of the ruleset from an essentially arbitrary point in time in our packages. We do this so Spamassassin will work "out of the box" on Debian systems. People who install Spamassassin from source must download rules using Spamassassin's updates channel. The typical way to use this service is to use cron or something similar to periodically check for rule changes via this service. This allows the anti-spam community to quickly adapt to changes in spammer tactics, and for you to actually benefit from their work by taking advantage of their newer, presumably more accurate, rules. It also allows for quick reaction to issues such as the ones described in bugs 738872 and 774768.

If we do change the default, there are a couple of possible approaches we could take. The simplest would be to simply change the default value of the CRON variable in /etc/default/spamassassin. Perhaps a cleaner approach would be to provide a "spamassassin-autoupdates" package that would simply provide the cron job and a simple wrapper program to perform the updates. The Spamassassin package would then specify a Recommends relationship with this package, thus providing the default-enabled behavior while still providing a clear and simple mechanism to disable it.
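For reference, the relevant knob is a one-line change; with it set, Debian's daily cron job runs sa-update to fetch new rules:

# /etc/default/spamassassin
CRON=1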

27 September 2014

DebConf team: Wrapping up DebConf14 (Posted by Paul Wise, Donald Norwood)

The annual Debian developer meeting took place in Portland, Oregon, 23 to 31 August 2014. DebConf14 attendees participated in talks, discussions, workshops and programming sessions. Video teams captured a lot of the main talks and discussions for streaming for interactive attendees and for the Debian video archive. Between the video, presentations, and handouts, the coverage came from the attendees in blogs, posts, and project updates. We've gathered a few articles for your reading pleasure: Gregor Herrmann and a few members of the Debian Perl group had an informal unofficial pkg-perl micro-sprint and were very productive. Vincent Sanders shared an inspired gift in the form of a plaque given to Russ Allbery in thanks for his tireless work of keeping sanity in the Debian mailing lists. Pictures of the plaque and design scheme are linked in the post. Vincent also shared his experiences of the conference and hopes the organisers have recovered. Noah Meyerhans, adventuring to DebConf by train, (inter)netted some interesting IPv6 data for future road and rail warriors. Hideki Yamane sent a gentle reminder for English speakers to speak more slowly. Daniel Pocock posted of GSoC talks at DebConf14; highlights include the Java Project Dependency Builder and the WebRTC JSCommunicator. Thomas Goirand gives us some insight into a working task list of accomplishments and projects he was able to complete at DebConf14, from the OpenStack discussion to tasksel talks, and completion of some things started last year at DebConf13. Antonio Terceiro blogged about debci and the Debian Continuous Integration project, Ruby, Redmine, and Noosfero. His post also shares the atmosphere of being able to interact directly with peers once a year. Stefano Zacchiroli blogged about a talk he did on debsources, which now has its own HACKING file. Juliana Louback penned: DebConf 2014 and How I Became a Debian Contributor. Elizabeth Krumbach Joseph's in-depth summary of DebConf14 is a great read. She discussed Debian Validation & CI, debci and the Continuous Integration project, Automated Validation in Debian using LAVA, and Outsourcing webapp maintenance. Lucas Nussbaum, by way of a blog post, releases the very first version of Debian Trivia, modelled after the TCP/IP Drinking Game. François Marier shares additional information and further discussion on Outsourcing your webapp maintenance to Debian. Joachim Breitner gave a talk on Haskell and Debian, created a new tool for binNMUs for Haskell packages which runs via cron job. The output is available for Haskell and for OCaml, and he still had a small amount of time to go dancing. Jaldhar Harshad Vyas was not able to attend DebConf this year, but he did tune in to the videos made available by the video team and gives an insightful viewpoint to what was being seen. Jérémy Bobbio posted about Reproducible builds in Debian in his recap of DebConf14. One of the topics at hand involved defining a canonical path where packages must be built and a BoF discussion on reproducible builds, from where the conversation moved to discussions in both Octave and Groff. New helpers dh_fixmtimes and dh_genbuildinfo were added to BTS. The .buildinfo format has been specified on the wiki and reviewed. Lots of work is being done in the project; interested parties can help with the TODO list or join the new IRC channel #debian-reproducible on irc.debian.org. Steve McIntyre posted a Summary from the d-i / debian-cd BoF at DC14, with some of the session video available online. 
Current jessie D-I needs some help with the testing on less common architectures and languages, and release scheduling could be improved. Future plans: Switching to a GUI by default for jessie, a default desktop and desktop choice, artwork, bug fixes and new architecture support. debian-cd: Things are working well. Improvement discussions are on selecting which images to make I.E. netinst, DVD, et al., debian-cd in progress with http download support, Regular live test builds, Other discussions and questions revolve around which ARM platforms to support, specially-designed images, multi-arch CDs, and cloud-init based images. There is also a call for help as the team needs help with testing, bug-handling, and translations. Holger Levsen reports on feedback about the feedback from his LTS talk at DebConf14. LTS has been perceived well, fits a demand, and people are expecting it to continue; however, this is not without a few issues as Holger explains in greater detail the lacking gatekeeper mechanisms, and how contributions are needed from finance to uploads. In other news the security-tracker is now fixed to know about old stable. Time is short for that fix as once jessie is released the tracker will need to support stable, oldstable which will be wheezy, and oldoldstable. Jonathan McDowell s summary of DebConf14 includes a fair perspective of the host city and the benefits of planning of a good DebConf14 location. He also talks about the need for facetime in the Debian project as it correlates with and improves everyone s ability to work together. DebConf14 also provided the chance to set up a hard time frame for removing older 1024 bit keys from Debian keyrings. Steve McIntyre posted a Summary from the State of the ARM BoF at DebConf14 with updates on the 3 current ports armel, armhf and arm64. armel which targets the ARM EABI soft-float ARMv4t processor may eventually be going away, while armhf which targets the ARM EABI hard-float ARMv7 is doing well as the cross-distro standard. Debian is has moved to a single armmp kernel flavour using Device Tree Blobs and should be able to run on a large range of ARMv7 hardware. The arm64 port recently entered the main archive and it is hoped to release with jessie with 2 official builds hosted at ARM. There is talk of laptop development with an arm64 CPU. Buildds and hardware are mentioned with acknowledgements for donated new machines, Banana Pi boards, and software by way of ARM s DS-5 Development Studio - free for all Debian Developers. Help is needed! Join #debian-arm on irc.debian.org and/or the debian-arm mailing list. There is an upcoming Mini-DebConf in November 2014 hosted by ARM in Cambridge, UK. Tianon Gravi posted about the atmosphere and contrast between an average conference and a DebConf. Joseph Bisch posted about meeting his GSOC mentors, attending and contributing to a keysigning event and did some work on debmetrics which is powering metrics.debian.net. Debmetrics provides a uniform interface for adding, updating, and viewing various metrics concerning Debian. Harlan Lieberman-Berg s DebConf Retrospective shared the feel of DebConf, and detailed some of the work on debugging a build failure, work with the pkg-perl team on a few uploads, and work on a javascript slowdown issue on codeeditor. Ana Guerrero L pez reflected on Ten years contributing to Debian.

24 August 2014

Noah Meyerhans: Debconf by train

Today is the first time I've taken an interstate train trip in something like 15 years. A few things about the trip were pleasantly surprising, though most of them will probably come as no surprise to anybody who rides trains regularly:
  1. Less time wasted in security theater at the station prior to departure.
  2. On-time departure.
  3. More comfortable seats than on a plane or bus.
  4. Quiet.
  5. Permissive free wifi.
Wifi was the biggest surprise. Not that it existed, since we're living in the future and wifi is expected everywhere, but how open it was. It's IPv4 only and stuck behind a NAT, which isn't a big surprise, but there isn't any filtering of non-web TCP ports, and even non-TCP protocols are allowed out. Even my aiccu IPv6 tunnel worked fine from the train, although I did experience some weird behavior with it. I haven't used aiccu much in quite a while, since I have a native IPv6 connection at home, but it can be convenient while travelling. I'm still trying to figure out what happened today, though. The first symptoms were that, although I could ping IPv6 hosts, I could not actually log in via IMAP or ssh. Tcpdump showed all the standard symptoms of a PMTU blackhole: small packets flow fine, large ones are dropped. The interface MTU is set to 1280, which is the minimum MTU for IPv6, and any path on the internet is expected to handle packets of at least that size. Experimentation via ping6 reveals that the largest payload size I can successfully exchange with a peer is 820 bytes. Add 8 bytes for the ICMPv6 header for 828 bytes of payload, plus 40 bytes for the IPv6 header, and you get an 868-byte packet, which is well under what should be the MTU for this path. I've worked around the problem with an ip6tables rule that rewrites the MSS on outgoing SYN packets to 760 bytes, which, with the 40-byte IPv6 header and 20-byte TCP header, keeps full-sized segments within the observed 820-byte limit:
sudo ip6tables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 760
It is working well and will allow me to publish this from the train, which I'd otherwise have been unable to do. But... weird.
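Incidentally, that kind of ping6 experimentation is easy to script. Here's a minimal sketch of a payload-size probe, assuming iputils ping6 on Linux; example.net is a placeholder for a dual-stack host you control, and the search bounds assume the tunnel's 1280-byte interface MTU:
# Binary-search the largest ICMPv6 echo payload that survives the path.
# 1232 = 1280 (IPv6 minimum MTU) - 40 (IPv6 header) - 8 (ICMPv6 header).
# example.net is a stand-in; point this at a host you control.
lo=0; hi=1232
while [ $((hi - lo)) -gt 1 ]; do
    mid=$(((lo + hi) / 2))
    # -c 1: one probe; -W 2: wait 2s for a reply; -s: payload size in bytes
    if ping6 -c 1 -W 2 -s "$mid" example.net >/dev/null 2>&1; then
        lo=$mid    # reply received; try a larger payload
    else
        hi=$mid    # dropped; try a smaller payload
    fi
done
echo "largest working payload: $lo bytes"
echo "implied path MTU: $((lo + 8 + 40)) bytes"   # add ICMPv6 + IPv6 headers
On a path like the one described above, this should converge on a payload of about 820 bytes, implying the 868-byte limit.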

26 July 2008

Philipp Kern: Stable Point Release: Etch 4.0r4 (aka etchnhalf)

Another point release for Etch has been done; now it's time for the CD team to roll out new images after the next mirror pulse. The official announcements (prepared by Alexander Reichle-Schmehl, thanks!) will follow shortly afterwards. FTP master of the day was Joerg Jaspert, who did his first point release since Woody, as he told us on IRC. We appreciate your work, and your spending time on this so shortly before going to Argentina. This point release includes the etchnhalf update, introducing a new kernel image (based on 2.6.24) and some driver updates. Additionally, the infamous openssl hole will be fixed for good, even for new installs. Again I want to present a list of people who contributed to this release. It cannot be complete, as I got the information out of the Changed-by fields of the uploads. From the Release Team, we had dann frazier (who drove the important kernel part of etchnhalf), Luk Claes, Neil McGovern, Andreas Barth, Martin Zobel-Helas and me working on it. ;-)

12 April 2008

Philipp Kern: Wrapping up Sarge into a nice package

We escorted Sarge to its last home. 3.1r8 is done, thanks to all the people who made it possible. A big thanks goes to James Troup, our ftpmaster of the day, who did all the grunt work of getting a new point release out of the door. To bring in a more personal feeling of who makes this all possible, here is a list of people contributing uploads to 3.1r8 (mostly people from our fabulous Security Team). I would also like to thank dann frazier, Luk Claes, Martin Zobel-Helas and Neil McGovern for helping with the preparation of the point release.

20 May 2007

Christian Perrier: Samba week

Last week was mostly dedicated to samba, again. After the previous week's hurry dealing with three security issues, the announcements were published by the security team. This was a not-so-simple process, as even sarge was affected by the security issues and, to make things less simple, only by two of them. I also slightly messed up the uploads to the security upload queues, which added more work for Noah Meyerhans. Anyway, we were still pretty happy with the result. However, immediately after the packages hit unstable and etch, a few problems arose. Dealing with all these bugs was time consuming, but it finally convinced me to re-subscribe to a few upstream mailing lists... which is slowly taking samba out of the "maintenance mode" we adopted in recent years. I hope we'll be able to sustain the load. Anyway, we already succeeded during that week in helping upstream investigate some issues in their recent release. More samba uploads will come soon, so if you use samba, please update your systems regularly.

14 May 2007

Christian Perrier: Samba week-end

Today marks the end of a pretty long week-end of dealing with samba. About 10 days ago, we (the samba packaging team in Debian) were privately notified of security issues found by the Samba Team developers in this quite popular package, very close to one of their bi-annual releases, namely 3.0.25. No less than three security issues were unveiled. Two of them (CVE-2007-2446 and CVE-2007-2447) affect all currently supported Debian releases, namely sarge, etch and sid. One (CVE-2007-2444) affects both etch and unstable. This was the beginning of long days of preparation, helped out by Noah Meyerhans from the security team. Finally, updated versions for sarge and etch were uploaded to oldstable-security and stable-security for the autobuilders to catch up (they are still catching up, because I uploaded the updates without the .orig.tar.gz file, which apparently unveiled a bug in the security autobuilders). These updates for sarge and etch should be available as soon as the security team completes the final checks and approval, I guess. And, today, the Samba Team gave us early access to the newly released 3.0.25 version of samba so that we could build and upload the package two hours before it was officially announced. :-) Thanks a lot again to the Samba Team for their care, and more particularly to Gerald "Jerry" Carter (someone I have deep respect for, along with Jeremy Allison and Andrew Tridgell). Thanks to Steve Langasek for his hard work preparing the 3.0.25 release (tracking down an upstream bug in the Python bindings build). And, finally, thanks to the Debian security team and more particularly to Noah Meyerhans for his work on backporting some of the patches. This was a tiring but really motivating week-end. I hope that all Debianers who use Samba will enjoy the result.