Search Results: "mt"

25 February 2015

Jonathan Dowland: CD ripping on Linux

A few months ago I decided it would be good to re-rip my CD collection, retaining a lossless digital copy, and set about planning the project. I then realised I hadn't the time to take the project on for the time being and parked it, but not before figuring a few bits and pieces out. Starting at the beginning, with ripping the CDs. The most widely used CD ripping software on Linux systems is still cdparanoia, which is pretty good, but it's still possible to get bad CD rips, and I've had several in a very small sample size. On Windows systems, the recommended ripper is Exact Audio Copy, or EAC for short. EAC calculates checksums of ripped CDs or tracks and compares them against an online database of rips called AccurateRip. It also calibrates your CD drive against the same database and uses the calculated offset during the rip. I wasn't aware of any AccurateRip-supporting rippers on Linux until recently, when Mark Brown introduced me to morituri. I've done some tentative experiments and it appears to produce identical rips to EAC for some sample CDs (with different CD reading hardware too). Fundamentally, AccurateRip is a proprietary database, so I think the longer-term goal in the F/OSS community should be to create an alternative, open database of rip checksums and drive offsets. The audio community has already been burned by the CDDB database going proprietary, but at least we now have the far superior MusicBrainz.

16 February 2015

Julien Danjou: Hacking Python AST: checking methods declaration

A few months ago, I wrote the definitive guide about Python method declaration, which was quite a success. I still fight every day in OpenStack to have developers declare their methods correctly in the patches they submit. Automation plan The thing is, I really dislike doing the same things over and over again. Furthermore, I'm not perfect either, and I miss a lot of these kinds of problems in the reviews I do. So I decided to replace myself with a program: a more scalable and less error-prone version of my brain. In OpenStack, we rely on flake8 to do static analysis of our Python code in order to spot common programming mistakes. But we are really pedantic, so we wrote some extra hacking rules that we enforce on our code. To that end, we wrote a flake8 extension called hacking. I really like these rules; I even recommend applying them in your own projects. Though I might be biased, or a victim of Stockholm syndrome. Your call. Anyway, it's pretty clear that I need to add a check for method declaration in hacking. Let's write a flake8 extension! Typical error The typical error I spot is the following:
class Foo(object):
    # self is not used, the method does not need
    # to be bound, it should be declared static
    def bar(self, a, b, c):
        return a + b - c

That would be the correct version:
class Foo(object):
    @staticmethod
    def bar(a, b, c):
        return a + b - c

This kind of mistake is not a show-stopper. It's just not optimized. Why you have to manually declare static or class methods might be a language issue, but I don't want to debate Python misfeatures or design flaws here. Strategy We could probably use some big magical regular expression to catch this problem. flake8 is based on the pep8 tool, which can do a line-by-line analysis of the code, but that approach would make it very hard and error-prone to detect this pattern. However, it's also possible to do an AST-based analysis on a per-file basis with pep8, so that's the method I picked, as it's the most solid. AST analysis I won't dive deeply into the Python AST and how it works. You can find plenty of sources on the Internet, and I even talk about it a bit in my book The Hacker's Guide to Python. To check whether all the methods in a Python file are correctly declared, we essentially need to walk the module's AST, find the class definitions, iterate over their methods, and check whether each method's first argument is actually used in its body. Flake8 plugin In order to register a new plugin in flake8 via hacking, we just need to add an entry in setup.cfg:
[entry_points]
flake8.extension =
    H904 = hacking.checks.other:StaticmethodChecker
    H905 = hacking.checks.other:StaticmethodChecker

We register two hacking codes here. As you will notice later, we are actually going to add an extra check in our code for the same price. Stay tuned. The next step is to write the actual plugin. Since we are using an AST-based check, the plugin needs to be a class following a certain signature:
@core.flake8ext
class StaticmethodChecker(object):
    def __init__(self, tree, filename):
        self.tree = tree

    def run(self):
        pass

So far, so good, and pretty easy. We store the tree locally, then we just need to use it in run() and yield the problems we discover following pep8's expected signature, which is a tuple of (lineno, col_offset, error_string, code). This AST is made for walking The ast module provides the walk function, which allows us to iterate easily over a tree. We'll use that to run through the AST. First, let's write a loop that ignores statements that are not class definitions.
@core.flake8ext
class StaticmethodChecker(object):
    def __init__(self, tree, filename):
        self.tree = tree

    def run(self):
        for stmt in ast.walk(self.tree):
            # Ignore non-class
            if not isinstance(stmt, ast.ClassDef):
                continue

We still don't check for anything, but we know how to ignore statements that are not class definitions. The next step is to ignore whatever is not a function definition. We just iterate over the body of the class definition.
for stmt in ast.walk(self.tree):
    # Ignore non-class
    if not isinstance(stmt, ast.ClassDef):
        continue
    # If it's a class, iterate over its body member to find methods
    for body_item in stmt.body:
        # Not a method, skip
        if not isinstance(body_item, ast.FunctionDef):
            continue

We're all set for checking the method, which is body_item. First, we need to check whether it's already declared as static. If so, we don't have to do any further checks and we can bail out.
for stmt in ast.walk(self.tree):
    # Ignore non-class
    if not isinstance(stmt, ast.ClassDef):
        continue
    # If it's a class, iterate over its body member to find methods
    for body_item in stmt.body:
        # Not a method, skip
        if not isinstance(body_item, ast.FunctionDef):
            continue
        # Check that it has a decorator
        for decorator in body_item.decorator_list:
            if (isinstance(decorator, ast.Name)
                    and decorator.id == 'staticmethod'):
                # It's a static function, it's OK
                break
        else:
            # Function is not static, we do nothing for now
            pass

Note that we use the special for/else form of Python, where the else is evaluated unless we used break to exit the for loop.
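As a quick aside (my own example, not from the original post), here is the for/else behaviour in isolation:
for item in [1, 2, 3]:
    if item == 2:
        break
else:
    print("not printed: we broke out of the loop")

for item in [1, 3, 5]:
    if item == 2:
        break
else:
    print("printed: the loop completed without hitting break")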
for stmt in ast.walk(self.tree):
    # Ignore non-class
    if not isinstance(stmt, ast.ClassDef):
        continue
    # If it's a class, iterate over its body member to find methods
    for body_item in stmt.body:
        # Not a method, skip
        if not isinstance(body_item, ast.FunctionDef):
            continue
        # Check that it has a decorator
        for decorator in body_item.decorator_list:
            if (isinstance(decorator, ast.Name)
                    and decorator.id == 'staticmethod'):
                # It's a static function, it's OK
                break
        else:
            try:
                first_arg = body_item.args.args[0]
            except IndexError:
                yield (
                    body_item.lineno,
                    body_item.col_offset,
                    "H905: method misses first argument",
                    "H905",
                )
                # Check next method
                continue

We've finally added a check! We grab the first argument from the method signature. If that fails, we know there's a problem: you can't have a bound method without a first argument, so we raise the H905 code to signal a method that is missing its first argument. Now you know why we registered this second pep8 code along with H904 in setup.cfg: it was a good opportunity to kill two birds with one stone. The next step is to check whether that first argument is used in the code of the method.
for stmt in ast.walk(self.tree):
    # Ignore non-class
    if not isinstance(stmt, ast.ClassDef):
        continue
    # If it's a class, iterate over its body member to find methods
    for body_item in stmt.body:
        # Not a method, skip
        if not isinstance(body_item, ast.FunctionDef):
            continue
        # Check that it has a decorator
        for decorator in body_item.decorator_list:
            if (isinstance(decorator, ast.Name)
                    and decorator.id == 'staticmethod'):
                # It's a static function, it's OK
                break
        else:
            try:
                first_arg = body_item.args.args[0]
            except IndexError:
                yield (
                    body_item.lineno,
                    body_item.col_offset,
                    "H905: method misses first argument",
                    "H905",
                )
                # Check next method
                continue
            for func_stmt in ast.walk(body_item):
                if six.PY3:
                    if (isinstance(func_stmt, ast.Name)
                            and first_arg.arg == func_stmt.id):
                        # The first argument is used, it's OK
                        break
                else:
                    if (func_stmt != first_arg
                            and isinstance(func_stmt, ast.Name)
                            and func_stmt.id == first_arg.id):
                        # The first argument is used, it's OK
                        break
            else:
                yield (
                    body_item.lineno,
                    body_item.col_offset,
                    "H904: method should be declared static",
                    "H904",
                )

To that end, we iterate using ast.walk again and look for a use of the same variable name (usually self, but it could be anything, like cls for @classmethod) in the body of the function. If it is not found, we finally yield the H904 error code. Otherwise, we're good. Conclusion I've submitted this patch to hacking, and, fingers crossed, it might be merged one day. If it's not, I'll create a new Python package with that check for flake8. The actual submitted code is a bit more complex, to take into account the use of the abc module, and includes some tests. As you may have noticed, the code walks over the module's AST several times. There might be a couple of optimizations to browse the AST in only one pass, but I'm not sure it's worth it considering the actual usage of the tool. I'll leave that as an exercise for the reader interested in contributing to OpenStack. Happy hacking!
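If you want to play with the walk outside of flake8, it can be condensed into a standalone script. The following is my own Python-3-only sketch, not the code submitted to hacking, but it performs the same two checks:
import ast

SOURCE = """
class Foo(object):
    def bar(self, a, b, c):
        return a + b - c
"""

# Condensed version of the walk described above, for quick experiments.
def check(tree):
    for stmt in ast.walk(tree):
        if not isinstance(stmt, ast.ClassDef):
            continue
        for body_item in stmt.body:
            if not isinstance(body_item, ast.FunctionDef):
                continue
            # Already declared static: nothing to do
            if any(isinstance(d, ast.Name) and d.id == 'staticmethod'
                   for d in body_item.decorator_list):
                continue
            try:
                first_arg = body_item.args.args[0]
            except IndexError:
                yield body_item.lineno, "H905: method misses first argument"
                continue
            # Is the first argument (usually self) used anywhere in the body?
            if not any(isinstance(n, ast.Name) and n.id == first_arg.arg
                       for n in ast.walk(body_item)):
                yield body_item.lineno, "H904: method should be declared static"

for lineno, message in check(ast.parse(SOURCE)):
    print("line %d: %s" % (lineno, message))
Running this prints "line 3: H904: method should be declared static" for the Foo.bar example from the beginning of the post.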
The Hacker's Guide to Python
A book I wrote talking about designing Python applications, state of the art, advice to apply when building your application, various Python tips, etc. Interested? Check it out.

13 February 2015

Daniel Leidert: Motion picture capturing: Debian + motion + Logitech C910 - part II

In my recent attempt to set up a motion-detection camera I was disappointed that my camera, which should be able to record at 30 fps in 720p mode, only reached 10 fps using the motion software. Now I've got a bit further. This seems to be an issue with the format used by motion. I've checked the output of v4l2-ctl ...
$ v4l2-ctl -d /dev/video1 --list-formats-ext
[..]
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'YUYV'
Name : YUV 4:2:2 (YUYV)
[..]
Size: Discrete 1280x720
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.133s (7.500 fps)
Interval: Discrete 0.200s (5.000 fps)
[..]

Index : 1
Type : Video Capture
Pixel Format: 'MJPG' (compressed)
Name : MJPEG
[..]
Size: Discrete 1280x720
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.042s (24.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.133s (7.500 fps)
Interval: Discrete 0.200s (5.000 fps)
[..]
... and motion:
$ motion
[..]
[1] [NTC] [VID] v4l2_set_pix_format: Config palette index 17 (YU12) doesn't work.
[1] [NTC] [VID] v4l2_set_pix_format: Supported palettes:
[1] [NTC] [VID] v4l2_set_pix_format: (0) YUYV (YUV 4:2:2 (YUYV))
[1] [NTC] [VID] v4l2_set_pix_format: 0 - YUV 4:2:2 (YUYV) (compressed : 0) (0x56595559)
[1] [NTC] [VID] v4l2_set_pix_format: (1) MJPG (MJPEG)
[1] [NTC] [VID] v4l2_set_pix_format: 1 - MJPEG (compressed : 1) (0x47504a4d)
[1] [NTC] [VID] v4l2_set_pix_format Selected palette YUYV
[1] [NTC] [VID] v4l2_do_set_pix_format: Testing palette YUYV (1280x720)
[1] [NTC] [VID] v4l2_do_set_pix_format: Using palette YUYV (1280x720) bytesperlines 2560 sizeimage 1843200 colorspace 00000008
[..]
OK, so both formats, YUYV and MJPG, are supported and recognized, and I can choose either via the v4l2_palette configuration variable; citing motion.conf:
# v4l2_palette allows to choose preferable palette to be use by motion
# to capture from those supported by your videodevice. (default: 17)
# E.g. if your videodevice supports both V4L2_PIX_FMT_SBGGR8 and
# V4L2_PIX_FMT_MJPEG then motion will by default use V4L2_PIX_FMT_MJPEG.
# Setting v4l2_palette to 2 forces motion to use V4L2_PIX_FMT_SBGGR8
# instead.
#
# Values :
# V4L2_PIX_FMT_SN9C10X : 0 'S910'
# V4L2_PIX_FMT_SBGGR16 : 1 'BYR2'
# V4L2_PIX_FMT_SBGGR8 : 2 'BA81'
# V4L2_PIX_FMT_SPCA561 : 3 'S561'
# V4L2_PIX_FMT_SGBRG8 : 4 'GBRG'
# V4L2_PIX_FMT_SGRBG8 : 5 'GRBG'
# V4L2_PIX_FMT_PAC207 : 6 'P207'
# V4L2_PIX_FMT_PJPG : 7 'PJPG'
# V4L2_PIX_FMT_MJPEG : 8 'MJPEG'
# V4L2_PIX_FMT_JPEG : 9 'JPEG'
# V4L2_PIX_FMT_RGB24 : 10 'RGB3'
# V4L2_PIX_FMT_SPCA501 : 11 'S501'
# V4L2_PIX_FMT_SPCA505 : 12 'S505'
# V4L2_PIX_FMT_SPCA508 : 13 'S508'
# V4L2_PIX_FMT_UYVY : 14 'UYVY'
# V4L2_PIX_FMT_YUYV : 15 'YUYV'
# V4L2_PIX_FMT_YUV422P : 16 '422P'
# V4L2_PIX_FMT_YUV420 : 17 'YU12'
#
v4l2_palette 17
Now motion uses YUYV as the default mode, as shown by its output. So it seems all I have to do is choose MJPEG in my motion.conf:
v4l2_palette 8
Testing again ...
$ motion
[..]
[1] [NTC] [VID] vid_v4lx_start: Using V4L2
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 25 (ret 0 )
Corrupt JPEG data: 5 extraneous bytes before marker 0xd6
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 14 (ret 0 )
Corrupt JPEG data: 1 extraneous bytes before marker 0xd5
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 36 (ret 0 )
Corrupt JPEG data: 3 extraneous bytes before marker 0xd2
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 58 (ret 0 )
Corrupt JPEG data: 1 extraneous bytes before marker 0xd7
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 80 (ret 0 )
Corrupt JPEG data: 4 extraneous bytes before marker 0xd7
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [ERR] [ALL] motion_init: Error capturing first image
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 16 items
Corrupt JPEG data: 4 extraneous bytes before marker 0xd1
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
Corrupt JPEG data: 11 extraneous bytes before marker 0xd1
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
Corrupt JPEG data: 3 extraneous bytes before marker 0xd4
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
Corrupt JPEG data: 7 extraneous bytes before marker 0xd1
[..]
... and another issue turns up :( The output above goes on and on and there is no video capturing. According to $searchengine, the above happens to a lot of people. I found one often-suggested fix: pre-load v4l2convert.so from libv4l-0:
$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l2convert.so motion
But the problem persists and I'm out of ideas :( So at the moment it looks like I cannot use the MJPEG format and don't get 30 fps at 1280x720 pixels. While writing this I then discovered a solution by good old trial and error: leaving the v4l2_palette variable at its default value 17 (YU12) and pre-loading v4l2convert.so makes motion use YU12, and the framerate at least rises to 24 fps:
$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l2convert.so motion
[..]
[1] [NTC] [VID] v4l2_do_set_pix_format: Testing palette YU12 (1280x720)
[1] [NTC] [VID] v4l2_do_set_pix_format: Using palette YU12 (1280x720) bytesperlines 1280 sizeimage 1382400 colorspace 00000008
[..]
[1] [NTC] [EVT] event_new_video FPS 24
[..]
Finally! :) The results are nice. It might even be a good idea to limit the framerate a bit, to e.g. 20. So this is a tested configuration for the Logitech C910 running at a resolution of 1280x720 pixels:
v4l2_palette 17
width 1280
height 720
framerate 20
minimum_frame_time 0
pre_capture 10 # 0,5 seconds pre-recording
post_capture 50 # 2,5 seconds after-recording
auto_brightness on
ffmpeg_variable_bitrate 2 # best quality
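(A small addition of mine, not from the original post: as far as I understand motion's configuration, pre_capture and post_capture are counted in frames, so the times in the comments above follow directly from the configured framerate.)
# At framerate 20, the buffer sizes above translate to:
framerate = 20
print(10 / float(framerate))   # pre_capture 10  -> 0.5 seconds
print(50 / float(framerate))   # post_capture 50 -> 2.5 seconds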
All this made me curious which framerate is possible at a resolution of 1920x1080 pixels and what the results look like. Although I get 24 fps there too, the resulting movie suffers from jumps every few frames. So here I got pretty good results with a more conservative setting. When increasing the framerate (tested up to 15 fps with good results), pre_capture needed to be decreased accordingly, to values between 1 and 3, to minimize jumps:
v4l2_palette 17
width 1920
height 1080
framerate 12
minimum_frame_time 0
pre_capture 6 # 0,5 seconds pre-recording
post_capture 30 # 2,5 seconds after-recording
auto_brightness on
ffmpeg_variable_bitrate 2 # best quality
Both configurations lead to satisfying results. Of course the latter will easily fill your hard drive :) TODO I guess the results can be optimized further by playing around with ffmpeg_bps and ffmpeg_variable_bitrate. Maybe then it is possible to record without jumps at higher framerates too(?). I also didn't test the various norm settings (PAL, NTSC, etc.).

11 February 2015

John Goerzen: Reactions to Has modern Linux lost its way? and the value of simplicity

Apparently I touched a nerve with my recent post about the growing complexity of issues. There were quite a few good comments, which I'll mention here. It's provided me some clarity on the problem, in fact. I'll try to distill a few more thoughts here. The value of simplicity and predictability The best software, whether it's operating systems or anything else, is predictable. You read the documentation, or explore the interface, and you can make a logical prediction that when I do action X, the result will be Y. grep and cat are perfect examples of this. The more complex the rules in the software, the harder it is for us to predict. It leads to bugs, and it leads to inadvertent security holes. Worse, it leads to people being unable to fix things themselves, one of the key freedoms that Free Software is supposed to provide. The more complex software is, the fewer people will be able to fix it by themselves. Now, I want to clarify: I hear a lot of talk about ease of use. Gnome removes options in my print dialog box to make it easier to use. (This is why I do not use Gnome. It actually makes it harder to use, because now I have to go find some obscure way to just make the darn thing print.) A lot of people conflate ease of use with ease of learning, but in reality, I am talking about neither. I am talking about ease of analysis. The Linux command line may not have pointy-clicky icons, but at least at one time, once you understood ls -l and how groups, users, and permission bits interacted, you could fairly easily conclude who had access to what on a system. Now we have a situation where the answer to this is quite unclear in terms of desktop environments (apparently some distros ship network-manager so that all users on the system share the wifi passwords they enter. A surprise, eh?) I don't mind reading a manpage to learn about something, so long as the manpage was written to inform. With this situation of dbus/cgmanager/polkit/etc, here's what it feels like. This, to me, is the crux of the problem: it feels like we are in a twisty maze, every passage looks alike, and our flashlight ran out of batteries in 2013. The manpages, to the extent they exist for things like cgmanager and polkit, describe the texture of the walls in our cavern, but don't give us a map to the cave. Therefore, we are each left to piece it together little bits at a time, but there are traps that keep moving around, so it's slow going. And it's a really big cave. Other user perceptions There are a lot of comments on the blog about this. It is clear that the problem is not specific to Debian. For instance: This stuff is really important, folks. People being able to maintain their own software, work with it themselves, etc. is one of the core reasons that Free Software exists in the first place. It is a fundamental value of our community. For decades, we have been struggling for survival, for relevance. When I started using Linux, it was both a question and an accomplishment to have a usable web browser on many platforms. (Netscape Navigator was closed source back then.) Now we have succeeded. We have GPL-licensed and BSD-licensed software running on everything from our smartphones to cars. But we are snatching defeat from the jaws of victory, because just as we are managing to remove the legal roadblocks that kept people from true mastery of their software, we are erecting technological ones that make the step into the Free Software world so much more difficult than it needs to be.
We no longer have to craft Modelines for X, or compile a kernel with just the right drivers. This is progress. Our hardware is mostly auto-detected and our USB serial dongles work properly more often on Linux than on Windows. This is progress. Even our printers and scanners work pretty darn well. This is progress, too. But in the place of all these things, now we have userspace mucking it up. We have people with mysterious errors that can't be easily assisted by the elders in the community, because the elders are just as mystified. We have bugs crop up that would once have been shallow, but are now non-obvious. We are going to leave a sour taste in people's mouths, and stir repulsion instead of interest among those just checking it out. The ways out It's a nasty predicament, isn't it? What are your ways out of that cave without being eaten by a grue? Obviously the best bet is to get rid of the traps and the grues. Somehow the people that are working on this need to understand that elegance is a feature, a darn important feature. Sadly I think this ship may have already sailed. Software diagnosis tools like Enrico Zini's seat-inspect idea can also help. If we have something like an ls for polkit that can reduce all the complexity to something more manageable, that's great. The next best thing is a good map: good manpages, detailed logs, good error messages. If software would be more verbose about the permission errors, people could get a good clue about where to look. If manpages for software didn't just explain the cavern wall texture, but explained how this room relates to all the other nearby rooms, it would be tremendously helpful. At present, I am unsure if our problem is one of very poor documentation, or is so bad that good documentation like this is impossible because the underlying design is so complex it defies being documented in something smaller than a book (in which case, our ship has not just sailed but is taking on water). Counter-argument: progress One theme that came up often in the comments is that this is necessary for progress. To a certain extent, I buy that. I get why udev is important. I get why we want the DE software to interact well. But here's my thing: this already worked well in wheezy. Gnome, XFCE, and KDE software all could mount/unmount my drives. I am truly still unsure what problem all this solved. Yes, cloud companies have demanding requirements about security. I work for one. Making security more difficult to audit doesn't do me any favors, I can assure you. The systemd angle To my surprise, systemd came up quite often in the discussion, despite the fact that I mentioned I wasn't running systemd-sysv. It seems like the new desktop environment ecosystem is the systemd ecosystem in a lot of people's minds. I'm not certain this is justified; systemd was not my first choice, but as I said in an earlier blog post, "jessie will still boot". A final note I still run Debian on all my personal boxes and I'm not going to change. It does awesome things. For under $100, I built a music-playing system, with Raspberry Pis, fully synced throughout my house, using a little scripting and software. The same thing from Sonos would have cost thousands. I am passionate about this community and its values. Even when jessie releases with polkit and all the rest, I'm still going to use it, because it is still a good distro from good people.

10 February 2015

Petter Reinholdtsen: Nude body scanner now present on Norwegian airport

Aftenposten, one of the largest newspapers in Norway, today reports that three of the nude body scanners are now in use at Gardermoen, the main airport in Norway. This way travelers can have their bodies photographed without clothes when visiting Norway. Of course this horrible news is presented with a positive spin, stating that "now travelers can move past the security check point faster and more efficiently", but failing to mention that the machines in question take pictures of their nude bodies and store them internally in the computer, while only presenting a sketch figure of the body to the public. The article is written in a way that leaves the impression that the new machines do not take these nude pictures and only create the sketch figures. In reality the same nude pictures are still taken, but not presented to everyone. They are still available to the owners of the system and the people doing maintenance on the scanners, as long as they are taken and stored. Wikipedia has more on full body scanners, including example images and a summary of the controversy about these scanners. Personally I will decline to use these machines, as I believe strip searches of my body are a very intrusive attack on my privacy, and not something everyone should have to accept in order to travel.

29 January 2015

Craig Small: Juniper Firewalls and IPv6

A little firewall I found an interesting side-effect of the Juniper firewalls when you introduce IPv6. In hindsight it appears perfectly reasonable, but if you are not aware of it in the first place you may have a much more permissive firewall than you thought. My setup is such that my internet address changes every time I connect to an ISP. I have services behind the Juniper that I want to expose onto the Internet, in this case a mailserver. Most of the documentation says to have a reasonably open firewall rule and a NAT rule.
[edit security nat]
destination {
    pool mailserver-smtp {
        address 10.1.1.1/32 port 25;
    }
}

[edit security policies]
from-zone Internet to-zone Internal {
    policy Mailserver {
        match {
            source-address any;
            destination-address any;
            application smtp;
        }
        then {
            permit;
        }
    }
}
Pretty standard stuff, and it's documented in plenty of places. We cannot set a destination address because it's dynamic, so we set it to any. My next step was: OK, my mailserver is on IPv4 and IPv6, how do I let the IPv6 connections in? Any means ANY That's where I noticed I had a problem: they could already get in. In fact anyone could get to the mailserver (good) and to anything else that had an open SMTP port on my network and used IPv6 (bad). It seems that a destination address of any means ANY IPv4 or IPv6 address. Neither I nor the writers of the documentation had initially thought of what happens when you add IPv6. The solution is not to allow any destination, but rather any IPv4 address plus the specific mailserver address. First create an address-book entry for the IPv6 address of the mailserver.
[edit security address-book global]
address mailserver-ipv6 2001:Db8:1111:2222::100/128;
then adjust the rule
[edit security policies]
from-zone Internet to-zone Internal {
    policy Mailserver {
        match {
            source-address any;
            destination-address [ any-ipv4 mailserver-ipv6 ];
            application smtp;
        }
        then {
            permit;
        }
    }
}
That way there is access to the mailserver using either IPv4 or IPv6. I'm also going to see whether I can adjust the rule so the destination only includes the mailserver addresses (both IPv4 and IPv6), even though the IPv4 address is NATed, and see if that works.

25 January 2015

Richard Hartmann: KDE battery monitor

Dear lazyweb, using a ThinkPad X1 Carbon with Debian unstable and KDE 4.14.2, I have not had battery warnings for a few weeks now. The battery status can be read out via acpi -V as well as via the KDE widget. Hibernation via systemctl hibernate works as well. What does not work is the warning when my battery is low, or automagic hibernation when shutting the lid or when the battery level is critical. From what I gather, something in the communication between upower and KDE broke down, but I can't find what it is. I have also been told that Cinnamon is affected as well, so this seems to be a more general problem. Sadly, neither I nor anyone else affected has been able to fix this. So, dear lazyweb, please help. In loosely related news, this old status is still valid. UMTS is stable-ish now, but even though I saved the SIM's PIN, KDE always displays a "SIM PIN unlock request" prompt after booting or hibernating. Once I enter that PIN, systemd tells me that a system policy prevents the change and wants my user password. If anyone knows how to get rid of that, I would also appreciate any pointers.

Jonathan Dowland: Frontier: First Encounters

Cobra mk. 3
Four years ago, whilst looking for something unrelated, I stumbled across Tom Morton's port of "Frontier: Elite II" for the Atari to i386/OpenGL. This took me right back to playing Frontier on my Amiga in the mid-nineties. I spent a bit of time replaying Frontier and its sequel, First Encounters, for which there exists an interesting family of community-written game engines based on a reverse-engineering of the original DOS release. I made some scrappy notes about engines, patches etc. at the time, which are on my frontier page. With the recent release of Elite: Dangerous, I thought I'd pick up where I left off in 2010 and see if I could get the Thargoid ship. I'm nowhere near yet, but I've spent some time trying to maximize income during the game's initial Soholian Fever period. My record in a JJFFE-derived engine (and winning the Wiccan Ware race during the same period) is currently 727,800. Can you do better?

21 January 2015

Tomasz Buchert: Expired keys in Debian keyring

A new version of Stellarium was recently released (0.13.2), so I wanted to upload it to Debian unstable as I usually do. And so I did, but it was rejected without me even knowing, since I got no e-mail response from the ftp-masters. It turns out that my GPG key in the Debian keyring expired recently, and so my upload was rightfully rejected. Not a big deal, actually, since you can easily move the expiration date (even after the key has expired!). I did that already and the updated key has already propagated, but be aware that the Debian keyring does not synchronize with other keyservers! To update your key in Debian (if you are a Debian Developer or Maintainer) you must send your updated key to keyring.debian.org like this (replace my ID with your own):
$ gpg --keyserver keyring.debian.org --send-keys 24B17D29
The Debian keyring is distributed as a standard DEB package, and apparently it may take up to a month for your updated key to land in Debian. It seems that I may be unable to upload packages for some time. But the whole story got me thinking: am I the only one who forgot to update his key in the Debian keyring? To verify it I wrote the following snippet (works in Python 2 and 3!) which shows expired keys in the Debian keyrings (well, two of them). As a bonus, it also shows keys that have non-UTF-8 characters in their UIDs; see #738483 for more information.
#
# be sure to do "apt-get install python-gnupg"
#
import gnupg
import datetime
def check_keys(keyring, tab = ""):
    gpg = gnupg.GPG(keyring = keyring)
    gpg.decode_errors = 'replace' # see: https://bugs.debian.org/738483
    keys = gpg.list_keys()
    now = datetime.datetime.now()
    for key in keys:
        uids = key['uids']
        uid = uids[0]
        if key['expires'] != '':
            expire = datetime.datetime.fromtimestamp(int(key['expires']))
            diff = expire - now
            if diff.days < 0:
                print(u'{} EXPIRED: Key of {} expired {} days ago.'.format(tab, uid, -diff.days))
        mangled_uids = [ u for u in uids if u'\ufffd' in u ]
        if len(mangled_uids) > 0:
            print(u'{} MANGLED: Key of {} has some mangled uids: {}'.format(tab, uid, mangled_uids))
keyrings = [
    "/usr/share/keyrings/debian-keyring.gpg",
    "/usr/share/keyrings/debian-maintainers.gpg"
]
for keyring in keyrings:
    print(u"CHECKING  ".format(keyring))
    check_keys(keyring, tab = "    ")
I'm not going to show the output of this code, because it contains names and e-mail addresses which I really shouldn't post. But you can run it yourself. You will see that there is a small group of people with expired keys (including me!). Interestingly, some keys expired a long time ago: there is one that expired more than 7 years ago! The moral of the story is: yes, you should have an expiration date on your key for safety reasons, but be careful - it can surprise you at the worst moment.

Jonathan McDowell: Moving to Jekyll

I've been meaning to move away from Movable Type for a while; they no longer provide the Open Source variant, I've had some issues with the commenting side of things (more the fault of spammers than Movable Type itself) and there are a few minor niggles that I wanted to resolve. Nothing has been particularly pressing me to move and I haven't been blogging as much, so while I've been keeping an eye open for a replacement I haven't exerted a lot of energy into the process. I have a little bit of time at present so I asked around on IRC for suggestions. One was ikiwiki, which I use as part of helping maintain the SPI website (and think is fantastic for that), the other was Jekyll. Both are available as part of Debian Jessie. Jekyll looked a bit fancier out of the box (I'm no web designer so pre-canned themes help me a lot), so I decided to spend some time investigating it a bit more. I'd found a Movable Type to ikiwiki converter which provided a starting point for exporting from the SQLite3 DB I was using for MT. Most of my posts are in markdown, the rest (mostly from my Blosxom days) are plain HTML, so there wasn't any need to do any conversion on the actual content. A minor amount of poking convinced Jekyll to use the same URL format (permalink: /:year/:month/:title.html in the _config.yml did what I wanted) and I had to do a few bits of fix-up for some images that had been uploaded into MT, but overall fairly simple stuff. Next I had to think about comments. My initial thought was to just ignore them for the moment; they weren't really working on the MT install that well so it's not a huge loss. I then decided I should at least see what the options were. Google+ has the ability to embed in your site, so I had a play with that. It worked well enough but I didn't really want to force commenters into the Google ecosystem. Next up was Disqus, which I've seen used in various places. It seems to allow logins via various 3rd parties, can cope with threading and deals with the despamming. It was easy enough to integrate to play with, and while I was doing so I discovered that it could cope with importing comments. So I tweaked my conversion script to generate a WXR-based file of the comments. This then imported easily into Disqus (and I also double checked that the export system worked). I'm sure the use of a third party to handle comments will put some people off, but given the ability to export I'm confident that if I really feel like dealing with despamming comments again at some point I can switch to something locally hosted. I do wish it didn't require Javascript, but again it's a trade-off I'm willing to make at present. Anyway. Thanks to Tollef for the pointer (and others who made various suggestions). Hopefully I haven't broken (or produced a slew of new posts for) any of the feed readers pointed at my site (but you should update to use feed.xml rather than any of the others - I may remove them in the future once I see usage has died down). (On the off chance it's useful to someone else, the conversion script I ended up with is available. There's a built-in Jekyll importer that may be a better move, but I liked ending up with a git repository containing a commit for each post.)
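(For illustration only, and not the author's actual script: the Movable Type table and column names below are guesses. A minimal sketch of this kind of conversion could read the posts out of the SQLite3 database and write Jekyll-style files with YAML front matter.)
import sqlite3

# Hypothetical schema; the real Movable Type tables/columns may differ.
conn = sqlite3.connect("mt.db")
rows = conn.execute(
    "SELECT entry_created_on, entry_title, entry_text FROM mt_entry")
for created_on, title, text in rows:
    date = created_on[:10]   # "YYYY-MM-DD HH:MM:SS" -> "YYYY-MM-DD"
    slug = "".join(c if c.isalnum() else "-" for c in title.lower()).strip("-")
    with open("_posts/%s-%s.md" % (date, slug), "w") as f:
        f.write("---\n")
        f.write('title: "%s"\n' % title.replace('"', '\\"'))
        f.write("layout: post\n")
        f.write("---\n\n")
        f.write(text)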

18 January 2015

Chris Lamb: Adjusting a backing track with SoX

Earlier today I came across some classical sheet music that included a "playalong" CD, just like a regular recording except it omits the solo cello part. After a quick listen it became clear there were two problems: the recording is tuned to A=442Hz rather than the A=440Hz I play at, and the tempo of some tracks is slightly off. SoX, the "Swiss Army knife of sound processing programs", can easily adjust the latter, but to remedy the former it must be provided with a dimensionless "cent" unit, i.e. 1/100th of a semitone, rather than the 442Hz and 440Hz reference frequencies. First, we calculate the cent difference: 1200 x log2(442/440), which is roughly 7.85 cents. Next, we rip the material from the CD:
$ sudo apt-get install ripit flac
[..]
$ ripit --coder 2 --eject --nointeraction
[..]
And finally we adjust the tempo and pitch:
$ apt-get install sox libsox-fmt-mp3
[..]
$ sox 01.flac 01.mp3 pitch -7.85 tempo 1.00 # (Tuning notes)
$ sox 02.flac 02.mp3 pitch -7.85 tempo 0.95 # Too fast!
$ sox 03.flac 03.mp3 pitch -7.85 tempo 1.01 # Close..
$ sox 04.flac 04.mp3 pitch -7.85 tempo 1.03 # Too slow!
(I'm converting to MP3 at the same time, as it'll be more convenient on my phone.)
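(For reference, and not part of the original post: the cent figure used above can be derived with a couple of lines of Python.)
import math

# Shift from the recording's A=442 Hz tuning down to A=440 Hz, in cents
# (1/100th of a semitone, 1200 cents per octave).
cents = 1200 * math.log(442.0 / 440.0, 2)
print(round(cents, 2))   # 7.85, hence "pitch -7.85" to lower the pitch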

15 January 2015

Lunar: 80%

Unfortunately I could not go on stage at the 31st Chaos Communication Congress to present reproducible builds in Debian alongside Mike Perry from the Tor Project and Seth Schoen from the Electronic Frontier Foundation. I've tried to make up for it, though, and we have made amazing progress. Wiki reorganization What was a massive and frightening wiki page now looks much more welcoming: Screenshot of ReproducibleBuilds on Debian wiki Depending on what one is looking for, it should be much easier to find. There's now a high-level status overview given on the landing page, maintainers can learn how to make their packages reproducible, enthusiasts can more easily find what can help the project, and we have even started writing some history. .buildinfo for all packages New year's eve saw me hacking Perl to write dpkg-genbuildinfo. Similar to dpkg-genchanges, it's run by dpkg-buildpackage to produce .buildinfo control files. This is where the build environment and the hashes of the source and binary packages are recorded. This script, integrated with dpkg, replaces the previous debhelper interim solution written by Niko Tyni. We used to fix mtimes in control.tar and data.tar using a specific addition to debhelper named dh_fixmtimes. To better support the ALWAYS_EXCLUDE environment variable and for pragmatic reasons, we moved the process into dh_builddeb. Both changes were quickly pushed to our continuous integration platform. Before, only packages using dh would create a .buildinfo and thus eventually be considered reproducible. With these modifications, many more packages had their chance and this shows: Growing amount of packages considered reproducible Yes, with our experimental toolchain we are now at more than eighty percent! That's more than 17200 source packages! srebuild Another big item on the todo-list was crossed off by Johannes Schauer. srebuild is a wrapper around sbuild:
Given a .buildinfo file, it first finds a timestamp of Debian Sid from snapshot.debian.org which contains the requested packages in their exact versions. It then runs sbuild with the right architecture as given by the .buildinfo file and the right base system to upgrade from, as given by the version of the base-files package in the .buildinfo file. Using two hooks it will install the right package versions and verify that the installed packages are in the right versions before the build starts.
Understanding problems Over 1700 packages have now been reviewed to understand why build results could not be reproduced on our experimental platform. The variations between the two builds are currently limited to time and file ordering, but this has still uncovered many problems. There are still toolchain fixes to be made (more than 180 packages for the PHP registry) which can make many packages reproducible at once, but others like C pre-processor macros will require many individual changes. debbindiff, the main tool used to understand differences, has gained support for .udeb, TrueType and OpenType fonts, PNG and PDF files. It's less likely to crash on problems with encoding or external tools. But most importantly for large packages, it has been made a lot faster, thanks to Reiner Herrmann and Helmut Grohne. Helmut has also been able to spot cross-compilation issues by using debbindiff! Targeting our efforts It gives warm fuzzy feelings to hit the 80% mark, but it would be a bit irrelevant if it did not concern packages that matter. Thankfully, Holger worked on producing statistics for more specific package sets. Mattia Rizzolo has also done great work to improve the scripts generating the various pages visible on reproducible.debian.net. All essential and build-essential packages, except gcc and bash, are considered reproducible or have patches ready. After some lengthy builds, I also managed to come up with a patch to make linux build reproducibly. Miscellaneous After my initial attempt to modify r-base to remove a timestamp in R packages, Dirk Eddelbuettel discussed the issue with upstream and came up with a better patch. The latter has already been merged upstream! Dirk's solution is to allow timestamps to be set using an external environment variable. This is also how I modified FontForge to make it possible to reproduce fonts. Identifiers generated by xsltproc have also been an issue. After reviewing my initial patch, Andrew Awyer came up with a much nicer solution. Its potential performance implications need to be evaluated before submission, though. Chris West has been working on packages built with Maven, amongst other things. PDF files generated by GhostScript, another painful source of troubles, are being worked on by Peter De Wachter. Holger got X.509 certificates signed by the CA cartel for jenkins.debian.net and reproducible.debian.net. No more scary security messages now. Let's hope next year we will be able to get certificates through Let's Encrypt! Let's make a difference together As you can imagine with all that happened in the past weeks, the #debian-reproducible IRC channel has been a cool place to hang out. It's very energizing to get together and share contributions, exchange tips and discuss the hardest points. Mandatory quote:
* h01ger is very happy to see again and again how this is a nice
         learning circle...! i've learned a whole lot here too... in
         just 3 months... and its going on...!
Reproducible builds are not going to change anything for most of our users. They simply don't care how they get software on their computer. But they do care about getting the right software without having to worry about it. That's our responsibility as developers. Enabling users to trust their software is important, and a major contribution that we, as Debian, can make to the wider free software movement. Once Jessie is released, we should make a collective effort to make reproducible builds a highlight of our next release.

8 January 2015

Jonathan McDowell: Cup!

I got a belated Christmas present today. Thanks Jo + Simon!

22 December 2014

Jonathan Dowland: Blade Runner: Alien Easter Egg?

A few weeks ago I went to see Blade Runner: The Final Cut in a one-off showing at the Tyneside Cinema. I've only watched the Final Cut once before, and that viewing was a bit compromised, so it was nice to see it properly, and on a massive screen with decent surround sound too. Whilst watching it I saw something that I thought might potentially have been a visual reference to Scott's earlier movie, Alien. When Deckard is climbing onto the roof of the Bradbury building, there's a decorative motif that, to me, looks very Giger-esque, biomechanical, a bit like a chest-burster.
click for full frame
I went back and took a screenshot from the Blu Ray at the same point. What do you think? As it happens, whilst the inner regions of the building in the movie are the Bradbury building (or at least the entrance hall), I don't think the upper exterior shots are.
Purge
There are a few other, well-known references to Alien in Blade Runner, although they are likely simply re-used effects rather than explicit easter eggs. The display screen in Gaff's Spinner is re-used from the Narcissus (here's a comparison) and the ambience in Deckard's apartment also featured in the Nostromo's medical bay. Some enterprising person has put together a 12-hour session of that particular sound effect looping. There are a number of other comparisons and spots at this propsummit.com thread.

30 November 2014

Gregor Herrmann: RC bugs 2014/47-48

these are the RC bugs I've worked on during the last two weeks:

Johannes Schauer: simple email setup

I was unable to find a good place that describes how to create a simple self-hosted email setup. The most surprising discovery was how much already works after:
apt-get install postfix dovecot-imapd
Right after having finished the installation I was able to receive email (but only into /var/mail in mbox format) and send email (but not from any other host). So while I expected a pretty complex setup, it turned out to boil down to just adjusting some configuration parameters.

Postfix The two interesting files to configure postfix are /etc/postfix/main.cf and /etc/postfix/master.cf. A commented version of the former exists in /usr/share/postfix/main.cf.dist. Alternatively, there is the ~600k word strong man page postconf(5). The latter file is documented in master(5).

/etc/postfix/main.cf I changed the following in my main.cf
@@ -37,3 +37,9 @@
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
+
+home_mailbox = Mail/
+smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination permit_sasl_authenticated
+smtpd_sasl_type = dovecot
+smtpd_sasl_path = private/auth
+smtp_helo_name = my.reverse.dns.name.com
At this point, also make sure that the parameters smtpd_tls_cert_file and smtpd_tls_key_file point to the right certificate and private key files. So either change these values or replace the content of /etc/ssl/certs/ssl-cert-snakeoil.pem and /etc/ssl/private/ssl-cert-snakeoil.key. The home_mailbox parameter sets the default path for incoming mail. Since there is no leading slash, this puts mail into $HOME/Mail for each user. The trailing slash is important as it specifies "qmail-style delivery", which means maildir format. The default of the smtpd_recipient_restrictions parameter is permit_mynetworks reject_unauth_destination, so this just adds the permit_sasl_authenticated option. This is necessary to allow users to send email once they have successfully verified their login through dovecot. The dovecot login verification is activated through the smtpd_sasl_type and smtpd_sasl_path parameters. I found it necessary to set the smtp_helo_name parameter to the reverse DNS name of my server. This was necessary because many other email servers will only accept email from a server with a valid reverse DNS entry. My hosting provider charges USD 7.50 per month to change the default reverse DNS name, so the easy solution is to instead just adjust the name announced in the SMTP helo.

/etc/postfix/master.cf The file master.cf is used to enable the submission service. The following diff just removes the comment character from the appropriate section.
@@ -13,12 +13,12 @@
#smtpd pass - - - - - smtpd
#dnsblog unix - - - - 0 dnsblog
#tlsproxy unix - - - - 0 tlsproxy
-#submission inet n - - - - smtpd
-# -o syslog_name=postfix/submission
-# -o smtpd_tls_security_level=encrypt
-# -o smtpd_sasl_auth_enable=yes
-# -o smtpd_client_restrictions=permit_sasl_authenticated,reject
-# -o milter_macro_daemon_name=ORIGINATING
+submission inet n - - - - smtpd
+ -o syslog_name=postfix/submission
+ -o smtpd_tls_security_level=encrypt
+ -o smtpd_sasl_auth_enable=yes
+ -o smtpd_client_restrictions=permit_sasl_authenticated,reject
+ -o milter_macro_daemon_name=ORIGINATING
#smtps inet n - - - - smtpd
# -o syslog_name=postfix/smtps
# -o smtpd_tls_wrappermode=yes

Dovecot Since the above configuration changes made postfix store email in a different location and format than the default, dovecot has to be informed about these changes as well. This is done in /etc/dovecot/conf.d/10-mail.conf. The second configuration change enables postfix to authenticate users through dovecot in /etc/dovecot/conf.d/10-master.conf. For SSL one should look into /etc/dovecot/conf.d/10-ssl.conf and either adapt the parameters ssl_cert and ssl_key or store the correct certificate and private key in /etc/dovecot/dovecot.pem and /etc/dovecot/private/dovecot.pem, respectively. The dovecot-core package (which dovecot-imapd depends on) ships tons of documentation. The file /usr/share/doc/dovecot-core/dovecot/documentation.txt.gz gives an overview of what resources are available. The path /usr/share/doc/dovecot-core/dovecot/wiki contains a snapshot of the dovecot wiki at http://wiki2.dovecot.org/. The example configurations seem to be the same files as in /etc/, which are already well commented.

/etc/dovecot/conf.d/10-mail.conf The following diff changes the default email location in /var/mail to a maildir in ~/Mail as configured for postfix above.
@@ -27,7 +27,7 @@
#
# <doc/wiki/MailLocation.txt>
#
-mail_location = mbox:~/mail:INBOX=/var/mail/%u
+mail_location = maildir:~/Mail

# If you need to set multiple mailbox locations or want to change default
# namespace settings, you can do it by defining namespace sections.

/etc/dovecot/conf.d/10-master.conf And this enables the authentication socket for postfix:
@@ -93,9 +93,11 @@


# Postfix smtp-auth
- #unix_listener /var/spool/postfix/private/auth
- # mode = 0666
- #
+ unix_listener /var/spool/postfix/private/auth
+ mode = 0660
+ user = postfix
+ group = postfix
+

# Auth process is run as this user.
#user = $default_internal_user

Aliases Now email will automatically be put into the ~/Mail directory of the recipient. So a user has to be created for whom one wants to receive mail...
$ adduser josch
...and any aliases for it to be configured in /etc/aliases.
@@ -1,2 +1,4 @@
-# See man 5 aliases for format
-postmaster: root
+root: josch
+postmaster: josch
+hostmaster: josch
+webmaster: josch
After editing /etc/aliases, the command
$ newaliases
has to be run. More can be read in the aliases(5) man page.

Finishing up Everything is done and now postfix and dovecot have to be informed about the changes. There are many ways to do that. Either restart the services, reboot or just do:
$ postfix reload
$ doveadm reload
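As a quick end-to-end test (my addition, not from the original article), the new submission service can be exercised with Python's smtplib; the hostname, user and password below are placeholders:
import smtplib
from email.mime.text import MIMEText

msg = MIMEText("test message through the new submission service")
msg["Subject"] = "postfix/dovecot test"
msg["From"] = "josch@example.com"
msg["To"] = "josch@example.com"

s = smtplib.SMTP("mail.example.com", 587)  # submission port enabled in master.cf
s.starttls()                               # matches smtpd_tls_security_level=encrypt
s.login("josch", "secret")                 # verified by dovecot via the auth socket
s.send_message(msg)                        # Python 3; use s.sendmail(...) on Python 2
s.quit()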
Have fun!

27 November 2014

Jonathan Dowland: PGP transition statement

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512,SHA1
I'm transitioning from my old, 1024-bit DSA PGP key, FD35 0B0A C6DD 5D91 DB7A 83D1 168B 4E71 7032 F238, to my newer, 4096-bit RSA key, E037 CB2A 1A00 61B9 4336 3C8B 0907 4096 06AA AAAA. If you have signed my old key, I'd be very grateful if you would consider signing my new key. (Thanks in advance!) This is long overdue! I've had 06AAAAAA since 2009, but it took me a while to get enough signatures on it for me to consider a transition. I still have far more signatures on my older key, owing to attending more conferences when I was using it than since I switched. This statement, available in plaintext at http://jmtd.net/log/pgp_transition/statement.txt, has been signed with both keys. I've marked my old key as expiring in around 72 days time, which coincides with my change of job, and will be just short of ten years since I generated it.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
iQIcBAEBCgAGBQJUdysuAAoJEAkHQJYGqqqquQ0QALVGcn9Wbasg0oh8JeE8rN7h
k7FeZjmGk+ZwfXwo4Eq4pjxKp+TN+r4xBCFSHDRmMaAm9q7aWF/RqkdNiEtioQfJ
SmfzLsoEnSBmS9dr1LvAoxIlUzs/9Lt8EY2tB4NQne3QIu3VyRBjQDvqff6KJgkV
849+GsrJezkg/2VFZkZp1N9y0YwrmezV/1j0N6zR8Y20LlF4YuD3egUJYvbrQ1pA
krJwhiHskXRinqviYMg4CuTZZ3m2wc4fM6uiY5t9taqK90KFgmg73KvW6Rx2L+XP
BUzTlSYHnK+qI23Cng//DthFww2Wf9RofkfRgXk0zAe4+DcoMZzALKtZQA6GrT46
wzsmu459p5nUwwa6BzUvqBzyVtzbtHN5xaFwbjHCPlemsFlH/8LHackJLsBvr2Gp
q292huYu/HNo+Q6OIslFX6rc5AR3HKdbU2DxYMPAfI+CEtW1XwetK8ADSZc1v54C
+sc+Yb2kAleLX2xMIfS9JqvcKOP2Gbo7nTdRPnS2jZRWOv8JxVfFiSzHVODtlLfk
AmRvms5rQ2nPnB21yOd+DP2sACVKY0MS7/UwMh2hoQLR/bSu/jRh24n3WHDfQWtF
Jp4AD80ABFV2t/pSP1L70HlxwPmOzra8AVzCdXMOT/SdT387rSM9fvue4FY5goT+
/pResJSl9pAbBCtjOFJoiEYEARECAAYFAlR3Ky4ACgkQFotOcXAy8jiLawCgofsp
ggze/iSpfkyeL0vhXi6N/WMAoLQogFQY+VaQrVP3lGl1VVh4jw0W
=JBA4
-----END PGP SIGNATURE-----

25 November 2014

Erich Schubert: Installing Debian with sysvinit

First let me note that I am using systemd, so these things here are untested by me. See e.g. Petter's and Simon's blog entries on the same overall topic.
According to the Debian installer maintainers, the only accepted way to install Debian with sysvinit is to use preseeding. This can either be done at the installer boot prompt by manually typing the magic spell:
preseed/late_command="in-target apt-get install -y sysvinit-core"
or by using a preseeding file (which is a really nice feature I used for installing my Hadoop nodes) to do the same:
d-i preseed/late_command string in-target apt-get install -y sysvinit-core
If you are a sysadmin, using preseeding can save you a lot of typing. Put all your desired configuration into preseeding files, put them on a webserver (best with a short name resolvable by local DNS). Let's assume you have set up the DNS name d-i.example.com, and your DHCP is configured such that example.com is on the DNS search list. You can also add a vendor extension to DHCP to serve a full URL. Manually enabling preseeding then means adding
auto url=d-i
to the installer boot command line (d-i is the hostname I suggested setting up in your DNS before, and the full URL would then be http://d-i.example.com/d-i/jessie/./preseed.cfg). Preseeding is well documented in Appendix B of the installer manual, but it will nevertheless require a number of iterations to get everything working as desired for a fully automatic install like the one I used for my Hadoop nodes.

There might be an easier option.
I have filed a wishlist bug suggesting to use the tasksel mechanism to allow the user to choose sysvinit at installation time. However, it got turned down by the Debian installer maintainers quite rudely with a "No." - essentially a "shut the f... up and go away", which is in my opinion an inappropriate way to discard a reasonable user wishlist request.
Since I don't intend to use sysvinit anymore, I will not be pursuing this option further. It is, as far as I can tell, still untested. If it works, it might be the least-effort, least-invasive option to allow the installation of a sysvinit Jessie (except for the above command line magic).
If you have an interest in sysvinit, you (because I don't use sysvinit) should now test whether this approach works.
  1. Get the patch proposed to add a task-sysvinit package.
  2. Build an installer CD with this tasksel (maybe this documentation is helpful for this step).
  3. Test whether the patch works. Report results to above bug report, so that others interested in sysvinit can find them easily.
  4. Find and fix bugs if it didn't work. Repeat.
  5. Publish the modified ("forked") installer, and get user feedback.
If you are then still up for a fight, you can try to convince the maintainers (or go the nasty way, and ask the CTTE for their opinion, to start another flamewar and make more maintainers give up) that this option should be added to the mainline installer. And hurry up, or you may at best get this into a Jessie reloaded, 8.1 - chances are that the release manager will not accept such patches this late anymore. The sysvinit supporters should have investigated this option much, much earlier instead of losing time on the GR.
Again, I won't be doing this job for you. I'm happy with systemd. But patches and proofs of concept are what make open source work, not GRs and MikeeUSA's crap videos spammed to the LKML...
(And yes, I am quite annoyed by the way the Debian installer maintainers handled the bug report. This is not how open-source collaboration is supposed to work. I tried to file a proper wishlist bug report, suggesting a solution that I could not find discussed anywhere before, and got back just this "No. Shut up." answer. I'm not sure if I will ever report a bug against debian-installer again, if this is the way they handle bug reports ...)
I do care about our users, though. If you look at popcon "vote" results, we have 4179 votes for sysvinit-core and 16918 votes for systemd-sysv (graph) indicating that of those already testing jessie and beyond - neglecting 65 upstart votes, and assuming that there is no bias to not-upgrade if you prefer sysvinit - about 20% appear to prefer sysvinit (in fact, they may even have manually switched back to sysvinit after being upgraded to systemd unintentionally?). These are users that we should listen to, and that we should consider adding an installer option for, too.

19 November 2014

Jonathan Dowland: Moving to Red Hat

I'm changing jobs! From February 2015, I will be joining Red Hat as a Senior Software Engineer. I'll be based in Newcastle and working with the Middleware team. I'm going to be working with virtualisation, containers and Docker in particular. I know a few of the folks in the Newcastle office already, thanks to their relationship with the School of Computing Science, and I'm very excited to work with them, as well as the wider company. It's also going to be great to be contributing to the free software community as part of my day job. This October marked my tenth year working for Newcastle University. I've had a great time, learned a huge amount, and made some great friends. It's going to be sad to leave, especially the School of Computing Science where I've spent the last four years, but it's the right time to move on. It's an area that I've been personally interested in for a long time and I'm very excited to be trying something new.

16 November 2014

Vincent Bernat: Intel Wireless 7260 as an access point

My home router acts as an access point with an Intel Dual-Band Wireless-AC 7260 wireless card. This card supports 802.11ac (on the 5 GHz band) and 802.11n (on both the 5 GHz and 2.4 GHz bands). While this seems a very decent card to use in managed mode, it is not really a great choice for an access point.
$ lspci -k -nn -d 8086:08b1
03:00.0 Network controller [0280]: Intel Corporation Wireless 7260 [8086:08b1] (rev 73)
        Subsystem: Intel Corporation Dual Band Wireless-AC 7260 [8086:4070]
        Kernel driver in use: iwlwifi
TL;DR: Use an Atheros card instead.

Limitations First, the card is said to be dual-band, but you can only use one band at a time because there is only one radio. Almost all wireless cards have this limitation. If you want to use both the 2.4 GHz band and the less crowded 5 GHz band, two cards are usually needed.

5 GHz band There is no support for setting up an access point on the 5 GHz band. The firmware doesn't allow it. This can be checked with iw:
$ iw reg get
country CH: DFS-ETSI
        (2402 - 2482 @ 40), (N/A, 20), (N/A)
        (5170 - 5250 @ 80), (N/A, 20), (N/A)
        (5250 - 5330 @ 80), (N/A, 20), (0 ms), DFS
        (5490 - 5710 @ 80), (N/A, 27), (0 ms), DFS
        (57240 - 65880 @ 2160), (N/A, 40), (N/A), NO-OUTDOOR
$ iw list
Wiphy phy0
[...]
        Band 2:
                Capabilities: 0x11e2
                        HT20/HT40
                        Static SM Power Save
                        RX HT20 SGI
                        RX HT40 SGI
                        TX STBC
                        RX STBC 1-stream
                        Max AMSDU length: 3839 bytes
                        DSSS/CCK HT40
                Frequencies:
                        * 5180 MHz [36] (20.0 dBm) (no IR)
                        * 5200 MHz [40] (20.0 dBm) (no IR)
                        * 5220 MHz [44] (20.0 dBm) (no IR)
                        * 5240 MHz [48] (20.0 dBm) (no IR)
                        * 5260 MHz [52] (20.0 dBm) (no IR, radar detection)
                          DFS state: usable (for 192 sec)
                          DFS CAC time: 60000 ms
                        * 5280 MHz [56] (20.0 dBm) (no IR, radar detection)
                          DFS state: usable (for 192 sec)
                          DFS CAC time: 60000 ms
[...]
While the 5 GHz band is allowed by the CRDA, all frequencies are marked with no IR. Here is the explanation for this flag:
The no-ir flag exists to allow regulatory domain definitions to disallow a device from initiating radiation of any kind and that includes using beacons, so for example AP/IBSS/Mesh/GO interfaces would not be able to initiate communication on these channels unless the channel does not have this flag.

Multiple SSIDs This card can only advertise one SSID. Managing several of them is useful to set up distinct wireless networks, like a public access (routed to Tor), a guest access and a private access. iw can confirm this:
$ iw list
        valid interface combinations:
                 * #  managed   <= 1, #  AP, P2P-client, P2P-GO   <= 1, #  P2P-device   <= 1,
                   total <= 3, #channels <= 1
Here is the output of an Atheros card able to manage 8 SSIDs:
$ iw list
        valid interface combinations:
                 * #  managed, WDS, P2P-client   <= 2048, #  IBSS, AP, mesg point, P2P-GO   <= 8,
                   total <= 2048, #channels <= 1

Configuration as an access point Except for those two limitations, the card works fine as an access point. Here is the configuration that I use for hostapd:
interface=wlan-guest
driver=nl80211
# Radio
ssid=XXXXXXXXX
hw_mode=g
channel=11
# 802.11n
wmm_enabled=1
ieee80211n=1
ht_capab=[HT40-][SHORT-GI-20][SHORT-GI-40][DSSS_CCK-40]
# WPA
auth_algs=1
wpa=2
wpa_passphrase=XXXXXXXXXXXXXXX
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP
Because of the use of channel 11, only the 802.11n HT40- rates can be enabled. Look at the Wikipedia page for 802.11n to check whether you can use HT40-, HT40+ or both.
