Search Results: "rrs"

26 December 2014

Ritesh Raj Sarraf: Linux Containers and Productization

Linux has improved many, many things over the last couple of years. Of the many improvements, the one I've started leveraging the most today is Control Groups. In the past, when there was a need to build a prototype for a solution, we needed hardware. Then came the virtualization richness to Linux. It came in 2 major flavors: KVM (Full Virtualization) and Xen (Para Virtualization). Over the years, the difference between para and full virtualization, for both implementations, has become almost none. KVM now has support for para-virtualization, with para-virtualized drivers for most resource intensive tasks, like network and I/O. Similarly, Xen has Full Virtualization support with the help of QEMU. But if you had to build a prototype implementation comprising a multi node setup, virtualization could still be resource hungry. Otherwise too, if your focus was an application (say, a web framework), virtualization was overkill. All thanks to Linux Containers, prototyping application based solutions is now a breeze in Linux. The LXC project is very well designed, and well balanced, in terms of features (as compared to the recently introduced Docker implementation). From an application's point of view, Linux containers provide virtualization of namespaces, network and resources, thus fulfilling more than 90% of your application's needs. For some apps, where a dependency on the kernel is needed, Linux containers will not serve the need. Beyond the defaults provided by the distribution, I like to create a base container with my customizations, as a template. This allows me to quickly create environments, without too much housekeeping to do for the initial setup. My base config looks like:
rrs@learner:~$ sudo cat /var/lib/lxc/deb-template/config
[sudo] password for rrs:
# Template used to create this container: /usr/share/lxc/templates/lxc-debian
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)
# CPU
lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.cpu.shares = 1234
# Mem
lxc.cgroup.memory.limit_in_bytes = 2000M
lxc.cgroup.memory.soft_limit_in_bytes = 1500M
# Network
lxc.network.type = veth
lxc.network.hwaddr = 00:16:3e:0c:c5:d4
lxc.network.flags = up
lxc.network.link = lxcbr0
# Root file system
lxc.rootfs = /var/lib/lxc/deb-template/rootfs
# Common configuration
lxc.include = /usr/share/lxc/config/debian.common.conf
# Container specific configuration
lxc.mount = /var/lib/lxc/deb-template/fstab
lxc.utsname = deb-template
lxc.arch = amd64
# For apt
lxc.mount.entry = /var/cache/apt/archives var/cache/apt/archives none defaults,bind 0 0
rrs@learner:~$
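Once such a template exists, spinning up a fresh environment from it is a one-liner. A hedged sketch of that step (container names are examples; lxc-clone as shipped with LXC 1.0 was later renamed lxc-copy):
# clone the customized template into a new container and start it
sudo lxc-clone -o deb-template -n websrv-proto
sudo lxc-start -n websrv-proto -d
sudo lxc-attach -n websrv-proto    # drop into a shell inside the new container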
Some of the important settings to have in the template are the mount entry pointing to your local apt cache, and the CPU and memory limits. If there was one feature request to ask the LXC developers, I'd ask them to provide a util-lxc tools suite. Currently, to know the memory (soft/hard) allocation for the container, one needs to do the following:
rrs@learner:/sys/fs/cgroup/memory/lxc/deb-template$ cat memory.soft_limit_in_bytes memory.limit_in_bytes
1572864000
2097152000
rrs@learner:/sys/fs/cgroup/memory/lxc/deb-template$ bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
1572864000/1024/1024
1500
quit
rrs@learner:/sys/fs/cgroup/memory/lxc/deb-template$
Tools like lxc-cpuinfo or lxc-free would be much better (a rough sketch of what such a helper could look like follows below). Finally, there's been a lot of buzz about Docker. Docker is an alternate product offering, like LXC, for Linux Containers. From what I have briefly looked at, Docker doesn't seem to provide any ground-breaking new interface beyond what is already possible with LXC. It does take all the tidbit tools and present you with a unified docker interface. But other than that, I didn't find it very appealing. And the assumption that the profiles should be pulled off the internet (GitHub?) is not very exciting. I am hoping they do have other options, where dependence on the network is not really required.
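For illustration, here is a minimal sketch of what such an lxc-free-style helper could look like. It is a hypothetical script, not part of LXC, and assumes the cgroup v1 memory hierarchy under /sys/fs/cgroup/memory/lxc/ shown in the session above; the container name is passed as the first argument.
#!/bin/sh
# lxc-free (hypothetical): print the memory limits and usage of an LXC container in MiB.
# Assumes cgroup v1, mounted at /sys/fs/cgroup/memory, with containers grouped under lxc/.
CONTAINER="$1"
CGDIR="/sys/fs/cgroup/memory/lxc/$CONTAINER"
[ -d "$CGDIR" ] || { echo "No such container cgroup: $CGDIR" >&2; exit 1; }
soft=$(cat "$CGDIR/memory.soft_limit_in_bytes")
hard=$(cat "$CGDIR/memory.limit_in_bytes")
used=$(cat "$CGDIR/memory.usage_in_bytes")
printf "soft limit: %6d MiB\n" $((soft / 1024 / 1024))
printf "hard limit: %6d MiB\n" $((hard / 1024 / 1024))
printf "used:       %6d MiB\n" $((used / 1024 / 1024))
Run as ./lxc-free deb-template, it would print the same 1500 / 2000 MiB figures that were computed by hand with bc above.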

Categories:

Keywords:

Ritesh Raj Sarraf: Linux Desktop in 2014

We are almost at the end of 2014. While 2014 has been a year with many mixed experiences, I think it does warrant one blog entry ;-) Recently, I've again started spending more time on Linux products / solutions, than spending time focused on a specific subsystem. This change has been good. It has allowed me to recap all the advancements that have happened in the Linux world, umm... in the last 5 years. Once upon a time, the Linux kernel sucked on the Desktop. It led to many desktop improvement related initiatives. Many were accepted into the kernel, while others remain out-of-tree still as of today. Over the years, there are many people that advocate for such out-of-tree features, for example the -ck patchset, claiming it has better performance. Most of the time, these are patches not carried by your distribution vendor, which leads you to alternate sources, if you want to try. Having some spare time, I tried the Alternative Kernel project. It is nothing but a bunch of patchsets, on top of the stock kernel. After trying it out, I must say that these patchsets are out-of-tree, for good. I could hardly make out any performance gain. But I did notice a considerable increase in the power consumption. On my stock Debian kernel, the power consumption lies around 15-18 W. That increased to 20+ W on the alternate kernels. I guess most advocates for the out-of-tree patchsets only measure the 1-2% performance gain, whereas they completely neglect the fact that that kernel sleeps less often. But back to the generic Linux kernel performance problem...... Recently, in the last 2 years, the performance suckiness of the Linux kernel is hardly noticed. So what changed? The last couple of years have seen a rise in high capacity RAM, at affordable consumer prices. 8 - 16 GiB of RAM is common on laptops these days. If you go and look at the sucky bug report linked above, it is marked as closed, justified as Working as Designed. The core problem with the bug reported has to do with slow media. The Linux scheduler is (in?)efficient. It works hard to give you the best throughput and performance (for server workloads). I/O threads are a high priority task in the Linux kernel. Now map this scene to the typical Linux desktop. If you end up doing too much buffered I/O, thus exhausting all your available cache, and trigger paging, you are in for some sweet experience. Given that the kernel highly prioritizes I/O tasks, and if your underlying persistent storage device is slow (which is common if you have an external USB disk, or even an internal rotating magnetic disk), you end up blocking all your CPU cycles against the slow media. Which further leads to no available CPU cycles for your other desktop tasks. Hence, when you do I/O at such a level, you find your desktop go terribly sluggish. It is not that your CPU is slow or incapable. It is just that all your CPU slices are blocked, waiting for your write() to report a completion. So what exactly changed that we don't notice that problem any more?
  1. RAM - The increase in RAM has allowed more I/O to be accommodated in cache. The best way to see this in action is to copy a large file, something almost equivalent to the amount of RAM you have, but make sure it is less than the overall amount. For example, if you have 4 GiB of RAM, try copying a file of size 3.5 GiB in your graphical file manager. And at the same time, on the terminal, keep triggering the sync command. Check how long it takes for the sync to complete (a rough sketch of this experiment appears right after this list). By being able to cache a large amount of data, the Linux kernel has been better at improving the overall performance in the eyes of the user.
  2. File System - But RAM is not alone. The file system has played a very important role too. Earlier, with the ext3 file system, we had a commit interval of (5?) 30 seconds. That led to the above mentioned sync equivalent getting triggered every 30 secs. It was a safety measure to ensure that, at worst, you lose 30 secs worth of data. But it did hinder performance. With ext4 came delayed allocation. Delayed allocation allowed the write() to return immediately while the data sat in cache, and deferred the actual write-out to the file system. This allowed the allocator to find the best contiguous slot for the data to be written, thus improving file system performance. It also brought corruption for some of the apps. :-)
  3. Solid State Drives - The file system and RAM alone aren't the sole factors that led to the drastic improvement in the overall experience of the Linux desktop. If you read through the bug report linked in this article, you'll find the root cause to be slow persistent storage devices. Could the scheduler have been improved (like Windows) to not be so punishing on the Linux desktop? Maybe, yes. But that was a decision for the kernel devs, and they believed (and believe) in keeping those numbers to a minimum. Thus for I/O, as of today, you have 3 schedulers and for CPU, just 1. What dramatically improved the overall Linux Desktop performance was the general availability of solid state devices. These devices are really fast, which in effect made the write() calls return immediately, and did not block the CPU.
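As mentioned in point 1 above, the cache-exhaustion effect is easy to reproduce from a terminal. A rough sketch of that experiment (file paths and sizes are examples; pick a size slightly below your installed RAM):
# create a large test file, roughly the size of your RAM minus some headroom (~3.5 GiB here)
dd if=/dev/zero of=/tmp/bigfile bs=1M count=3584
# copy it in the background, much like a graphical file manager would
cp /tmp/bigfile /tmp/bigfile.copy &
# repeatedly time how long flushing the dirty page cache to disk takes
time sync
# optional: watch how much dirty memory is waiting to be written back
grep -E 'Dirty|Writeback' /proc/meminfo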
So, it was the advancement in both hardware and software that led to better overall desktop performance. Does the above mentioned bug still exist? Yes. It's just that it is much harder to trigger it now. You'll have to ensure that you max out your cache and trigger paging, and then try to ask for some CPU cycles. But it wasn't that we didn't use Linux on the desktop / laptop back then. It sure did suck more than, say, Windows. But hey, sometimes we have to eat our own dog food. Even then, there sure were some efforts to overcome the then limitations. The first and obvious one is the out-of-tree patchset. But there were also some other efforts to improve the situation. The first such effort that I can recollect was ulatency. With Linux adding support for Control Groups, there were multiple avenues open on how to tackle and tame the resource starvation problem. The crux of the problem was that Linux gave way too much priority to the I/O tasks. I still wish Linux had a profile mechanism, wherein, on the kernel command line, we could specify what profile Linux should boot into. Anyways, with ulatency, we saw improvements in the Linux Desktop experience. ulatency had in-built policies to whitelist / blacklist a set of profiles. For example, KDE was a profile. Thus, ulatency would club all KDE processes into a group and give that group a higher precedence to ensure that it had its fair share of CPU cycles. Today, at almost the end of 2014, there are many more consumers of Linux's control groups. Prominent names would be LXC and systemd. ulatency has hardly seen much development in the last year. Probably it is time for systemd to take over. systemd is expected to bring lots of features to the Linux world, thus bridging the (many) gaps Linux has had on the desktop. It makes extensive use of Control Groups for a variety of (good) reasons, which has led it to be a Linux-only product. I think it should never have marketed itself as the init daemon. It fits better when called the System Management Daemon. The path for the Linux Desktop looks much brighter in 2015 and beyond, thanks to all the advancements that have happened so far. The other important players who should be thanked are the Mobile and Low Latency products (Android, ChromeBook), whose engagement to productize Linux has led to better features overall.

Categories:

Keywords:

17 December 2014

Keith Packard: MST-monitors

Multi-Stream Transport 4k Monitors and X I'm sure you've seen a 4k monitor on a friend's desk running Mac OS X or Windows and are all ready to go get one so that you can use it under Linux. Once you've managed to acquire one, I'm afraid you'll discover that when you plug it in, you're limited to 30Hz refresh rates at the full size, unless you're running a kernel that is version 3.17 or later. And then... Good Grief! What Is My Computer Doing! Ok, so now you're running version 3.17 and when X starts up, it's like you're using a gigantic version of Google Cardboard. Two copies of a very tall, but very narrow screen greet you. Welcome to MST island. In order to drive these giant new panels at full speed, there isn't enough bandwidth in the display hardware to individually paint each pixel once during each frame. So, like all good hardware engineers, they invented a clever hack. This clever hack paints the screen in parallel. I'm assuming that they've got two bits of display hardware, each one hooked up to half of the monitor. Now, each paints only half of the pixels, avoiding costly redesign of expensive silicon, at least that's my surmise. In the olden days, if you did this, you'd end up running two monitor cables to your computer, and potentially even having two video cards. Today, thanks to the magic of Display Port Multi-Stream Transport, we don't need all of that; instead, MST allows us to pack multiple cables-worth of data into a single cable. I doubt the inventors of MST intended it to be used to split a single LCD panel into multiple "monitors", but hardware engineers are clever folk and are more than capable of abusing standards like this when it serves to save a buck. Turning Two Back Into One We've got lots of APIs that expose monitor information in the system, and across which we might be able to wave our magic abstraction wand to fix this:
  1. The KMS API. This is the kernel interface which is used by all graphics stuff, including user-space applications and the frame buffer console. Solve the problem here and it works everywhere automatically.
  2. The libdrm API. This is just the KMS ioctls wrapped in a simple C library. Fixing things here wouldn't make fbcons work, but would at least get all of the window systems working.
  3. Every 2D X driver. (Yeah, we're trying to replace all of these with the one true X driver). Fixing the problem here would mean that all X desktops would work. However, that's a lot of code to hack, so we'll skip this.
  4. The X server RandR code. More plausible than fixing every driver, this also makes X desktops work.
  5. The RandR library. If not in the X server itself, how about over in user space in the RandR protocol library? Well, the problem here is that we've now got two of them (Xlib and xcb), and the xcb one is auto-generated from the protocol descriptions. Not plausible.
  6. The Xinerama code in the X server. Xinerama is how we did multi-monitor stuff before RandR existed. These days, RandR provides Xinerama emulation, but we've been telling people to switch to RandR directly.
  7. Some new API. Awesome. Ok, so if we haven't fixed this in any existing API we control (kernel/libdrm/X.org), then we effectively dump the problem into the laps of the desktop and application developers. Given how long it's taken them to adopt current RandR stuff, providing yet another complication in their lives won't make them very happy.
All Our APIs Suck Dave Airlie merged MST support into the kernel for version 3.17 in the simplest possible fashion -- pushing the problem out to user space. I was initially vaguely tempted to go poke at it and try to fix things there, but he eventually convinced me that it just wasn't feasible. It turns out that all of our fancy new modesetting APIs describe the hardware in more detail than any application actually cares about. In particular, we expose a huge array of hardware objects: Each of these objects exposes intimate details about the underlying hardware -- which of them can work together, and which cannot; what kinds of limits are there on data rates and formats; and pixel-level timing details about blanking periods and refresh rates. To make things work, some piece of code needs to actually hook things up, and explain to the user why the configuration they want just isn't possible. The sticking point we reached was that when an MST monitor gets plugged in, it needs two CRTCs to drive it. If one of those is already in use by some other output, there's just no way you can steal it for MST mode. Another problem -- we expose EDID data and actual video mode timings. Our MST monitor has two EDID blocks, one for each half. They happen to describe how they're related, and how you should configure them, but if we want to hide that from the application, we'll have to pull those EDID blocks apart and construct a new one. The same goes for video modes; we'll have to construct ones for MST mode. Every single one of our APIs exposes enough of this information to be dangerous. Every one, except Xinerama. All it talks about is a list of rectangles, each of which represents a logical view into the desktop. Did I mention we've been encouraging people to stop using this? And that some of them listened to us? Foolishly? Dave's Tiling Property Dave hacked up the X server to parse the EDID strings and communicate the layout information to clients through an output property. Then he hacked up the gnome code to parse that property and build a RandR configuration that would work. Then, he changed the RandR Xinerama code to also parse the TILE properties and to fix up the data seen by applications from that. This works well enough to get a desktop running correctly, assuming that desktop uses Xinerama to fetch this data. Alas, gtk has been "fixed" to use RandR if you have RandR version 1.3 or later. No biscuit for us today. Adding RandR Monitors RandR doesn't have enough data types yet, so I decided that what we wanted to do was create another one; maybe that would solve this problem. Ok, so what clients mostly want to know is which bits of the screen are going to be stuck together and should be treated as a single unit. With current RandR, that's some of the information included in a CRTC. You pull the pixel size out of the associated mode, physical size out of the associated outputs and the position from the CRTC itself. Most of that information is available through Xinerama too; it's just missing physical sizes and any kind of labeling to help the user understand which monitor you're talking about. The other problem with Xinerama is that it cannot be configured by clients; the existing RandR implementation constructs the Xinerama data directly from the RandR CRTC settings. Dave's Tiling property changes edit that data to reflect the union of associated monitors as a single Xinerama rectangle.
Allowing the Xinerama data to be configured by clients would fix our 4k MST monitor problem as well as solving the longstanding video wall, WiDi and VNC troubles. All of those want to create logical monitor areas within the screen under client control. What I've done is create a new RandR datatype, the "Monitor", which defines a rectangular region of the screen. Each monitor has the following data: There are three requests to define, delete and list monitors. And that's it. Now, we want the list of monitors to completely describe the environment, and yet we don't want existing tools to break completely. So, we need some way to automatically construct monitors from the existing RandR state while still letting the user override portions of it as needed to explain virtual or tiled outputs. So, what I did was to let the client specify a list of outputs for each monitor. All of the CRTCs which aren't associated with an output in any client-defined monitor are then added to the list of monitors reported back to clients. That means that clients need only define monitors for things they understand, and they can leave the other bits alone and the server will do something sensible. The second tricky bit is that if you specify an empty rectangle at 0,0 for the pixel geometry, then the server will automatically compute the geometry using the list of outputs provided. That means that if any of those outputs get disabled or reconfigured, the Monitor associated with them will appear to change as well. Current Status Gtk+ has been switched to use RandR for RandR versions 1.3 or later. Locally, I hacked libXrandr to override the RandR version through an environment variable, set that to 1.2 and Gtk+ happily reverts back to Xinerama and things work fine. I suspect the plan here will be to have it use the new Monitors when present, as those provide the same info that it was pulling out of RandR's CRTCs. KDE appears to still use Xinerama data for this, so it "just works". Where's the code As usual, all of the code for this is in a collection of git repositories in my home directory on fd.o:
git://people.freedesktop.org/~keithp/randrproto master
git://people.freedesktop.org/~keithp/libXrandr master
git://people.freedesktop.org/~keithp/xrandr master
git://people.freedesktop.org/~keithp/xserver randr-monitors
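For a feel of how this looks from the command line, here is a hedged usage sketch. The option names (--listmonitors, --setmonitor, --delmonitor) and the geometry syntax are what the xrandr branch above is expected to expose for the new requests, and the output names and physical sizes below are made up; treat the whole thing as illustrative rather than a stable interface.
# list the monitors the server currently reports (server-defined and client-defined)
xrandr --listmonitors
# declare one logical monitor spanning the two halves of an MST panel
# geometry is width/width-mm x height/height-mm + x + y, followed by the output list
xrandr --setmonitor 4k-panel 3840/600x2160/340+0+0 DP-1-1,DP-1-2
# remove the client-defined monitor again
xrandr --delmonitor 4k-panel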
RandR protocol changes Here are the new sections added to randrproto.txt
                   
1.5. Introduction to version 1.5 of the extension
Version 1.5 adds monitors
   A 'Monitor' is a rectangular subset of the screen which represents
   a coherent collection of pixels presented to the user.
   Each Monitor is associated with a list of outputs (which may be
   empty).
   When clients define monitors, the associated outputs are removed from
   existing Monitors. If removing the output causes the list for that
   monitor to become empty, that monitor will be deleted.
   For active CRTCs that have no output associated with any
   client-defined Monitor, one server-defined monitor will
   automatically be defined for the first Output associated with them.
   When defining a monitor, setting the geometry to all zeros will
   cause that monitor to dynamically track the bounding box of the
   active outputs associated with them.
This new object separates the physical configuration of the hardware
from the logical subsets of the screen that applications should
consider as single viewable areas.
1.5.1. Relationship between Monitors and Xinerama
Xinerama's information now comes from the Monitors instead of directly
from the CRTCs. The Monitor marked as Primary will be listed first.
                   
5.6. Protocol Types added in version 1.5 of the extension
MONITORINFO   name: ATOM
          primary: BOOL
          automatic: BOOL
          x: INT16
          y: INT16
          width: CARD16
          height: CARD16
          width-in-millimeters: CARD32
          height-in-millimeters: CARD32
          outputs: LISTofOUTPUT  
                   
7.5. Extension Requests added in version 1.5 of the extension.
 
    RRGetMonitors
    window : WINDOW
      
    timestamp: TIMESTAMP
    monitors: LISTofMONITORINFO
 
    Errors: Window
    Returns the list of Monitors for the screen containing
    'window'.
    'timestamp' indicates the server time when the list of
    monitors last changed.
 
    RRSetMonitor
    window : WINDOW
    info: MONITORINFO
 
    Errors: Window, Output, Atom, Value
    Create a new monitor. Any existing Monitor of the same name is deleted.
    'name' must be a valid atom or an Atom error results.
    'name' must not match the name of any Output on the screen, or
    a Value error results.
    If 'info.outputs' is non-empty, and if x, y, width, height are all
    zero, then the Monitor geometry will be dynamically defined to
    be the bounding box of the geometry of the active CRTCs
    associated with them.
    If 'name' matches an existing Monitor on the screen, the
    existing one will be deleted as if RRDeleteMonitor were called.
    For each output in 'info.outputs', each one is removed from all
    pre-existing Monitors. If removing the output causes the list of
    outputs for that Monitor to become empty, then that Monitor will
    be deleted as if RRDeleteMonitor were called.
    Only one monitor per screen may be primary. If 'info.primary'
    is true, then the primary value will be set to false on all
    other monitors on the screen.
    RRSetMonitor generates a ConfigureNotify event on the root
    window of the screen.
 
    RRDeleteMonitor
    window : WINDOW
    name: ATOM
 
    Errors: Window, Atom, Value
    Deletes the named Monitor.
    'name' must be a valid atom or an Atom error results.
    'name' must match the name of a Monitor on the screen, or a
    Value error results.
    RRDeleteMonitor generates a ConfigureNotify event on the root
    window of the screen.
                   

8 November 2014

Ingo Juergensmann: Bind9 vs. PowerDNS - part 2

Attachment: dnssec.sh.txt (3.55 KB)
Two weeks ago I wrote about implementing DNSSEC with Bind9 or PowerDNS and asked for opinions, because Bind9 appeared to me to be too complex to set up with regular key signing and such, and PowerDNS seemed to me to be nice and easy, but some kind of black box where I don't know what's happening. I think I've now found the best and most suitable way for me to deal with DNSSEC. Or in short words: Bind9 won! It won because of its inline-signing config option that you can use in Bind 9.9, which happens to be in backports. Another tip I can give due to my findings on the web: if you plan to implement DNSSEC with Bind9, do NOT! search for "bind dnssec" on the web. This will only bring up old HowTos and manuals which leave you with the burden of manually updating your keys. Just add the magic word "inline-signing" to your search phrase and you'll find proper results like the one from Michael McNally on a subpage of ISC.org: In-line Signing With NSEC3 in BIND 9.9+ -- A Walk-through. It's a fairly good starting point, but still left me with several manual steps to do to get a DNSSEC-signed zone. I'm quite a lazy guy when it comes down to manual steps that need to be executed repeatedly, as many others in IT are as well, I think. So I wrote some sort of small wrapper script to do the necessary steps of creating the keys, adding the necessary config options to your named.conf.local, enabling nsec3params, adding the DS records to your zone file and displaying the DNSKEY to you, so that you just need to upload it to your registrar. One problem was still open: when doing auto-signing/inline-signing with Bind9, you are left with your plain text zone file, whereas your signed zone file will keep increasing the serial with each key rollover. When changing your plain text zone file by adding, changing or removing RRs of that domain, you'll be left with the manual task of finding out what your actual serial currently is, because it's not your serial +1 from your plain text zone file anymore. This is of course an awkward part I wanted to get rid of. And therefore my script includes an option to edit zone files with your favorite editor, increase the serial automatically by determining the currently highest number, either on disk or in DNS, and raising this serial by 1. Finally the zone is automatically reloaded by rndc. That way I now have the same comfort as in PowerDNS with Bind9, but also know what's going on, because it's not a black box anymore. Me happy. :-) P.S.: I don't know whether this script is of interest to other users, because it relies heavily on my own setting, e.g. paths and such. But if there's interest, just ask... P.P.S.: Well, I think it's better when you can decide yourself if my script is of interest to you... please find it attached...
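For reference, a compressed sketch of the inline-signing setup described above. Zone name, paths and key parameters are examples only (the NSEC3 salt below is just a placeholder); my wrapper script essentially automates these steps:
# 1. create a ZSK and a KSK for the zone (key files end up in /etc/bind/keys, an example path)
dnssec-keygen -K /etc/bind/keys -a RSASHA256 -b 1024 -n ZONE example.org
dnssec-keygen -K /etc/bind/keys -a RSASHA256 -b 2048 -n ZONE -f KSK example.org
# 2. let bind 9.9 sign and maintain the zone on its own, in named.conf.local:
#    zone "example.org" {
#        type master;
#        file "/etc/bind/zones/example.org";
#        key-directory "/etc/bind/keys";
#        auto-dnssec maintain;
#        inline-signing yes;
#    };
# 3. load the zone and switch it to NSEC3 (parameters: algorithm, opt-out, iterations, salt)
rndc reload example.org
rndc signing -nsec3param 1 0 10 8d42bf20 example.org
# 4. derive the DS record to hand to the registrar from the KSK's key file
dnssec-dsfromkey /etc/bind/keys/Kexample.org.+008+12345.key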
Category:

27 September 2014

Ritesh Raj Sarraf: Laptop Mode Tools 1.66

I am pleased to announce the release of Laptop Mode Tools at version 1.66. This release fixes an important bug in the way Laptop Mode Tools is invoked: now, when users disable it in the config file, the tool will actually be disabled. Thanks to bendlas@github for narrowing it down. The GUI configuration tool has been improved, thanks to Juan. And there is a new power saving module for users with ATI Radeon cards. Thanks to M. Ziebell for submitting the patch. Laptop Mode Tools development can be tracked @ GitHub


Categories:

Keywords:

23 September 2014

Dirk Eddelbuettel: RcppArmadillo 0.4.450.1.0

Continuing with his standard pace of approximately one new version per month, Conrad released a new minor release of Armadillo a few days ago. As before, I had created a GitHub-only pre-release which was tested against all eighty-seven (!!) CRAN dependents of our RcppArmadillo package and then uploaded RcppArmadillo 0.4.450.0 to CRAN. The CRAN maintainers pointed out that under the R-development release, a NOTE was issued concerning the C library's rand() call. This is a pretty new NOTE, but it means using the (sometimes poor quality) rand() generator is now a no-no. Now, Armadillo being as robustly engineered as it is offers a new random number generator based on C++11 as well as a fallback generator for those unfortunate enough to live with an older C++98 compiler. (I would like to note here that I find Conrad's continued support for both C++11, offering very useful modern language idioms, as well as the fallback code for continued deployment and usage by those constrained in their choice of compilers rather exemplary --- because contrary to what some people may claim, it is not a matter of one or the other. C++ always was, and continues to be, a multi-paradigm language which can be supported easily by several standards. But I digress...) In any event, one cannot argue with CRAN about their prescription of a C++98 compiler. So Conrad and I discussed this over email, and came up with a scheme where a user package (such as RcppArmadillo) can provide an alternate generator which Armadillo then deploys. I implemented a first solution which was then altered / reflected by Conrad in a revised version 4.450.1 of Armadillo. I packaged, and now uploaded, that version as RcppArmadillo 0.4.450.1.0 to both CRAN and Debian. Besides the RNG change already discussed, this release brings a few smaller changes from the Armadillo side. These are detailed below in the extract from the NEWS file. On the RcppArmadillo side, we now have support for pkgKitten which is both very exciting and likely the topic of another blog post with an example of creating an RcppArmadillo package that purrs. In the process, I overhauled and polished how new packages are created by RcppArmadillo.package.skeleton(). An upcoming blog post may provide an example.
Changes in RcppArmadillo version 0.4.450.1.0 (2014-09-21)
  • Upgraded to Armadillo release Version 4.450.1 (Spring Hill Fort)
    • faster handling of matrix transposes within compound expressions
    • expanded symmatu()/symmatl() to optionally disable taking the complex conjugate of elements
    • expanded sort_index() to handle complex vectors
    • expanded the gmm_diag class with functions to generate random samples
  • A new random-number implementation for Armadillo uses the RNG from R as a fallback (when C++11 is not selected so the C++11-based RNG is unavailable) which avoids using the older C++98-based std::rand
  • The RcppArmadillo.package.skeleton() function was updated to only set an "Imports:" for Rcpp, but not RcppArmadillo which (as a template library) needs only LinkingTo:
  • The RcppArmadillo.package.skeleton() function will now prefer pkgKitten::kitten() over package.skeleton() in order to create a working package which passes R CMD check.
  • The pkgKitten package is now a Suggests:
  • A manual page was added to provide documentation for the functions provided by the skeleton package.
  • A small update was made to the package manual page.
Courtesy of CRANberries, there is also a diffstat report for the most recent release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 September 2014

Ritesh Raj Sarraf: apt-offline 1.5

I am very pleased to announce the release of apt-offline, version 1.5. In version 1.4, the offline bug report functionality had to be dropped. In version 1.5, it is back again. apt-offline now uses the new Debian native BTS library. Thanks to its developers, this library is much more slim and neat. The only catch is that it depends on the SOAPpy library, which currently is not stock in Python. If you run apt-offline on Debian, you may not have to worry, as I will add a Recommends on that package. For users using it on Microsoft Windows, please ensure that you have the SOAPpy library installed. It is available on pypi. The old bundled magic library has been replaced with the version of the python-magic library that Debian ships. This library is derived from the file package and is portable on almost all Unixes. For Debian users, there will be a Recommends on it too. There were also a bunch of old, outstanding, and annoying bugs that have been fixed in this release. For a full list of changes, please refer to the git logs. With this release, apt-offline should be in good shape for the Jessie release. apt-offline is available on Alioth @ https://alioth.debian.org/projects/apt-offline/
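For the Windows (or other non-Debian) case, installing the dependency is the usual pypi step; a minimal sketch, assuming a working Python 2 with pip available:
pip install SOAPpy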


Categories:

Keywords:

31 August 2014

Ritesh Raj Sarraf: apt-offline 1.4

apt-offline 1.4 has been released [1]. This is a minor bug fix release. In fact, one feature, offline bug reports (--bug-reports), has been dropped for now. The Debian BTS interface seems to have changed over time, and the older debianbts.py module (that used the CGI interface) does not seem to work anymore. The current debbugs.py module seems to have switched to the SOAP interface. There are a lot of changes going on personally; I just haven't had the time to spend. If anyone would like to help, please reach out to me. We need to use the new debbugs.py module. And it should be cross-platform. Also, thanks to Hans-Christoph Steiner for providing the bash completion script. [1] https://alioth.debian.org/projects/apt-offline/


Categories:

Keywords:

2 August 2014

Don Armstrong: ErgoDox keyboard assembly

I routinely use a Kinesis Advantage Pro keyboard, which is a split, ergonomic keyboard with thumb clusters that uses brown Cherry MX switches. Over the thirteen years that I've been using it, I've become a huge fan of this style of keyboard. However, I have two major annoyances with the Kinesis. First, while the firmware is good, remapping the keys is complicated, and producing more complicated keyboard layouts with layers and keycodes that are not present in the original layout is not possible. Secondly, the interconnect between the main key wells and the controller board in the middle occasionally fails, and requires disassembly and occasional re-tinning of the circuit board interconnect connector. 1 About a year ago, I became aware of the ErgoDox keyboard, which is a keyboard design that mimics the Kinesis to some degree, but with completely separated key halves (useful, because I'm substantially bigger than the average human), programmable firmware (so I can finally have the layers and missing keys) and with slightly more elegant interconnects (TRRS cables). Unfortunately, at the time I first heard about it (and other custom keyboards), making it required sourcing circuit boards, parts, and finding someone to cut a case for the keyboard. Then, a few months ago, I learned about MassDrop, a company which puts together groups of people to do buys of products at near-wholesale level prices, and their offer of all of the parts to build an ErgoDox. After waiting for a group buy of the keyboard to become available, I put in an order, and received the parts two months later. Over a few hours yesterday, I learned how to do surface mount soldering of the 78 diodes (one for each key), and finished assembling and flashing the firmware. This morning, I fixed up the few key bindings that I needed to be productive, and voilà, my laptop at home now has a brand new ergonomic keyboard.

29 June 2014

Ritesh Raj Sarraf: Fibre Channel over Ethernet

Fibre Channel over Ethernet (FCoE) is a computer network technology that encapsulates Fibre Channel frames over Ethernet networks. This allows Fibre Channel to use 10 Gigabit Ethernet networks (or higher speeds) while preserving the Fibre Channel protocol. The specification was part of the International Committee for Information Technology Standards T11 FC-BB-5 standard published in 2009 (as described on Wikipedia). I just orphaned the FCoE packages for Debian. I don't really have the time and enthusiasm to maintain FCoE any more. The packages may not be in top notch shape, but FCoE as a technology did not itself see many takers. The popcon stats are low. In case anyone is interested in taking over the maintenance, there is a pkg-fcoe group on alioth. There are 4 packages that build the stack: lldpad, libhbaapi, libhbalinux and fcoe-utils.


Categories:

Keywords:

18 June 2014

Ritesh Raj Sarraf: Laptop Mode Tools 1.65

I am very pleased to announce the release of Laptop Mode Tools, at version 1.65. This release took a toll given things have been changing for me, both personally and professionally. 1.64 was released on September 1st, 2013. So it was a full 9 month period, of which a good 2-3 months were procrastination. That said, this release has some pretty good bug fixes and I urge all distribution packagers to push it to their repositories soon. While I thank all contributors who have helped make this release, a special thank you to Stefan Huber. Stefan found/fixed many issues, did the messy code clean-up, etc. Thank you. Worthy changes are mentioned below. For full details, please refer to the git commit logs. 1.65 - Wed Jun 18 19:22:35 IST 2014
* fix grep error on missing $device/uevent
* ethernet: replace sysfs/enabled by 'ip link down'
* wireless-iwl-power: sysfs attr enbable -> enabled
* wireless-iwl-power: Add iwlwifi support
* Use of Runtime Power Management Framework is more robust now. Deprecates module
usb-autosuspend

* Fix multiple hibernate issue
* When resuming, run LMT in force initialization mode
* Add module for Intel PState driver
* GUI: Implement suspend/hibernate interface



Categories:

Keywords:

22 April 2014

Ritesh Raj Sarraf: Basis B1

Starting yesterday, I am a happy user of the Basis B1 (Carbon Edition) Smart Watch. The company recently announced being acquired by Intel. Overall I like the watch. The price is steep, but if you care for a watch like that, you may as well try Basis. In case you want to go through the details, there's a pretty comprehensive review here. Since I've been wearing it for just over 24hrs, there's not much data to showcase a trend. But the device was impressively precise in monitoring my sleep.

Pain points - For now, sync is the core of the pains. You need either a Mac or a Windows PC. I have a Windows 7 VM with USB Passthru, but that doesn't work. There's also an option to sync over mobile (iOS and Android). That again does not work for my Chinese Mobile Handset running MIUI.


Categories:

Keywords:

Martín Ferrari: DNSSEC, DANE, SSHFP, etc

While researching some security-related stuff for a post I am currently writing, I found some interesting bits here and there that I thought I should share, as they were new to me, and probably to many others.

DNSSEC The first thing is DNSSEC. I knew about it, of course, but never bothered to dig much into it. While reading about many interesting applications of DNS for key distribution, and thinking of ways to use them, it is clear that DNSSEC is a precondition for any of that to work. In case you don't know about it, it is an extension for the DNS service to make it safer, for example, to avoid the bad guys having you think that google.com points to sniffer.nsa.gov. Apart from these über-cool applications I was thinking about, avoiding DNS-based attacks becomes more and more relevant these days. And I think Debian and the rest of the Free Software world should work on making this available to all end-users as easily as possible. While adoption still looks pretty low, there are some good news. First, Google claims its public DNS supports DNSSEC. Of course, you need to trust Google servers, and the path between your machine and them. But if your resolver supports DNSSEC, you can use their servers and validate the answers. On the other side, I am not too sure about their implementation, as half of the time it would return a valid answer to a query for an invalid record (dig +dnssec sigfail.verteiltesysteme.net @8.8.8.8). Also, they have not published DNSSEC records for google.com, which seems crazy. Some packages included in Debian already take advantage of DNSSEC, if available (more on that later), but more importantly, there are a couple of DNSSEC-enabled recursive servers, including bind, unbound, and the more commonly-used dnsmasq (there is a wiki page summarising Debian's status). Sadly, the default configuration for dnsmasq does not enable DNSSEC, and most people will not use it, even if it is installed, because DHCP-provided servers are usually preferred. It seems to me that it would be wise to have a package that would install dnsmasq with DNSSEC enabled, and make it the only valid resolver for the system. If you want to check if your resolver is correctly validating DNSSEC, you can use this test web page. Another good news is that many top-level domains already support DNSSEC, and in my case, Gandi.net has support in place to set it up. So I am going to look into enabling it for my own domain.
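To make that last point concrete, enabling validation in dnsmasq only takes a few lines of configuration. A minimal sketch, assuming dnsmasq 2.69 or later; the drop-in path and the trust-anchor file location are examples (Debian's dnsmasq-base ships one, but check your system) and must match the current root key:
# /etc/dnsmasq.d/dnssec.conf (example path)
dnssec
dnssec-check-unsigned
conf-file=/usr/share/dnsmasq-base/trust-anchors.conf
With that in place, dig +dnssec sigfail.verteiltesysteme.net @127.0.0.1 should come back with SERVFAIL, while correctly signed domains keep resolving.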

SSHFP One useful and simple advantage of using DNSSEC, is that you can store information there, and then trust it to be correct. One new DNS RR (resource record) that is useful in this context is the SSHFP RR, which allows the sysadmin of a host to publish the host SSH key fingerprint in the DNS zone. The ssh client, when enabling the VerifyHostKeyDNS option, will use that information to trust unknown hosts. One downside to this, is that either if you set the option to ask, or if your resolver does not support DNSSEC, you get the same message, which does not warn you about the extra risk. To help you create your DNS records, you can just run this command:
$ ssh-keygen -r brie.tincho.org
brie.tincho.org IN SSHFP 1 1 6ac93c63379828b5b75847bc37d8ab2b48983343
brie.tincho.org IN SSHFP 2 1 cf0d11515367e3aa7eeb37056688f11b53c8ef23

DANE, S/MIME and GPG Recently, while at FOSDEM, I attended talks that mentioned DANE. This proposed IETF standard introduces a mechanism to use DNS as a secure key distribution system, which could completely override the CA oligopoly, a very attractive proposition for many people. In short, it is very similar to the SSHFP mechanism, but it is not restricted to SSH host keys: it can be used to distribute public key information for any TLS-enabled service. So, instead of (or in addition to) having a CA sign your certificate, and relying on the chain of trust by means of having a local copy of all root CA certificates, you use the chain of trust embedded in DNSSEC to make sure that the DNS RRs you publish are valid. Then, the client application can trust the fingerprint published for the relevant service to verify that it is talking to the right server. This is a very exciting development, and I hope it gets widespread adoption. It is already supported in Postfix, there seems to be some work going on in Mozilla, as well as in Prosody, which is a great start. Another exciting development of this is the generalisation of DANE for other entities, like email addresses. There are two draft RFCs being worked on right now to deploy S/MIME and OpenPGP key material using DNSSEC. This could also change completely the way we manage the Web of Trust.
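To make the DANE part more concrete, here is a hedged sketch of publishing and checking a TLSA record for an HTTPS service. Domain and certificate path are examples; the "3 1 1" usage/selector/matching-type combination pins the server's own public key with a SHA-256 digest:
# compute the SHA-256 digest of the server certificate's public key
openssl x509 -in /etc/ssl/certs/www.example.org.pem -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256
# publish it in the (DNSSEC-signed) zone:
#   _443._tcp.www.example.org. IN TLSA 3 1 1 <digest from above>
# and query what a DANE-aware client would see:
dig +dnssec TLSA _443._tcp.www.example.org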

13 November 2013

Andrea Veri: Configuring DNSSEC on your personal domain

Today I'll be working out how to properly configure DNSSEC on a BIND9 installation. I'll also make sure to give you all the needed instructions to properly verify whether a specific domain is being correctly covered by DNSSEC itself. In addition to that, a few more details will be provided about adding the relevant SSHFP entries to your DNS zone files, to be able to automatically verify the authenticity of your domain when connecting to it with SSH, avoiding any possible MITM attack. First of all, let's create the Zone Signing Key (ZSK), which is the key that will be responsible for signing any other record on the zone file which is not a DNSKEY record itself:
dnssec-keygen -a RSASHA1 -b 1024 -n ZONE gnome.org


Note: the dnssec-keygen binary should be part of the bind97 (RHEL 5) or bind (RHEL 6) package, according to yum whatprovides: RHEL 5:
32:bind97-9.7.0-17.P2.el5_9.2.x86_64 : The Berkeley Internet Name Domain (BIND) DNS (Domain Name System) server
Repo : rhel-x86_64-server-5
Matched from:
Filename : /usr/sbin/dnssec-keygen


RHEL 6:
32:bind-9.8.2-0.17.rc1.el6.3.x86_64 : The Berkeley Internet Name Domain (BIND) DNS (Domain Name System) server
Repo : rhel-x86_64-server-6
Matched from:
Filename : /usr/sbin/dnssec-keygen


Then, create the Key Signing Key (KSK), which will be used to sign all the DNSKEY records:
dnssec-keygen -a RSASHA1 -b 2048 -n ZONE -f KSK gnome.org


Creating the above keys can take several minutes; when done, copy the public keys to the zone file this way:
cat Kgnome.org*.key >> gnome.org


When done, you can clean out the useless bits from the zone file and just leave the DNSKEY records (which are not commented out, as you will notice). An additional and cleaner way of accomplishing the above is to use the $INCLUDE rule on the zone file itself as follows:
$INCLUDE /srv/dnssec-keys/Kgnome.org+005+12345.key
$INCLUDE /srv/dnssec-keys/Kgnome.org+005+67890.key


Choosing which method to use is really up to you. Once that is done you can go ahead and sign the zone file. As for myself, I'm making use of the do-domain script taken from the Fedora Infrastructure Team's repositories. If you are going to use it yourself, make sure to adjust all the relevant variables to match your setup, especially keyspath, region_zones, template_zone, signed_zones and AREA. The do-domain script also checks your zone file through named-checkzone before signing it.
/me while editing the do-domains script with the preview of gnome-code-assistance!

If instead you don't want to use the script above, you can sign the zone file manually in the following way:
dnssec-signzone -K /path/to/your/dnssec/keys -e +3024000 -N INCREMENT gnome.org


By default, the above command will append .signed to the file name; you can modify that behaviour by passing the -f flag to the dnssec-signzone call. The -N INCREMENT will increment the serial number automatically making use of the RFC 1982 arithmetic, while the -e flag will extend the zone signature end date from the default 30 days to 35. (This way we can safely run a monthly cron job that will sign the zone file automatically.) You can make use of the following script to achieve the above:
#!/bin/sh
SIGNZONE="/usr/sbin/dnssec-signzone"
DNSSEC_KEYS="/srv/dnssec-keys"
NAMEDCHROOT="/var/named/chroot"
ZONEFILES="gnome.org"
cd $NAMEDCHROOT
for ZONE in $ZONEFILES; do
$SIGNZONE -K $DNSSEC_KEYS -e +3024000 -f $ZONE.signed -N INCREMENT $ZONE
done
/sbin/service named reload


Once the zone file has been signed, just make sure to include it in named.conf and restart named:
zone "gnome.org"  
file "gnome.org.signed";
 ;


When you're done with that, we should move ahead and add a DS record for our domain at our domain registrar. My example is taken from the well-known gandi.net registrar.

Gandi's DNSSEC interface

Select KSK (257) and RSA/SHA-1 in the dropdown list and paste your public key into the box. You will find the public key you need in one of the Kgnome.org*.key files; you should look for the DNSKEY 257 entry, as dig DNSKEY gnome.org shows:
;; ANSWER SECTION:
gnome.org. 888 IN DNSKEY 257 3 5 AwEAAbRD7AymDFuKc2iXta7HXZMleMkUMwjOZTsn4f75ZUp0of8TJdlU DtFtqifEBnFcGJU5r+ZVvkBKQ0qDTTjayL54Nz56XGGoIBj6XxbG8Es+ VbZCg0RsetDk5EsxLst0egrvOXga27jbsJ+7Me3D5Xp1bkBnQMrXEXQ9 C43QfO2KUWJVljo1Bii3fTfnHSLRUsbRn8Puz+orK71qxs3G9mgGR6rm n91brkpfmHKr3S9Rbxq8iDRWDPiCaWkI7qfASdFk4TLV0gSVlA3OxyW9 TCkPZStZ5r/WRW2jhUY/kjHERQd4qX5dHAuYrjJSV99P6FfCFXoJ3ty5 s3fl1RZaTo8=
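Some registrars ask for a DS record rather than the raw DNSKEY. If yours does, it can be derived from the KSK's key file with dnssec-dsfromkey, which ships in the same bind packages mentioned above (the key file name below is a placeholder; use your own KSK file):
dnssec-dsfromkey /srv/dnssec-keys/Kgnome.org+005+67890.key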


Once that is done, you should have a fully DNSSEC-covered domain; you can verify that this way:
dig . DNSKEY | grep -Ev '^($|;)' > root.keys
dig +sigchase +trusted-key=./root.keys gnome.org. A | cat -n


The result:
105 ;; WE HAVE MATERIAL, WE NOW DO VALIDATION
106 ;; VERIFYING DS RRset for org. with DNSKEY:59085: success
107 ;; OK We found DNSKEY (or more) to validate the RRset
108 ;; Ok, find a Trusted Key in the DNSKEY RRset: 19036
109 ;; VERIFYING DNSKEY RRset for . with DNSKEY:19036: success
110
111 ;; Ok this DNSKEY is a Trusted Key, DNSSEC validation is ok: SUCCESS

Bonus content: Adding SSHFP entries for your domain and verifying them. You can retrieve the SSHFP entries for a specific host with the following command:
ssh-keygen -r $(hostname --fqdn) -f /etc/ssh/ssh_host_rsa_key.pub


When retrieved, just add the SSHFP entry to the zone file for your domain and verify it:
ssh -oVerifyHostKeyDNS=yes -v subdomain.gnome.org


Or directly add the above parameter into your /etc/ssh/ssh_config file this way:
VerifyHostKeyDNS=yes


And run ssh -v subdomain.gnome.org; the result you should receive:
debug1: Server host key: RSA 00:39:fd:1a:a4:2c:6b:28:b8:2e:95:31:c2:90:72:03
debug1: found 1 secure fingerprints in DNS
debug1: matching host key fingerprint found in DNS
debug1: ssh_rsa_verify: signature correct

That's it! Enjoy!

22 October 2013

Wouter Verhelst: bind module for puppet

Yesterday, I released a module to manage bind zones using puppet on the puppet forge. It uses a custom type to manage DNS RRs (using dig and nsupdate), and there's also a class for ensuring that a zone is installed, both on masters and on slaves, in a manner that you only need to list the zones, with their master and slave name servers, in a hiera data file. The main reason why I did it this way is because all the other modules that I looked at would either expect you to write zone files (and then just copy them over), or generate the zone files and use "rndc reload" or similar. I don't like that way of doing things. BIND does support dynamic updates of zone entries, and it's perfectly possible to query a name server to verify whether a given record actually exists. Additionally, this gave me a good excuse to learn how to write a custom type for puppet. Today, I was astonished to learn that my module has already had six downloads. In 20 hours, that's fairly impressive, in my opinion.
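For readers unfamiliar with that mechanism, here is a minimal sketch of the dynamic-update approach the module builds on, reduced to plain dig and nsupdate calls. Server, zone, record and key file below are made-up examples, and the TSIG key must be one the zone actually allows updates with:
# check whether the record already exists on the master
dig +short A www.example.org @ns1.example.org
# if not, add it through a dynamic update signed with a TSIG key
nsupdate -k /etc/bind/keys/ddns.key <<'EOF'
server ns1.example.org
zone example.org
update add www.example.org. 3600 IN A 192.0.2.10
send
EOF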

9 October 2013

John Goerzen: Two Kittens

Almost every time he got off the bus for the past month and a half, Jacob started his afternoon in the same way. Before toys, before his trains and his toy bus, before anything indoors, he went for our cats. Here he is, cradling his favorite, Tigger: Laura and I both grew up around cats. We had been talking about kittens, and shortly after we got engaged, one of my relatives offered us some free kittens. We went to his place one evening and selected two of them: one calico and one tiger-colored. Since what is now my place will soon be our place, they came to live with me. Our cats were one of the first things we did to prepare for our lives together. Oliver wanted to name them some rather impractical sentence-long names ("The Cat Who Always Likes To Run"), so Laura and I suggested some names from one of their favorite books: Tigger and Roo. They both liked the names, but Oliver thought they should be called "Tigger the Digger" and "Roo the Runner". Never mind that they were just 6 weeks old at the time, and not really old enough to either dig or run. Here's Oliver with Roo, the day after the kittens arrived here. I have always had outside cats, both because I'm allergic to cats so I need them to be outside, and because they sometimes literally quiver with joy of being outdoors. Tigger and Roo often chased insects, wrestled with each other, ran up (and slowly came back down) trees, and just loved the outside. Sometimes, I have taken my laptop and wireless headset and worked from the back porch. The kittens climb up my jeans, inspect the laptop, and once Roo even fell asleep on my lap at one of those times. Jacob has been particularly attached to Tigger, calling him "my very best friend". When Jacob picks him up after school, Tigger often purrs while cradled in Jacob's arms, and Jacob comments that "Tigger loves me. Oh dad, he knows I am his friend!" The kittens have been growing, and becoming more and more comfortable with their home in the country. Whenever I go outside, it isn't long before there are two energetic kittens near my feet, running back and forth, sometimes being very difficult to avoid stepping on. I call and I see little heads looking at me, from up in a tree, or peeking out from the grain elevator door, or from under the grill. They stare for just a second, and then start running, sometimes comically crashing into something in their haste. Yesterday when I went to give them food, I called and no cats came. I was concerned, and walked around the yard, but at some point either they come or they don't. Yesterday afternoon, just after the bus dropped off Jacob, I discovered Tigger on the ground, motionless. Once Jacob was in the house, I went to investigate, and found Tigger was dead. As I was moving his body, I saw Roo was dead, too. Both apparently from some sort of sudden physical injury, a bit mysterious, because neither of them were at a place where they had ever gone before. While all this was happening, I had to also think about how I was going to tell the boys about this. I tried to minimize what he could see; Jacob had caught an unavoidable glimpse of Tigger as we were walking back from the bus, but didn't know exactly what had happened. He waited in the house, and when I came back, asked me if Tigger was dead. I said he was. Jacob started crying, saying, "Oh Dad, I am so sad," and reached up for a hug. I picked him up and held him, then sat down on the couch and let him curl up on my lap. I could quite honestly let him know he wasn't alone, telling him I am sad, too.
Oliver arrived not long after, and he too was sad, though not as much as Jacob. Both boys pretty soon wanted to see them. I decided this was important for them for closure, and to understand, so while they waited in the house, I went back out to arrange the kittens to hide their faces, the part that looks most unnatural after they die. The boys and I walked out to where I put them, then I carried both of them the last few feet. We stood a little ways back, close enough to see who was there, far enough to not get too much detail, and they were both sniffling. I tried to put voice to the occasion, saying, "Goodbye, Tigger and Roo. We love you." Oliver asked if they could hear us. I said no, but I told them what I felt like anyway. Jacob, through tears, said, "Dad, maybe they are in heaven now." We went back inside. Jacob said, "Oh dad, I am so sad. This is the saddest day of my life. My heart is breaking." Hearing a 7-year-old say that isn't exactly easy for a dad. Pretty soon he was thinking of sort of comfort activities to do, saying "I think I would feel better if we did..." So they decided to watch a favorite TV program. Jacob asked if Laura knew yet, and when I said no, he got his take-charge voice and said, "Dad, you will start the TV show for us. While we are watching, you will send Laura an email to tell her about Tigger and Roo. OK?" What could I say, it wasn't a bad idea. Pretty soon both boys were talking and laughing. It was Big Truck Night last night, at a town about half an hour away. It's an annual event we were already planning to attend, where all sorts of Big Trucks (firetrucks, school bus, combine, bucket truck, cement truck, etc.) show up and are open for kids to climb in and explore. It's always a highlight for them. They played and sang happily as we drove, excitedly opened and closed the big door on the school bus and yelled "All Aboard!" from the top of the combine. We ate dinner, and drove back home. When we got home, Jacob mentioned the cats again, in a sort of matter-of-fact way, and also wanted to make sure he knew Laura had got the message. A person never wakes up expecting to have to dump a bowl of un-eaten cat food, or to give an impromptu cat funeral for little boys. As it was happening, I wished they hadn't been around right then. But in retrospect, I am glad they were. They had been part of life for those kittens, and it is only right that they could be included in being part of death. They got visual closure this way, and will never wonder if the cats are coming back someday. They had a chance to say goodbye. Here is how I remember the kittens.

1 September 2013

Ritesh Raj Sarraf: Laptop Mode Tools 1.64

I just released Laptop Mode Tools version 1.64, and am pleased to introduce the new graphical utility to toggle individual power saving modules in the package. The GUI is written using the PyQt toolkit, and the options in the GUI are generated at runtime based on the list of available power saving modules (a minimal sketch of that runtime-generation idea appears after the changelog below). Apart from the GUI configuration tool, this release also includes some bug fixes:
  • Don't touch USB controller power settings. The individual devices, when plugged in while on battery, inherit the power settings from the USB controller
  • start-stop-programs: add support for systemd. Thanks to Alexander Mezin
  • Replace hardcoded path to udevadm with "which udevadm". Thanks to Alexander Mezin
  • Honor .conf files only. Thanks to Sven Köhler
  • Make '/usr/lib' path configurable. This is especially useful for systems that use /usr/lib64, or /lib64 directly. Thanks to Nicolas Braud-Santoni
  • Don't call killall with the -g argument. Thanks to Murray Campbell
  • Fix RPM Spec file build errors
The Debian package will follow soon. I don't intend to introduce a new package for the GUI tool because the source is hardly 200 lines, so the dependencies (the PyQt packages) will go as Recommends or Suggests.
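Since the options are generated at runtime rather than hardcoded, a small hypothetical PyQt5 sketch of that idea might look like the following. The module directory path, widget layout, and toggle handling here are assumptions for illustration only; this is not the actual GUI tool's source.

# Minimal sketch (not the actual lmt-config-gui source): build checkboxes at
# runtime from whatever power-saving modules are installed on the system.
import os
import sys

from PyQt5.QtWidgets import QApplication, QCheckBox, QVBoxLayout, QWidget

# Assumed location of the per-module scripts; adjust for your installation.
MODULE_DIR = "/usr/lib/laptop-mode-tools/modules"

def available_modules(path=MODULE_DIR):
    """Return the module names found on disk, or an empty list if none."""
    try:
        return sorted(os.listdir(path))
    except OSError:
        return []

class ModuleToggles(QWidget):
    def __init__(self, modules):
        super().__init__()
        self.setWindowTitle("Laptop Mode Tools modules (sketch)")
        layout = QVBoxLayout(self)
        self.boxes = {}
        # One checkbox per module, created at runtime from the module list;
        # nothing is hardcoded in the GUI itself.
        for name in modules:
            box = QCheckBox(name)
            box.toggled.connect(lambda state, n=name: print(n, "->", state))
            layout.addWidget(box)
            self.boxes[name] = box

if __name__ == "__main__":
    app = QApplication(sys.argv)
    win = ModuleToggles(available_modules())
    win.show()
    sys.exit(app.exec_())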


23 July 2013

Ritesh Raj Sarraf: Power consumption on Linux 3.10

Power consumption on the Linux 3.10 kernel is pretty bad. On kernel 3.10, with the following config, the PowerTOP results are:
#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
CONFIG_NO_HZ_IDLE=y
# CONFIG_NO_HZ_FULL is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
PowerTOP v2.0 Overview Idle stats Frequency stats Device stats Tunables
The battery reports a discharge rate of 28.0 W
The estimated remaining time is 23 minutes
Summary: 1785.5 wakeups/second, 0.0 GPU ops/second, 0.0 VFS ops/sec and 22.1% CPU use
Power est. Usage Events/s Category Description
16.3 W 2915 rpm Device Laptop fan
5.11 W 100.0% Device USB device: WALTON Primo-X1 Primo-X1
1.70 W 33.3% Device Display backlight
849 mW 33.3% Device Display backlight
425 mW 86.0 ms/s 330.7 Process /usr/bin/konsole
316 mW 63.9 ms/s 66.1 Process /usr/bin/plasma-desktop
142 mW 28.6 ms/s 396.8 Process /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7
64.1 mW 13.0 ms/s 198.4 Process kwin -session 101261418fe3000136103713100000053880000_13746081
53.6 mW 10.8 ms/s 0.00 Process powertop
35.9 mW 7.3 ms/s 66.1 Process /usr/lib/chromium/chromium --type=plugin --plugin-path=/usr/li
24.3 mW 4.9 ms/s 396.8 Interrupt PS/2 Touchpad / Keyboard / Mouse
6.92 mW 1.4 ms/s 0.00 Interrupt [48] i915
5.94 mW 1.2 ms/s 66.1 Interrupt [9] RCU(softirq)
3.98 mW 0.8 ms/s 0.00 kWork flush_to_ldisc
3.78 mW 0.8 ms/s 66.1 Process [ksoftirqd/2]
3.33 mW 673.3 us/s 66.1 Process [rcu_sched]
1.80 mW 363.1 us/s 66.1 Interrupt [1] timer(softirq)
1.79 mW 363.0 us/s 0.00 Process [ksoftirqd/4]
Whereas on the 3.9 kernel:
The battery reports a discharge rate of 13.2 W
The estimated remaining time is 43 minutes
Summary: 611.5 wakeups/second, 0.0 GPU ops/second, 0.0 VFS ops/sec and 14.2% CPU use
Power est. Usage Events/s Category Description
14.0 W 2722 rpm Device Laptop fan
1.72 W 33.3% Device Display backlight
862 mW 33.3% Device Display backlight
255 mW 65.7 ms/s 58.0 Process /usr/bin/plasma-desktop
91.9 mW 23.7 ms/s 27.5 Process /usr/lib/chromium/chromium --type=renderer --lang=en-US --forc
60.1 mW 15.5 ms/s 96.1 Process /usr/lib/chromium/chromium --type=plugin --plugin-path=/usr/li
25.0 mW 6.4 ms/s 25.1 Process kwin -session 101261418fe3000136103713100000053880000_13746094
21.5 mW 5.6 ms/s 34.2 Process /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7
13.1 mW 3.4 ms/s 5.6 Process /usr/bin/konsole
9.82 mW 2.5 ms/s 53.7 Process [irq/48-iwlwifi]
9.11 mW 2.4 ms/s 2.2 Process /usr/bin/knemo
8.62 mW 2.2 ms/s 12.3 Process /usr/lib/chromium/chromium --password-store=detect
8.32 mW 2.1 ms/s 45.5 Interrupt [48] iwlwifi
6.96 mW 1.8 ms/s 35.3 Interrupt [7] sched(softirq)
5.13 mW 1.3 ms/s 57.1 Interrupt [47] i915
4.24 mW 1.1 ms/s 0.4 Process powertop
3.38 mW 0.9 ms/s 1.5 Timer tcp_keepalive_timer
2.85 mW 0.7 ms/s 11.9 Process /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plu
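For a quick sanity check of the discharge figure PowerTOP reports, the battery can also be read directly from the kernel's power-supply sysfs interface. The sketch below assumes a battery named BAT0; the name and the exact attributes exposed (power_now versus current_now/voltage_now) vary between machines and drivers.

# Rough discharge-rate reading from /sys/class/power_supply.
# Battery naming (BAT0/BAT1) and available attributes differ per driver.
from pathlib import Path

BAT = Path("/sys/class/power_supply/BAT0")  # assumed battery name

def read_uint(name):
    try:
        return int((BAT / name).read_text().strip())
    except (OSError, ValueError):
        return None

power_uw = read_uint("power_now")  # microwatts, if the driver exposes it
if power_uw is None:
    # Fall back to current (uA) * voltage (uV) when power_now is absent.
    current_ua = read_uint("current_now")
    voltage_uv = read_uint("voltage_now")
    if current_ua is not None and voltage_uv is not None:
        power_uw = current_ua * voltage_uv // 1_000_000

if power_uw is not None:
    print(f"Discharge rate: {power_uw / 1_000_000:.1f} W")
else:
    print("Could not read a power figure for", BAT)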

24 June 2013

Vincent Sanders: A picture is worth a thousand words

When Sir Tim made a band the first image on the web back in 1992, I do not imagine for a moment that he understood the full scope of what was to follow. Nor do I imagine Marc Andreessen understood the technical debt he was about to introduce on that fateful day in 1993 when he proposed the img tag, allowing inline images.

Many will argue, of course, that the old adage of my title may not apply to the web, where, if it were true, every page view would be like reading a novel, and many of those novels would involve cats or one-pixel-tall colour gradients.

Leaving aside the philosophical arguments for a moment, it takes a web browser author a great deal of effort to quickly and efficiently put those cat pictures in front of your eyeballs.
Navigating

[Image: Molly the cat]

Images are navigated by a user in a selection of ways, including:
  • Direct navigation to an image, like this cat picture. This is the original way images were viewed, i.e. not inline but as a separate document not involving any html. Often this is now handled by constructing a generated web page within the browser with the image inline, avoiding the need for explicit image content handling.
  • An inline img tag (ironically, it really does take thousands of words to describe), which puts the image within the web page without requiring the user to navigate away from the document being displayed. These tags are processed as the Document Object Model (DOM) is constructed from the html source. When an img tag is encountered a fetch is scheduled for the object, and when it completes the DOM completion events happen and the rendered page is updated (a simplified sketch of this scheduling follows the list).
  • Imported by a CSS stylesheet.
  • An inline element created by a script.
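The following is a simplified, hypothetical sketch of that img-tag flow in Python (NetSurf itself is written in C, and its real fetch machinery is far more involved): walk the html, schedule a fetch for every img element found, and run a completion callback as each fetch finishes.

# Toy illustration of "parse html, schedule img fetches, update on completion".
# Real browsers interleave this with layout and use their own fetch machinery.
from concurrent.futures import ThreadPoolExecutor
from html.parser import HTMLParser
from urllib.request import urlopen

class ImgCollector(HTMLParser):
    """Collect the src attribute of every img tag while building 'the DOM'."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def fetch(url):
    with urlopen(url, timeout=10) as resp:
        return url, resp.read()

def on_complete(url, data):
    # Stand-in for the DOM completion event: trigger reflow/redraw here.
    print(f"fetched {url}: {len(data)} bytes, page can now be updated")

if __name__ == "__main__":
    html = '<p>cats</p><img src="https://example.com/molly.png">'
    parser = ImgCollector()
    parser.feed(html)

    # Fetches run concurrently; each completion updates the rendered page.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for future in [pool.submit(fetch, u) for u in parser.sources]:
            url, data = future.result()
            on_complete(url, data)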
Whatever the method, the resulting object is subject to the same caching operations as any content object within a browser. These caching and storage operations are not specific to images; however, images are one of the most resource-intensive objects a browser must regularly deal with (though javascript and stylesheet sources are starting to rival them at times) because they are relatively large and numerous.
Pre render processing
When the image object fetch is completed, the browser must process the image for rendering, which is where this gets even more complicated. I shall describe how this works in NetSurf, as I know that browser's internals best, but the operation is pretty similar in many browsers.

The content fetch will have performed basic content sniffing to determine the received object's mime type. This is necessary because a great number of servers are misconfigured, and a disturbingly large number of images served as png are really jpegs, etc. Indeed, sometimes you even get files served which are not images at all!

Upon receipt of enough data to decode the image header for the detected mime type, the image's metadata is extracted. This metadata usually includes things like size and colour depth.
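As a rough illustration of both steps, identifying the real format from magic bytes and pulling the size out of the header, here is a small sketch for PNG (plus a couple of other signatures). It is a simplification for this post, not NetSurf code, and the example filename is made up.

# Identify an image by its magic bytes (ignoring the server's Content-Type)
# and, for PNG, read width/height straight from the IHDR chunk.
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def sniff(data: bytes) -> str:
    if data.startswith(PNG_SIGNATURE):
        return "image/png"
    if data.startswith(b"\xff\xd8\xff"):
        return "image/jpeg"
    if data[:6] in (b"GIF87a", b"GIF89a"):
        return "image/gif"
    return "application/octet-stream"

def png_dimensions(data: bytes):
    # PNG layout: 8-byte signature, 4-byte chunk length, b"IHDR", then width
    # and height as big-endian 32-bit integers.
    if not data.startswith(PNG_SIGNATURE) or data[12:16] != b"IHDR":
        return None
    width, height = struct.unpack(">II", data[16:24])
    return width, height

if __name__ == "__main__":
    with open("molly.png", "rb") as f:   # any local PNG will do
        header = f.read(32)
    print(sniff(header), png_dimensions(header))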

If the img tag in the original source document omitted the width or height of the image, the entire document render may have to be reflowed at this point to allow the correct spacing. The reflow process is often unsightly and should be avoided. Additionally, at this stage, if the image is too large to handle or is in an unhandled format, the object will be replaced with the "broken" image icon.

Often that will be everything that is done with the image. When I added "lazy" image conversion to NetSurf, we performed extensive profiling and discovered that well over 40% of images on visited pages are simply never viewed by the user, but that small images (100 pixels on a side) were almost always displayed.

This odd distribution comes down to how images are used in modern web pages: they broadly fall into the two categories of "decoration" and "content". For example, all the background gradients and sidebar images are generally small images used as decoration, whereas the cat picture above is part of the "content". A user may not scroll down a page to see the content, but almost always gets to "view" the decoration.
Rendering
[Image: created by Andrea R, used under the CC Attribution-NonCommercial-ShareAlike 2.0 licence]
The exact browser heuristics for when the render operation is performed differ, but they all have a similar job to perform. When I say render here, it may well be to an "off screen" view if the content is actually on another tab, etc. Regardless, the image data must be converted from the source data (a PNG, JPEG etc.) into a format suitable for the browser's display plotting routines.

The browser will create a render bitmap in whatever format the plotting routines require (for example, the GTK plotters use a Cairo image surface), use an image library to unpack the source image data (PNG) into the render bitmap (possibly performing transforms such as scaling and rotation), and then use that bitmap to update the pixels on screen.

The most common transform at render time is scaling. This can be problematic, as not all image libraries have output scaling capabilities, which results in having to decode the entire source image and then scale from that bitmap.

This is especially egregious if the source image is large (perhaps a multi-megabyte JPEG) but the width and height are set to produce a thumbnail. The effect is amplified if the user has set the image cache size limit to a small value like 8 megabytes (yes, some users do this; apparently their machines have 32MB of RAM and they browse the web).
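Some image libraries can decode at a reduced size and avoid the full decode-then-scale round trip. As a hedged illustration, using Pillow rather than any library NetSurf actually links against, JPEG "draft" decoding asks the decoder for an image close to the target size before the final scaling step; the filenames here are made up.

# Thumbnailing a large JPEG without materialising the full-size bitmap first.
# Pillow's draft() only helps for JPEG; other formats fall back to full decode.
from PIL import Image

TARGET = (128, 128)

with Image.open("huge_photo.jpg") as im:      # hypothetical multi-megabyte JPEG
    im.draft("RGB", TARGET)                   # ask the decoder for a reduced-size decode
    im.thumbnail(TARGET)                      # finish scaling to the target box
    im.save("thumb.jpg")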

In addition, the image may well require tiling (for background gradients) and quite complex transforms (including rotation) thanks to CSS 3. Add in that javascript can alter the CSS style, and hence the transform, and you can imagine quite how complex the renderers can become.
Caching

The keen reader might spot that repeated renderings of the source image (e.g. because the window is scrolled or clipped) result in this computationally expensive operation also being repeated. We solve this by interposing a render image cache between the source data and the render bitmaps.

By keeping the data in the preferred format, image rendering performance can be greatly increased. It should be noted that this cache is completely distinct from the source object cache and relates only to the rendered images.
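A minimal sketch of that idea, a small cache of ready-to-plot bitmaps keyed by source object and scale with least-recently-used eviction, might look like the following. This is a hypothetical structure for illustration, not NetSurf's actual implementation.

# Tiny LRU cache of decoded/scaled bitmaps, keyed by (source id, scale).
# decode_and_scale() stands in for the expensive convert-to-plot-format step.
from collections import OrderedDict

class RenderBitmapCache:
    def __init__(self, max_entries=32):
        self.max_entries = max_entries
        self._cache = OrderedDict()

    def get(self, source_id, scale, decode_and_scale):
        key = (source_id, scale)
        if key in self._cache:
            self._cache.move_to_end(key)       # mark as recently used
            return self._cache[key]
        bitmap = decode_and_scale(source_id, scale)   # the expensive path
        self._cache[key] = bitmap
        if len(self._cache) > self.max_entries:
            self._cache.popitem(last=False)    # evict the least recently used entry
        return bitmap

# Usage: cache.get("molly.png", 0.5, my_decoder) returns the cached bitmap on
# the second and later calls for the same image at the same scale.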

Originally NetSurf performed the render conversion for every image as it was received, without exception, rather than at render time, resulting in a great deal of unnecessary processing and memory usage. This was done for simplicity and to optimise for "decoration" images.

The rules for determining what gets cached, and for how long, are somewhat involved, and the majority of the code within NetSurf's current implementation is metrics and statistics generation used to produce better decisions.

There comes a time at which this cache is no longer sufficient and rendering performance becomes unacceptable. The NetSurf renderer errs on the side of reducing resource usage (clearing the cache) at the expense of increased render times. Other browsers make different compromises based on the expected resources of the user base.
Finally

Hopefully that gives a reasonable overview of the operations a browser performs just to put that cat picture in front of your eyeballs.

And maybe next time your browser is guzzling RAM to plot thousands of images, you might have a bit more appreciation of exactly what it is up to.

1 March 2013

Ritesh Raj Sarraf: I am so indebted to the community

As someone who learned computers on his own, I have always acknowledged the value that the Free Software movement has brought. The accessibility of these topics, which were once supposed to live only in textbooks and schools, is now available to anyone who has the resources and the passion to pursue them. But this past week, two things left me pretty impressed with the maturity and quality of the work that we do.

rrs@zan:/media$ sudo lvextend -r -v -L 100G /dev/BackupDisk/DATA

Finding volume group BackupDisk
Executing: fsadm --verbose check /dev/BackupDisk/DATA
fsadm: "ext4" filesystem found on "/dev/mapper/BackupDisk-DATA"
fsadm: Skipping filesystem check for device "/dev/mapper/BackupDisk-DATA" as the filesystem is mounted on /media/DATA
fsadm failed: 3
Archiving volume group "BackupDisk" metadata (seqno 7).
Extending logical volume DATA to 100.00 GiB
Found volume group "BackupDisk"
Found volume group "BackupDisk"
Loading BackupDisk-DATA table (254:2)
Suspending BackupDisk-DATA (254:2) with device flush
Found volume group "BackupDisk"
Resuming BackupDisk-DATA (254:2)
Creating volume group backup "/etc/lvm/backup/BackupDisk" (seqno 8).
Logical volume DATA successfully resized
Executing: fsadm --verbose resize /dev/BackupDisk/DATA 104857600K
fsadm: "ext4" filesystem found on "/dev/mapper/BackupDisk-DATA"
fsadm: Device "/dev/mapper/BackupDisk-DATA" size is 107374182400 bytes
fsadm: Parsing tune2fs -l "/dev/mapper/BackupDisk-DATA"
fsadm: Resizing filesystem on device "/dev/mapper/BackupDisk-DATA" to 107374182400 bytes (13107200 -> 26214400 blocks of 4096 bytes)
fsadm: Executing resize2fs /dev/mapper/BackupDisk-DATA 26214400
resize2fs 1.42.5 (29-Jul-2012)
Filesystem at /dev/mapper/BackupDisk-DATA is mounted on /media/DATA; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 7
The filesystem on /dev/mapper/BackupDisk-DATA is now 26214400 blocks long.
I didn't have much hope that this extend operation would succeed, but it did. When I initiated it, I had the backups being synced in parallel in the background (which is actually what made me resize my volume in the first place). :-)
The other item which made me happy yesterday was Audacity. Once upon a time, when I needed to split a music file to cut a ringtone out of it, I'd go looking for software that could do it, and then hope that one of those software vendors had a fully working version and not a demo limited to clipping just 5 seconds. Cowon was one media player I can recollect having used to split audio files.
But in this case, I had a different requirement. I needed to increase the dB of my ringtone file so that it really sounded like a ringtone (example: the Bourne Ultimatum OST). Audacity not only did the job, it did it for me in just 3-5 minutes, and all with just button clicks. For a n00b with no experience in that domain, I was really impressed.
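A tiny, hedged sketch of the same gain adjustment done programmatically, using the pydub library rather than Audacity (the filenames are made up), might look like this:

# Hypothetical illustration: add 6 dB of gain to a clip with pydub.
# pydub needs ffmpeg (or libav) installed to read and write mp3 files.
from pydub import AudioSegment

clip = AudioSegment.from_file("ringtone.mp3")   # made-up input filename
louder = clip + 6                               # "+ n" applies n dB of gain
louder.export("ringtone-louder.mp3", format="mp3")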
