Search Results: "cyb"

14 July 2020

Markus Koschany: My Free Software Activities in June 2020

Welcome to gambaru.de. Here is my monthly report (plus the first week in July) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. The report covers Debian Games (short news), Debian Java, Misc and Debian LTS.
This was my 52nd month as a paid contributor and I have been paid to work 60 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: Thanks for reading and see you next time.

11 June 2020

Markus Koschany: My Free Software Activities in May 2020

Welcome to gambaru.de. Here is my monthly report (plus the first week in June) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. The report covers Debian Games, Debian Java, Misc and Debian LTS.
This was my 51st month as a paid contributor and I have been paid to work 25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project, but all Debian users benefit from it without cost. The current ELTS release is Debian 7 "Wheezy". This was my 24th month and I have been paid to work 9.25 hours on ELTS. Thanks for reading and see you next time.

22 April 2020

Jonathan Dowland: SUPERHOT

Continuing a series of blog posts about casual Nintendo Switch games, next in the series is SUPERHOT. Normally 19.99, I picked it up for 13.99 in a sale. That's a little bit more than I would usually pay for a casual game. SUPERHOT first came on my radar because someone I know from a baby group worked on their VR port in some capacity.
Slow-motion buckshot
A first-person shooter, SUPERHOT's USP is that time only progresses when you move. Well, nearly: time is slowed to a complete crawl when you are not moving. The game's visual style is very distinctive: almost everything is a washed-out grey or white colour with a porcelain-like texture, except weapons and objects you can interact with, which are a matt black, and enemies, which are a bright red. It reminds me a lot of the 1992 Amiga game Robocop 3.
Robocop 3
The play-style is very reminiscent of the "Bullet Time" sequences in The Matrix: seemingly impossibly overwhelming odds, deftly manoeuvred through thanks to superhuman reaction times. The game has a relatively short campaign of little vignettes, linked together by a cyberpunk narrative. The game is sometimes criticised for the short campaign, but for me that's ideal. And the vignettes being short and quite standalone suits my play requirements very well.
Amiga easter-egg
The narrative interspersed between the play scenarios is a little bit over-long, and you can spend an unreasonable amount of time bashing buttons to get through it. Despite that it's a moderately interesting story. Once you've beaten the campaign, you can go back and play any of the scenarios again, or try the newly unlocked endless mode. I haven't tried that yet. The original prototype for the game is a free-to-play in-browser demo, available here. On Windows PC, there's a sequel-of-sorts in the works called MIND CONTROL DELETE with a lot of new features to add replay value.

11 March 2020

Markus Koschany: My Free Software Activities in February 2020

Welcome to gambaru.de. Here is my monthly report (plus the first week in March) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. The report covers Debian Games, Debian Java, Misc and Debian LTS. This was my 48th month as a paid contributor and I have been paid to work 10 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project, but all Debian users benefit from it without cost. The current ELTS release is Debian 7 "Wheezy". This was my 21st month and I have been paid to work 8 hours on ELTS. Thanks for reading and see you next time.

8 March 2020

Ulrike Uhlig: Implementing feedback into our work culture

Everywhere I worked in the past, the only feedback that was asked of employees was during a yearly evaluation meeting. These meetings always felt to me like talking to Santa Claus and his Knecht Ruprecht. I was asked: Were you a good employee last year? If yes, we might give you a raise. If no, admit all your mistakes now, even if we already know everything, ho ho ho. And don't you talk about your feelings, or your well-being, or say anything about the organization's (invisible) hierarchies, otherwise we will put you on the "naughty list", and that's it with the candy. The yearly evaluation aside, there was no other place to give feedback (except by escalating a matter to the Labour Court, if you happen to work in France, or going on strike, also mostly part of French culture). Feedback allows us to reflect on work processes, to situate ourselves, and to get closure. How surprised was I when, some years ago, I received an email from a collaborator asking me "kindly for just few paragraphs (doesn't have to be anything long) to hear from you about the process, your work, challenges you had, or anything else you want to mention there." Wow!
This simple email allowed me to reflect: How do we get to a feedback culture? How do we get from German Christmas folklore, Protestant work ethics, and the deeply rooted principles of disciplining and punishing to a feedback culture on eye level? It sounds a bit like going from the dark ages to a really cool science fiction utopia with universal peace, telepathy, and magic between all sentient beings on all inhabited planets in the cosmos, at least that's how I imagined it as a child, just like some of my heroes did: the cosmonaut girl who saves Earth, the boy who talks to space flowers that give him the capacity to fly, and the little onion who fights for justice (the Italian author was so popular on our side of the iron curtain that a Soviet astronomer named a minor planet after him; his wife meanwhile immortalized Karl Marx), and some romantic part of me hangs on to these ideas. Feedback is not always easy to hear and to give.

I-Statements

Giving and receiving feedback is hard in a culture where people learnt that when they made a mistake they wouldn't get candy, or that they have to constantly please other people because they are not worthy by themselves. This can lead to people blaming mistakes on one another. Every sentence that starts with "You are" has the potential of creating a lot of hurt and anger. Have you heard of I-Statements? They have very powerfully changed my world view, as they shift from accusation to ownership of feelings. So instead of telling someone "Your writing style is impossible! You really need to change the way you write.", with an I-Statement one could say "I have a hard time understanding that part of the text." I-Statements make cooperation possible.

Listening actively

Feedback is not about being right or wrong; it is first of all about being able to see how another person has experienced a situation. Active listening is a tool that helps with understanding. It might seem easy, but it needs quite some practice and a safe space. One part of active listening is to restate what you hear the other person say (by mirroring, or paraphrasing), to make sure you understood, and to make sure they know you understood what they were trying to say. You can practise this: in a circle of three people, have one person tell how they experienced a (possibly conflictual) situation, have one person do the active listening, and the third person observe in order to give feedback to the active listener about how they did. Then switch roles, for example clockwise, until everyone has had every role.

Encouraging continuous feedback

A working feedback culture does not take place only once a year. It needs to be a continuous process and therefore implemented in meetings, in teams, eventually on the level of a project. Making clear "Who can I talk to if I experience an issue?" is no different from telling developers and users where and how they can report a bug or request a feature. A safe space to express feedback is key.

Encouraging multiple feedback channels

Some people might feel less empowered or more vulnerable over one channel than another. Make sure to have different channels for receiving feedback, such as email, a point on each meeting agenda, a one-to-one meeting, or a poll.

Giving and receiving feedback on eye level

In a workplace that does not have a working feedback culture, feedback is easily perceived as policing.
If your feedback process consists of asking people to upload a form to a cloud server every 3 months, and you notice that some people don't do it, you could ask yourself if there is an issue with how your colleagues perceive giving feedback in your organization. Do you meet your colleagues on eye level when it comes to feedback? Do you take feedback seriously and act on it? How do you deal with unpleasant feedback? How do you react when colleagues don't meet your expectations? Can people participate in the feedback process within their paid work time? Did everybody understand what the feedback process is about?

Don't jump to conclusions

Humans are problem-solving animals. When someone comes to us with a problem, the first thing we want to do is to solve it, to help them. But sometimes this is uncalled for: it can be disempowering, it can prevent people from acquiring competences themselves, and it can even break people's boundaries. So instead of asking "What can I do for you?", try asking "What do you need right now?" People will often reply with something that you did not expect at all.

Acting on feedback

Make sure you have a process to collect feedback (possibly anonymized) and to regularly evaluate whether the organization needs to implement changes to thrive.

Conclusion

I stumbled upon Hans-Christian Dany's critique of feedback again recently, therefore I need to make it clear: I'm not interested in improving capitalist work culture by using cybernetic principles of self-regulation through feedback. Instead, I am interested in improving cooperation, on eye level, between people who work either individually or in organizations. In this framework, I see feedback processes as profoundly anti-capitalist methods to improve cooperation while working towards the common good. Implementing these ideas should be doable: there are organizations that provide feedback training, for example. This document, initially aimed at people in cooperatives, gives many insights on communication skills and feedback; the agile and UX worlds do feedback "retrospectives"; and otherwise I'll have to go and write science fiction stories for children myself.

10 October 2017

Carl Chenet: The Slack Threat

For a long time, electronic mail was the main communication tool for enterprises. Slack, which offers public or private group discussion boards and instant messaging between two people, now challenges its position, especially in the IT industry. Not only does Slack have features known and used since the launch of IRC in the late 80s, it also offers file sending and sharing, code quoting, and it indexes everything that goes through the application for later searches. Slack is also modular, with numerous plug-ins to easily add new features. Using the Software-as-a-Service (SaaS) model, the basic version of Slack is free, and users pay for options. Slack is now considered by the Github generation to be the new main enterprise communication tool. As I did in my previous article on the Github threat, this one won't promote Slack's advantages, as many other articles have already covered all these points ad nauseam, but will show the other side and warn the companies using this service about its inherent risks. So far, these risks have been ignored, sometimes deliberately, in the name of the "it works" ideology, neglecting all economic and safety considerations and all threats to privacy and individual freedom. We'll look at them below.

Github, a software forge as a SaaS, with all the advantages but also all the risks of its economic model

All your company communication since its creation

When a start-up chooses Slack, all of its internal communication will be stored by Slack. When someone uses this service, the simple fact of chatting through it means that the whole communication is archived. One may point out that within the basic Slack offer, only the last 10,000 messages can be read and searched. Bad argument: Slack stores every message and every file shared as it pleases. We'll see below why this behavior of the application is of capital importance in the Slack threat to enterprises. And the problem is the same for all other companies which choose Slack at one point or another. If they replace their traditional communication method with it, Slack will have access to crucial data, not only in volume, but also because of its value for the company itself, or for anyone interested in this company's life.

Search Your Entire Archive

One of the main arguments for using Slack is its "Search your entire archive" feature. One can search almost anything one can think of. Why? Because everything is indexed. Your team chat archive or the more or less confidential documents exchanged with the accounting department: everything is in it, in order to provide the most effective search tool.

The search bar, well known to Slack users

We can't deny it's a very attractive feature for everyone inside the company. But it is also a very attractive feature for everyone outside of the company who would want to know more about its internal life, even more if you're looking for a specific subject. If Slack is the main communication tool of your company, and if, as I've experienced in my professional life, some teams prefer to use it rather than go to the office next door or even bug you to put the information on the dedicated channel, one can easily deduce that nothing in this type of company escapes Slack. The automatic indexing and the efficiency of the search feature are excellent tools to get all the information needed, in quantity and in quality. As such, it's a great social engineering tool for everyone who has access to it, with a history as old as the use of Slack as a communication tool in the company.

Across Borders... And Beyond!

Slack is a Web service which mainly uses Amazon Web Services, most notably CloudFront, as stated by the available information on the Slack infrastructure. Even without a complete study of said infrastructure, it's easy to state that all the data regarding many innovative global companies around the world (for some of them, including all their internal communication since their creation) is located in the United States, or at least in the hands of a US company which must follow US laws: a country with a well-known history of large-scale industrial espionage, as the whistleblower Edward Snowden demonstrated in 2013, and where company data access has no restriction under the Patriot Act, as in the Microsoft case (2014), where data stored in Ireland by the Redmond software editor was handed over to US authorities.

Edward Snowden, an individual and corporate freedom fighter

As such, Slack's automatic indexing and search tool are a boon for anyone, spy agency or hacker, who gets access to it. Trusting a third party with all, or at least most, of your internal corporate communication is a real risk for your company if said third party doesn't follow the same regulations as yours or if it has different interests, from a data security point of view or more globally regarding its competitiveness. A badly timed data leak can be catastrophic. What's the point of secretly preparing a new product launch or an aggressive takeover if all your recent Slack conversations have leaked, including your secret plans?

What if Slack is hacked?

First, let's remember that even if a cyber attack may appear as a rare or hypothetical scenario to a badly informed and hurried manager, it is far from being as rare as he or she believes (or wants to believe). Infrastructure hacking is quite common, as a regular visit to Hacker News will show you plenty of evidence. And Slack itself has already been hacked. In February 2015, Slack was the victim of a four-day cyber attack, which the company made public in March. Officially, the unauthorized access was limited to information on the users' profiles. It is impossible to measure exactly what and who was impacted by this attack. In a recent announcement, Yahoo confessed that 3 billion accounts (you read that right: 3 billion) were compromised in late 2014!

Yahoo, the company which suffered the largest recorded cyberattack in terms of the number of compromised accounts

Officially, Slack stated that "No financial or payment information was accessed or compromised in this attack." Which is, by far, the least interesting of all the data stored within Slack! With company internal communication indexed, sometimes from the very beginning of said company, and searchable, Slack may be a potential target for cybercriminals who are not looking for its users' financial credentials but rather for their internal data, already in a usable format. One can imagine Slack would have to give out information about a massive data leak, which couldn't be ignored. But what would happen if only one Slack user were the victim of such a leak?

The Free Alternative Solutions

As we demonstrated above, companies need to find an alternative solution to Slack, one they can host themselves, to reduce data leaks, industrial espionage, and dependency on their Internet connection. Luckily, Slack's success created its own copycats, some of them also being free software. Rocket.Chat is one of them. Its comprehensive service offers chat rooms, direct messages and file sharing, but also videoconferencing and screen sharing, and even more features: check their dedicated page. You can also try an online demo. Moreover, Rocket.Chat has a very simple extension system and an API. Mattermost is another service, which has the advantages of being close to and compatible with Slack. It offers numerous features, including the main ones expected from this type of software. It also offers numerous apps and plug-ins to interact with online services, software forges, and continuous integration tools.

It works

In the introduction, we discussed the "it works" effect, usually invoked to dispel any arguments about data protection and exchange confidentiality such as the ones we discussed in this article. True, a single developer can ask: why worry about it? All I want is to chat with my colleagues and send files! Because subscribing to the Slack service puts the company continuously at risk in the long term. Maybe it's not the employees' place to worry about it; they just have to do their job as efficiently as possible. On the other hand, the company's management, usually non-technical, may not be aware of the risks this technical choice poses to their company. The technical management may pretend to be omniscient, but nobody is fooled. Either someone from management will ask the right question (where is our data and who can access it?) or someone from the technical side will officially alert them to these problems. It is this technical audience, even if not always heard by their management, which is the target of this article. May they find in it the right arguments to be convincing. We hope that the points we developed in this article will help you make the right choice.

About Me

Carl Chenet, Free Software Indie Hacker, founder of the French-speaking Hacker News-like Journal du hacker. Follow me on social networks. Translated from French by Stéphanie Chaptal. Original article written in October 2016.

25 September 2017

Chris Lamb: Lintian: We are all Perl developers now

Lintian is a static analysis tool for Debian packages, reporting on various errors, omissions and general quality-assurance issues to maintainers. I've previously written about my exploits with Lintian as well as authoring a short tutorial on how to write your own Lintian check. Anyway, I recently uploaded version 2.5.53, about two months after the previous release. The biggest changes you may notice are support for the latest version of the Debian Policy as well as the addition of checks to encourage the migration to Python 3. Thanks to all who contributed patches, code review and bug reports to this release. The full changelog is as follows:
lintian (2.5.53) unstable; urgency=medium
  The "we are all Perl developers now" release.
  * Summary of tag changes:
    + Added:
      - alternatively-build-depends-on-python-sphinx-and-python3-sphinx
      - build-depends-on-python-sphinx-only
      - dependency-on-python-version-marked-for-end-of-life
      - maintainer-script-interpreter
      - missing-call-to-dpkg-maintscript-helper
      - node-package-install-in-nodejs-rootdir
      - override-file-in-wrong-package
      - package-installs-java-bytecode
      - python-foo-but-no-python3-foo
      - script-needs-depends-on-sensible-utils
      - script-uses-deprecated-nodejs-location
      - transitional-package-should-be-oldlibs-optional
      - unnecessary-testsuite-autopkgtest-header
      - vcs-browser-links-to-empty-view
    + Removed:
      - debug-package-should-be-priority-extra
      - missing-classpath
      - transitional-package-should-be-oldlibs-extra
  * checks/apache2.pm:
    + [CL] Fix an apache2-unparsable-dependency false positive by allowing
      periods (".") in dependency names.  (Closes: #873701)
  * checks/binaries.pm:
    + [CL] Apply patches from Guillem Jover & Boud Roukema to improve the
      description of the binary-file-built-without-LFS-support tag.
      (Closes: #874078)
  * checks/changes.{pm,desc}:
    + [CL] Ignore DFSG-repacked packages when checking for upstream
      source tarball signatures as they will never match by definition.
      (Closes: #871957)
    + [CL] Downgrade severity of orig-tarball-missing-upstream-signature
      from "E:" to "W:" as many common tools do not make including the
      signatures easy enough right now.  (Closes: #870722, #870069)
    + [CL] Expand the explanation of the
      orig-tarball-missing-upstream-signature tag to include the location
      of where dpkg-source will look. Thanks to Theodore Ts'o for the
      suggestion.
  * checks/copyright-file.pm:
    + [CL] Address a number of issues in copyright-year-in-future:
      - Prevent false positives in port numbers, email addresses, ISO
        standard numbers and matching specific and general street
        addresses.  (Closes: #869788)
      - Match all violating years in a line, not just the first (eg.
        "2000-2107").
      - Ignore meta copyright statements such as "Original Author". Thanks
        to Thorsten Alteholz for the bug report.  (Closes: #873323)
      - Expand testsuite.
  * checks/cruft.{pm,desc}:
    + [CL] Downgrade severity of file-contains-fixme-placeholder
      tag from "important" (ie. "E:") to "wishlist" (ie. "I:").
      Thanks to Gregor Herrmann for the suggestion.
    + [CL] Apply patch from Alex Muntada (alexm) to use "substr" instead
      of "substring" in mentions-deprecated-usr-lib-perl5-directory's
      description.  (Closes: #871767)
    + [CL] Don't check copyright_hints file for FIXME placeholders.
      (Closes: #872843)
    + [CL] Don't match quoted "FIXME" variants as they are almost always
      deliberate. Thanks to Adrian Bunk for the report.  (Closes: #870199)
    + [CL] Avoid false positives in missing source checks for "CSS Browser
      Selector".  (Closes: #874381)
  * checks/debhelper.pm:
    + [CL] Prevent a false positive of
      missing-build-dependency-for-dh_-command that can be exposed by
      following the advice for the recently added
      useless-autoreconf-build-depends tag.  (Closes: #869541)
  * checks/debian-readme.{pm,desc}:
    + [CL] Ensure readme-debian-contains-debmake-template also checks
      for templates "Automatically generated by debmake".
  * checks/description.{desc,pm}:
    + [CL] Clarify explanation of description-starts-with-leading-spaces
      tag. Thanks to Taylor Kline  for the report
      and patch.  (Closes: #849622)
    + [NT] Skip capitalization-error-in-description-synopsis for
      auto-generated packages (such as dbgsym packages).
  * checks/fields.{desc,pm}:
    + [CL] Ensure that python3-foo packages have "Section: python", not
      just python2-foo.  (Closes: #870272)
    + [RG] Do no longer require debug packages to be priority extra.
    + [BR] Use Lintian::Data for name/section mapping
    + [CL] Check for packages including "?rev=0&sc=0" in Vcs-Browser.
      (Closes: #681713)
    + [NT] Transitional packages should now be "oldlibs/optional" rather
      than "oldlibs/extra".  The related tag has been renamed accordingly.
  * checks/filename-length.pm:
    + [NT] Skip the check on auto-generated binary packages (such as
      dbgsym packages).
  * checks/files.{pm,desc}:
    + [BR] Avoid privacy-breach-generic false positives for legal.xml.
    + [BR] Detect install of node package under /usr/lib/nodejs/[^/]*$
    + [CL] Check for packages shipping compiled Java class files. Thanks
      Carnë Draug.  (Closes: #873211)
    + [BR] Privacy breach is no longer experimental.
  * checks/init.d.desc:
    + [RG] Do not recommend a versioned dependency on lsb-base in
      init.d-script-needs-depends-on-lsb-base.  (Closes: #847144)
  * checks/java.pm:
    + [CL] Additionally consider .cljc files as code to avoid false-
      positive codeless-jar warnings.  (Closes: #870649)
    + [CL] Drop problematic missing-classpath check.  (Closes: #857123)
  * checks/menu-format.desc:
    + [CL] Prevent false positives in desktop-entry-lacks-keywords-entry
      for "Link" and "Directory" .desktop files.  (Closes: #873702)
  * checks/python.{pm,desc}:
    + [CL] Split out Python checks from "scripts" check to a new, source
      check of type "source".
    + [CL] Check for python-foo without corresponding python3-foo packages
      to assist in Python 2.x deprecation.  (Closes: #870681)
    + [CL] Check for packages that Build-Depend on python-sphinx only.
      (Closes: #870730)
    + [CL] Check for packages that alternatively Build-Depend on the
      Python 2 and Python 3 versions of Sphinx.  (Closes: #870758)
    + [CL] Check for binary packages that depend on Python 2.x.
      (Closes: #870822)
  * checks/scripts.pm:
    + [CL] Correct false positives in
      unconditional-use-of-dpkg-statoverride by detecting "if !" as a
      valid shell prefix.  (Closes: #869587)
    + [CL] Check for missing calls to dpkg-maintscript-helper(1) in
      maintainer scripts.  (Closes: #872042)
    + [CL] Check for packages using sensible-utils without declaring a
      dependency after its split from debianutils.  (Closes: #872611)
    + [CL] Warn about scripts using "nodejs" as an interpreter now that
      nodejs provides /usr/bin/node.  (Closes: #873096)
    + [BR] Add a statistic tag giving interpreter.
  * checks/testsuite.{desc,pm}:
    + [CL] Remove recommendations to add a "Testsuite: autopkgtest" field
      to debian/control as it is added when needed by dpkg-source(1)
      since dpkg 1.17.1.  (Closes: #865531)
    + [CL] Warn if we see an unnecessary "Testsuite: autopkgtest" header
      in debian/control.
    + [NT] Recognise "autopkgtest-pkg-go" as a valid test suite.
    + [CL] Recognise "autopkgtest-pkg-elpa" as a valid test suite.
      (Closes: #873458)
    + [CL] Recognise "autopkgtest-pkg-octave" as a valid test suite.
      (Closes: #875985)
    + [CL] Update the description of unknown-testsuite to reflect that
      "autopkgtest" is not the only valid value; the referenced URL
      is out-of-date (filed as #876008).  (Closes: #876003)
  * data/binaries/embedded-libs:
    + [RG] Detect embedded copies of heimdal, libgxps, libquicktime,
      libsass, libytnef, and taglib.
    + [RG] Use an additional string to detect embedded copies of
      openjpeg2.  (Closes: #762956)
  * data/fields/name_section_mappings:
    + [BR] node- package section is javascript.
    + [CL] Apply patch from Guillem Jover to add more section mappings.
      (Closes: #874121)
  * data/fields/obsolete-packages:
    + [MR] Add dh-systemd.  (Closes: #872076)
  * data/fields/perl-provides:
    + [CL] Refresh perl provides.
  * data/fields/virtual-packages:
    + [CL] Update data file from archive. This fixes a false positive for
      "bacula-director".  (Closes: #835120)
  * data/files/obsolete-paths:
    + [CL] Add note to /etc/bash_completion.d entry regarding stricter
      filename requirements.  (Closes: #814599)
  * data/files/privacy-breaker-websites:
    + [BR] Detect custom donation logos like apache.
    + [BR] Detect generic counter website.
  * data/standards-version/release-dates:
    + [CL] Add 4.0.1 and 4.1.0 as known standards versions.
      (Closes: #875509)
  * debian/control:
    + [CL] Mention Debian Policy v4.1.0 in the description.
    + [CL] Add myself to Uploaders.
    + [CL] Drop unnecessary "Testsuite: autopkgtest"; this is implied from
      debian/tests/control existing.
  * commands/info.pm:
    + [CL] Add a --list-tags option to print all tags Lintian knows about.
      Thanks to Rajendra Gokhale for the suggestion.  (Closes: #779675)
  * commands/lintian.pm:
    + [CL] Apply patch from Maia Everett to avoid British spelling when
      using en_US locale.  (Closes: #868897)
  * lib/Lintian/Check.pm:
    + [CL] Stop emitting {maintainer,uploader}-address-causes-mail-loops
      for @packages.debian.org addresses.  (Closes: #871575)
  * lib/Lintian/Collect/Binary.pm:
    + [NT] Introduce an "auto-generated" argument for "is_pkg_class".
  * lib/Lintian/Data.pm:
    + [CL] Modify Lintian::Data's "all" to always return keys in insertion
      order, dropping dependency on libtie-ixhash-perl.
  * helpers/coll/objdump-info-helper:
    + [CL] Apply patch from Steve Langasek to accommodate binutils 2.29
      outputting symbols in a different format on ppc64el.
      (Closes: #869750)
  * t/tests/fields-perl-provides/tags:
    + [CL] Update expected output to match new Perl provides.
  * t/tests/files-privacybreach/*:
    + [CL] Add explicit test for packages including external fonts via
      the Google Font API. Thanks to Ian Jackson for the report.
      (Closes: #873434)
    + [CL] Add explicit test for packages including external fonts via
      the Typekit API via <script/> HTML tags.
  * t/tests/*/desc:
    + [CL] Add missing entries in "Test-For" fields to make
      development/testing workflow less error-prone.
  * private/generate-tag-summary:
    + [CL] git-describe(1) will usually emit 7 hexadecimal digits as the
      abbreviated object name. However, as this can be user-dependent,
      pass --abbrev=0 to ensure it does not vary between systems.  This
      also means we do not need to strip it ourselves.
  * private/refresh-*:
    + [CL] Use deb.debian.org as the default mirror.
    + [CL] Update locations of Contents-<arch> files; they are now
      namespaced by distribution (eg. "main").
 -- Chris Lamb <lamby@debian.org>  Wed, 20 Sep 2017 09:25:06 +0100

18 September 2017

Carl Chenet: The Github threat

Many voices rise now and then against the risks linked to the use of Github by Free Software projects. Yet the infatuation with the collaborative forge of the Octocat's Californian start-up doesn't seem to fade away.

In recent years, Github and its services have taken an important role in software engineering, as they are seen as easy to use and efficient for a daily workload, with interesting functions for an enterprise collaborative workflow or within a Free Software project. What are the arguments against using its services, and are they valid? We will list them first, then we'll examine their validity.

1. Critical points

1.1 Centralization

The Github application belongs to a single entity, Github Inc., a US company which manages it alone. So a single company under US legislation manages access to most Free Software source code, which may be a problem for the groups using it when source code is no longer available, for political or technical reasons.

The Octocat, the Github mascot

This centralization leads to another problem: as Github has reached critical mass, it becomes more and more difficult not to have a Github account. People who don't use Github, by choice or not, are becoming a silent minority. It is now fashionable to use Github, and not doing so is seen as "out of date". The same phenomenon is a classic, and even the norm, for proprietary social networks (Facebook, Twitter, Instagram).

1.2 Proprietary Software

When you interact with Github, you are using proprietary software, with no access to its source code, which may not work the way you think it does. It is a problem at different levels: first ideologically, but foremost in practice. In the Github case, we send them code we can control outside of their interface. We also send them personal information (profile, Github interactions). And above all, Github forces any project which goes through the US platform to use a crucial proprietary tool: its bug tracking system.

Windows, the epitome of proprietary software, even if others took the same path

1.3 Uniformization

Working with the Github interface seems easy and intuitive to most. Lots of companies now use it as a source repository, and many developers leaving a company find the same Github working environment at the next one. This pervasive presence of Github in the free software development environment is part of the uniformization of developers' working space.

Uniforms always bring the Army to my mind; here, the Clone army

2. Cross-examining the critical points

2.1 Regarding centralization

2.1.1 Service availability rate

As said above, Github is nowadays the main repository of Free Software source code. As such, it is a favorite target for cyberattacks. DDoS attacks hit it in March and August 2015. On December 15, 2015, an outage led to the inaccessibility of 5% of the repositories. The same occurred on November 15. And these are only the incidents reported by Github itself. One can imagine that the real outage rate of the platform is underestimated.

2.1.2 Chain reaction could block Free Software development

Today many dependency management tools, such as npm for JavaScript, Bundler for Ruby, or even pip for Python, can fetch an application's source code directly from Github. With Free Software projects getting more and more linked and codependent, if one component is down, the whole development process stops.

One of the best examples is the npmgate. Any company could legally demand that Github take down some source code from its repository, which could create a chain reaction and block the development of many Free Software projects, as the Node.js community suffered from the decisions of npm, Inc., the company managing npm.

2.2 A historical precedent: SourceForge

Github didn't appear out of the blue. In its time, its predecessor, SourceForge, was also extremely popular.

Heavily centralized and based on strong interaction with the community, SourceForge is now seen as an aging SaaS (Software as a Service) and sees most of its customers fleeing to Github, which creates lots of hurdles for those who stayed. The Gimp project suffered from spam and terrible advertising, which led to the departure of the VLC project, and then from installers corrupted with adware in place of the official Gimp installer for Windows. And finally, the Gimp project's SourceForge account was hacked by the SourceForge team itself!

These are very recent examples of what a commercial entity can do when it is under pressure from its stakeholders. It is vital to really understand what it means to trust them with the centralization of data and exchanges, as it could have tremendous repercussions on the day-to-day life and habits of the Free Software and open source community.

2.3 Regarding proprietary software

2.3.1 One community, several opinions on proprietary software

Mostly based on ideology, this point deals with the definition each member of the community gives to Free Software and open source. It is mostly about one thing: is it viral or not? Or, GPL vs. MIT/BSD.

Those on the side of viral Free Software will have trouble using proprietary software, as the latter shouldn't even exist. It must be assimilated, to quote Star Trek, as it is a connected black box that endangers privacy, corrupts our usage for profit, restrains our freedom to use what we own as we please, etc.

Those on the side of complete freedom have no qualms about using proprietary software, as its very existence is a consequence of freedom without restriction. They even accept that code they developed may become part of proprietary software, which is quite a common occurrence. This part of the Free Software community has no qualms about using Github, which is well within their ideological parameters. Just take a look at the Janson amphitheatre during FOSDEM and check how many Apple laptops running macOS are around.

FreeBSD, the main BSD project under the BSD license

2.3.2 Data loss and data restrictions linked to proprietary software use

Even without ideological considerations, and just focusing on the Github infrastructure, the bug tracking system is a major problem by itself.

Bug reports build the memory of Free Software projects. They are the entry point for new contributors, the place to find bug reports, requests for new features, etc. The project history can't be limited to the code alone. It's very common to find bug reports when you copy and paste an error message into a search engine. Their historical importance is precious not only for the project itself, but also for its present and future users.

Github gives the ability to extract bug reports through its API. What would happen if Github went down or if the platform no longer supported this feature? In my opinion, not that many projects have ever thought of this outcome. How could they move all the data generated inside Github into a new bug tracking system? One example, old by now, is Astrid, a to-do list application bought by Yahoo a few years ago. Very popular, it grew fast until it was closed overnight, with only a few weeks for its users to extract their data. And it was only a to-do list. The same situation with Github would be tremendously difficult to manage for several projects, if they even have the ability to deal with it. Code would still be available and could still live somewhere else, but the project memory would be lost. A project like Debian has today more than 800,000 bug reports, which are a treasure trove of data about problems solved, feature requests and where development stands on each one. The developers of the CPython project have anticipated the problem and decided not to use Github's bug tracking system.

Issues, the Github proprietary bug tracking system

Another thing we could lose if Github suddenly disappeared: all the work currently done on pull requests (aka PRs). This Github function gives you the ability to clone a project's Github repository, modify it to fit your needs, then offer your own modifications to the original repository. The original repository's owner will then review said modifications and, if he or she agrees with them, merge them into the original repository. As such, it's one of the main attractions of Github, since it can be done easily through its graphical interface.

However, reviewing all the PRs may take quite a long time, and most successful projects have several ongoing PRs. And these PRs and/or the proprietary bug tracking system are commonly used as a platform for comments and discussion between developers.

The code itself is not lost if Github goes down (except in one specific situation, seen below), but the peer review work materialized in the PRs and the bug tracking system is lost. Let's remember that the PR mechanism lets you clone and modify projects and then generate PRs directly from its proprietary web interface, without downloading a single line of code to your computer. In this particular case, if Github goes down, all the code and work in progress is lost. Some also use Github as a bookmarking place: they follow their favorite projects' activity through the Watch function. This style of technology watch would also be lost if Github goes down.

Debian, one of the main Free Software projects with at least a thousand official contributors

2.4 Uniformization

The Free Software community is walking a tightrope between the standardization needed for easier interoperability between its products and an attraction to novelty, driven by a strong need to differentiate from what is already there.

Github popularized the use of Git, a great tool now used across various sectors far away from its original programming field. Step by step, Git has become so prominent that it's almost impossible to even think of another source control manager, even if awesome alternative solutions, unfortunately not as popular, exist, such as Mercurial.

A new Free Software project is now a Git repository on Github with a README.md added as a quick description. All other solutions are ostracized. How? Few or no potential contributors will notice such projects. It now seems very difficult to get potential contributors to learn a new source control manager AND a new forge for every project they want to contribute to, which was a basic requirement a few years ago. It's quite sad, because Github, by offering one particular experience to its users, cuts them off from a whole realm of possibilities. Maybe Github is one of the best web version control systems. But being the main one doesn't leave room for a new competitor to grow. And it lets Github initiate development newcomers into a narrow feature set, totally unrelated to the strength of the Git tool itself.

3. Centralization, uniformization, proprietary software... what's next? Laziness?

The fight against centralization is a main part of the Free Software ideology, as centralization strengthens the power of those who manage it and who, through it, control those who are managed by it. The allergy to uniformization, born against the main software companies and their wish to impose a closed, commercial software world, was for a long time the main fuel for the thirst for innovation and the development of intelligent alternatives. As we said above, part of the Free Software community was built as a reaction to proprietary software and its threat. The other part, without hoping for its disappearance, still chose a development model opposite to that of proprietary software, at least in the beginning, as there are now more and more bridges between the two.

The Github effect is a morbid one because of its consequences: at the very least centralization, uniformization, and the use of proprietary software as its bug tracking system. But some years ago, the "Dear Github" buzz showed one more side effect, one I had never thought about: laziness. For those who don't know what it is about, this letter is a complaint from spokespersons of several Free Software projects demanding that the Github team finally implement, after years of polite asking, new features. Since when do Free Software projects facing a roadblock ask for clemency instead of building the path they need themselves? When Torvalds was involved in the BitKeeper problem and the Linux kernel development team could no longer use their revision control software, he developed Git. The mere fact of not being able to use a tool, or of features lacking, is the main motivation to seek alternative solutions and, as such, of the Free Software movement. Every Free Software community member able to code should have this reflex. You don't like what Github offers? Switch to Gitlab. You don't like Gitlab? Improve it or make your own solution.

The Gitlab logo

Let's be crystal clear: I've never said that every blocked Free Software developer should code his or her own alternative. We all have our own priorities, and some of us even like our beauty sleep, me included. But seeing that this open letter to Github has 1,340 names attached to it, among them spokespersons for major Free Software projects, showed me that the need, willpower and strength to code a replacement are there. Maybe said replacement will be born from this letter; that would be the best outcome of this buzz.

In the end, Github usage is just another example of the massification of Internet usage. Just as Internet users flock to massively centralized social networks such as Facebook or Twitter, developers are following the same path with Github. Even if a large fraction of developers realize the threat linked to this centralized and proprietary organization, the whole community is following this centralization and uniformization trend. The Github service is useful, free or reasonably priced (depending on the features you need), easy to use, and up most of the time. Why would we try something else? Maybe because others are using us while we are savoring the convenience? The Free Software community seems quite sleepy to me.

The lion enjoying the warmth of the hearth

About Me
Carl Chenet, Free Software Indie Hacker, founder of the French-speaking Hacker News-like Journal du hacker. Follow me on social networks. Translated from French by Stéphanie Chaptal. Original article written in 2015.

18 June 2017

Hideki Yamane: Debian9 release party in Tokyo

We celebrated the Debian 9 "stretch" release in Tokyo (thanks to Cybozu, Inc. for the venue).

We enjoyed beer, wine, sake, soft drinks, pizza, sandwiches, snacks and cake & coffee (a Nicaraguan one; it reminded me of DebConf12 :)

13 April 2017

Antoine Beaupré: New approaches to network fast paths

With the speed of network hardware now reaching 100 Gbps and distributed denial-of-service (DDoS) attacks going in the Tbps range, Linux kernel developers are scrambling to optimize key network paths in the kernel to keep up. Many efforts are actually geared toward getting traffic out of the costly Linux TCP stack. We have already covered the XDP (eXpress Data Path) patch set, but two new ideas surfaced during the Netconf and Netdev conferences held in Toronto and Montreal in early April 2017. One is a patch set called af_packet, which aims at extracting raw packets from the kernel as fast as possible; the other is the idea of implementing in-kernel layer-7 proxying. There are also user-space network stacks like Netmap, DPDK, or Snabb (which we previously covered). This article aims at clarifying what all those components do and to provide a short status update for the tools we have already covered. We will focus on in-kernel solutions for now. Indeed, user-space tools have a fundamental limitation: if they need to re-inject packets onto the network, they must again pay the expensive cost of crossing the kernel barrier. User-space performance is effectively bounded by that fundamental design. So we'll focus on kernel solutions here. We will start from the lowest part of the stack, the af_packet patch set, and work our way up the stack all the way up to layer-7 and in-kernel proxying.

af_packet v4 John Fastabend presented a new version of a patch set that was first published in January regarding the af_packet protocol family, which is currently used by tcpdump to extract packets from network interfaces. The goal of this change is to allow zero-copy transfers between user-space applications and the NIC (network interface card) transmit and receive ring buffers. Such optimizations are useful for telecommunications companies, which may use it for deep packet inspection or running exotic protocols in user space. Another use case is running a high-performance intrusion detection system that needs to watch large traffic streams in realtime to catch certain types of attacks. Fastabend presented his work during the Netdev network-performance workshop, but also brought the patch set up for discussion during Netconf. There, he said he could achieve line-rate extraction (and injection) of packets, with packet rates as high as 30Mpps. This performance gain is possible because user-space pages are directly DMA-mapped to the NIC, which is also a security concern. The other downside of this approach is that a complete pair of ring buffers needs to be dedicated for this purpose; whereas before packets were copied to user space, now they are memory-mapped, so the user-space side needs to process those packets quickly otherwise they are simply dropped. Furthermore, it's an "all or nothing" approach; while NIC-level classifiers could be used to steer part of the traffic to a specific queue, once traffic hits that queue, it is only accessible through the af_packet interface and not the rest of the regular stack. If done correctly, however, this could actually improve the way user-space stacks access those packets, providing projects like DPDK a safer way to share pages with the NIC, because it is well defined and kernel-controlled. According to Jesper Dangaard Brouer (during review of this article):
This proposal will be a safer way to share raw packet data between user space and kernel space than what DPDK is doing, [by providing] a cleaner separation as we keep driver code in the kernel where it belongs.
During the Netdev network-performance workshop, Fastabend asked if there was a better data structure to use for such a purpose. The goal here is to provide a consistent interface to user space regardless of the driver or hardware used to extract packets from the wire. af_packet currently defines its own packet format that abstracts away the NIC-specific details, but there are other possible formats. For example, someone in the audience proposed the virtio packet format. Alexei Starovoitov rejected this idea because af_packet is a kernel-specific facility while virtio has its own separate specification with its own requirements. The next step for af_packet is the posting of the new "v4" patch set, although Miller warned that this wouldn't get merged until proper XDP support lands in the Intel drivers. The concern, of course, is that the kernel would have multiple incomplete bypass solutions available at once. Hopefully, Fastabend will present the (by then) merged patch set at the next Netdev conference in November.
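To make the discussion more concrete, here is a minimal sketch (my own, not code from the patch set) of the ring-buffer interface af_packet already offers today: a memory-mapped TPACKET_V3 receive ring, which still copies packets into the shared pages. The proposed v4 work would replace that copy with direct DMA mapping of user-space pages; its API is unmerged and may differ, so the names, sizes and build steps below are illustrative assumptions only, and the program needs CAP_NET_RAW (root) to run.

    /* tpacket_v3_ring.c -- sketch: map an AF_PACKET TPACKET_V3 receive ring.
     * This is the existing copy-based ring, not the proposed zero-copy v4 API. */
    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <sys/mman.h>
    #include <linux/if_ether.h>
    #include <linux/if_packet.h>

    int main(void)
    {
        /* Raw packet socket: sees every frame on the wire, needs CAP_NET_RAW. */
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) { perror("socket"); return 1; }

        /* Select the TPACKET_V3 memory-mapped ring format. */
        int version = TPACKET_V3;
        if (setsockopt(fd, SOL_PACKET, PACKET_VERSION, &version, sizeof(version)) < 0) {
            perror("PACKET_VERSION"); return 1;
        }

        /* Ring geometry (illustrative): 8 blocks of 1 MiB, 2 KiB frames,
         * and a 60 ms timeout before a partially filled block is retired. */
        struct tpacket_req3 req = {
            .tp_block_size     = 1 << 20,
            .tp_block_nr       = 8,
            .tp_frame_size     = 1 << 11,
            .tp_frame_nr       = ((1 << 20) / (1 << 11)) * 8,
            .tp_retire_blk_tov = 60,
        };
        if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req)) < 0) {
            perror("PACKET_RX_RING"); return 1;
        }

        /* Map the ring: the kernel copies received frames into these pages and
         * user space walks them without any further system calls. */
        size_t len = (size_t)req.tp_block_size * req.tp_block_nr;
        void *ring = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (ring == MAP_FAILED) { perror("mmap"); return 1; }

        printf("mapped a %zu-byte AF_PACKET receive ring\n", len);
        munmap(ring, len);
        close(fd);
        return 0;
    }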

XDP updates Higher up in the networking stack sits XDP. The af_packet feature differs from XDP in that it does not perform any sort of analysis or mangling of packets; its objective is purely to get the data into and out of the kernel as fast as possible, completely bypassing the regular kernel networking stack. XDP also sits before the networking stack except that, according to Brouer, it is "focused on cooperating with the existing network stack infrastructure, and on use-cases where the packet doesn't necessarily need to leave kernel space (like routing and bridging, or skipping complex code-paths)." XDP has evolved quite a bit since we last covered it in LWN. It seems that most of the controversy surrounding the introduction of XDP in the Linux kernel has died down in public discussions, under the leadership of David Miller, who heralded XDP as the right solution for a long-term architecture in the kernel. He presented XDP as a fast, flexible, and safe solution. Indeed, one of the controversies surrounding XDP was the question of the inherent security challenges with introducing user-provided programs directly into the Linux kernel to mangle packets at such a low level. Miller argued that whatever protections are expected for user-space programs also apply to XDP programs, comparing the virtual memory protections to the eBPF (extended BPF) verifier applied to XDP programs. Those programs are actually eBPF that have an interesting set of restrictions:
  • they have a limited size
  • they cannot jump backward (and thus cannot loop), so they execute in predictable time
  • they do only static allocation, so they are also limited in memory
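As an illustration of those constraints, here is a minimal sketch of an XDP program. It is my own example, not code from the article; it assumes a clang build with -target bpf against the kernel UAPI headers, and the drop-all-UDP policy is purely arbitrary.

    /* xdp_drop_udp.c -- illustrative XDP program (not from the article).
     * Build (assumption):  clang -O2 -target bpf -c xdp_drop_udp.c -o xdp_drop_udp.o
     * Attach (assumption): ip link set dev eth0 xdp obj xdp_drop_udp.o sec xdp
     */
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <linux/in.h>

    #define SEC(name) __attribute__((section(name), used))

    SEC("xdp")
    int xdp_drop_udp(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        /* Explicit bounds checks: the eBPF verifier rejects the program at
         * load time if any packet access is not provably within the buffer. */
        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end)
            return XDP_PASS;
        if (eth->h_proto != __constant_htons(ETH_P_IP))
            return XDP_PASS;

        struct iphdr *ip = (void *)(eth + 1);
        if ((void *)(ip + 1) > data_end)
            return XDP_PASS;

        /* Drop UDP at the driver, before the regular stack ever sees it. */
        if (ip->protocol == IPPROTO_UDP)
            return XDP_DROP;

        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";

The bounds checks are exactly what the verifier demands before it accepts the pointer arithmetic, and the whole program runs in bounded time: no loops, no dynamic allocation.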
XDP is not a one-size-fits-all solution: netfilter, the TC traffic shaper, and other normal Linux utilities still have their place. There is, however, a clear use case for a solution like XDP in the kernel. For example, Facebook and Cloudflare have both started testing XDP and, in Facebook's case, deploying XDP in production. Martin Kafai Lau, from Facebook, presented the tool set the company is using to construct a DDoS-resilience solution and a level-4 load balancer (L4LB), which got a ten-times performance improvement over the previous IPVS-based solution. Facebook rolled out its own user-space solution called "Droplet" to detect hostile traffic and deploy blocking rules in the form of eBPF programs loaded in XDP. Lau demonstrated the way Facebook deploys a three-part chained eBPF program: the first part allows debugging and dumping of packets, the second is Droplet itself, which drops undesirable traffic, and the last segment is the load balancer, which mangles the packets to tweak their destination according to internal rules. Droplet can drop DDoS attacks at line rate while keeping the architecture flexible, which were two key design requirements. Gilberto Bertin, from Cloudflare, presented a similar approach: Cloudflare has a tool that processes sFlow data generated from iptables in order to generate cBPF (classic BPF) mitigation rules that are then deployed on edge routers. Those rules are created with a tool called bpfgen, part of Cloudflare's BSD-licensed bpftools suite. For example, it could create a cBPF bytecode blob that would match DNS queries to any example.com domain with something like:
    bpfgen dns *.example.com
Originally, Cloudflare would deploy those rules to plain iptables firewalls with the xt_bpf module, but this led to performance issues. It then deployed a proprietary user-space solution based on Solarflare hardware, but this has the performance limitations of user-space applications: getting packets back onto the wire involves the cost of re-injecting packets back into the kernel. This is why Cloudflare is experimenting with XDP, which was partly developed in response to the company's problems, to deploy those BPF programs. A concern that Bertin identified was the lack of visibility into dropped packets. Cloudflare currently samples some of the dropped traffic to analyze attacks; this is not currently possible with XDP unless you pass the packets down the stack, which is expensive. Miller agreed that the lack of monitoring for XDP programs is a large issue that needs to be resolved, and suggested creating a way to mark packets for extraction to allow analysis. Cloudflare is currently in a testing phase with XDP and it is unclear if its whole XDP tool chain will be publicly available. While those two companies are starting to use XDP as-is, there is more work needed to complete the XDP project. As mentioned above and in our previous coverage, massive statistics extraction is still limited in the Linux kernel and introspection is difficult. Furthermore, while the existing actions (XDP_DROP and XDP_TX, see the documentation for more information) are well implemented and used, another action may be introduced, called XDP_REDIRECT, which would allow redirecting packets to different network interfaces. Such an action could also be used to accelerate bridges as packets could be "switched" based on the MAC address table. XDP also requires network driver support, which is currently limited. For example, the Intel drivers still do not support XDP, although that should come pretty soon. Miller, in his Netdev keynote, focused on XDP and presented it as the standard solution that is safe, fast, and usable. He identified the next steps of XDP development to be the addition of debugging mechanisms, better sampling tools for statistics and analysis, and user-space consistency. Miller foresees a future for XDP similar to the popularization of the Arduino chips: a simple set of tools that anyone, not just developers, can use. He gave the example of an Arduino tutorial that he followed where he could just look up a part number and get easy-to-use instructions on how to program it. Similar components should be available for XDP. For this purpose, the conference saw the creation of a new mailing list called xdp-newbies where people can learn how to create XDP build environments and how to write XDP programs.

In-kernel layer-7 proxying The third approach that struck me as innovative is the idea of doing layer-7 (application) proxying directly in the kernel. This comes from the idea that, traditionally, we build firewalls to segregate traffic and apply controls, but as most services move to HTTP, those policies become ineffective. Thomas Graf presented this idea during Netconf using a Star Wars allegory: what if the Death Star were a server with an API? You would have endpoints like /dock or /comms that would allow you to dock a ship or communicate with the Death Star. Those API endpoints should obviously be public, but then there is this /exhaust-port endpoint that should never be publicly available. In order for a firewall to protect such a system, it must be able to inspect traffic at a higher level than the traditional address-port pairs. Graf presented a design where the kernel would create an in-kernel socket that would negotiate TCP connections on behalf of user space and then be able to apply arbitrary eBPF rules in the kernel.
Graf's design of in-kernel proxying
In this scenario, instead of doing the traditional transfer from Netfilter's TPROXY to user space, the kernel directly decapsulates the HTTP traffic and passes it to BPF rules that can make decisions without doing expensive context switches or memory copies in the case of simply wanting to refuse traffic (e.g. issue an HTTP 403 error). This, of course, requires the inclusion of kTLS to process HTTPS connections. HTTP2 support may also prove problematic, as it multiplexes connections and is harder to decapsulate. This design was described as a "pure pre-accept() hook". Starovoitov also compared the design to the kernel connection multiplexer (KCM). Tom Herbert, KCM's author, agreed that it could be extended to support this, but would require some extensions in user space to provide an interface between regular socket-based applications and the KCM layer. In any case, if the application does TLS (and lots of them do), kTLS gets tricky because it breaks the end-to-end nature of TLS, in effect becoming a man in the middle between the client and the application. Eric Dumazet argued that HA-Proxy already does things like this: it uses splice() to avoid copying too much data around, but it still does a context switch to hand over processing to user space, something that could be fixed in the general case. Another similar project that was presented at Netdev is the Tempesta firewall and reverse-proxy. The speaker, Alex Krizhanovsky, explained that the Tempesta developers have taken one person-month to port the mbed TLS stack to the Linux kernel to allow an in-kernel TLS handshake. Tempesta also implements rate limiting, cookies, and JavaScript challenges to mitigate DDoS attacks. The argument behind the project is that "it's easier to move TLS to the kernel than it is to move the TCP/IP stack to user space". Graf explained that he is familiar with Krizhanovsky's work and he is hoping to collaborate. In effect, the design Graf is working on would serve as a foundation for Krizhanovsky's in-kernel HTTP server (kHTTP). In a private email, Graf explained that:
The main differences in the implementation are currently that we foresee to use BPF for protocol parsing to avoid having to implement every single application protocol natively in the kernel. Tempesta likely sees this less of an issue as they are probably only targeting HTTP/1.1 and HTTP/2 and to some [extent] JavaScript.
Neither project is really ready for production yet. There didn't seem to be any significant pushback from key network developers against the idea, which surprised some people, so it is likely we will see more and more layer-7 intelligence move into the kernel sooner rather than later.
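For contrast, the user-space path that such an in-kernel design tries to short-circuit looks roughly like the following toy sketch: a Python filter that accepts each TCP connection, copies the request into user memory, and refuses a forbidden endpoint (the /exhaust-port path is just the Death Star example from above; the port and responses are made up for illustration). Every connection here pays the accept(), copy, and context-switch costs that the BPF-based approach wants to avoid for the simple "refuse this request" case:

#!/usr/bin/env python3
# Toy user-space layer-7 filter, shown only for contrast with the in-kernel
# design: every request is copied into user space and answered from there.
import socket

FORBIDDEN = {"/exhaust-port"}  # endpoints that must never be reachable

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))
srv.listen(16)

while True:
    conn, addr = srv.accept()      # one context switch and accept() per connection
    data = conn.recv(4096)         # copy of the request into user memory
    try:
        path = data.split(b" ", 2)[1].decode("ascii", "replace")
    except IndexError:
        path = "/"
    if path in FORBIDDEN:
        conn.sendall(b"HTTP/1.1 403 Forbidden\r\nContent-Length: 0\r\n\r\n")
    else:
        # A real proxy would now open a second socket to the backend and
        # shuffle bytes back and forth (more copies, more context switches).
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()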

Conclusion All of this work aims at replacing a rag-tag bunch of proprietary solutions that recently came up to bypass the Linux kernel TCP/IP stack and improve performance for firewalls, proxies, and other key edge network elements. The idea is that, unless the kernel improves its performance, or at least provides a way to bypass its more complex code paths, people will work around it. With this set of solutions in place, engineers will now be able to use standard APIs to hook high-performance systems into the Linux kernel.
The author would like to thank the Netdev and Netconf organizers for travel assistance, Thomas Graf for a review of the in-kernel proxying section of this article, and Jesper Dangaard Brouer for review of the af_packet and XDP sections. Note: this article first appeared in the Linux Weekly News.

24 March 2017

Gunnar Wolf: Dear lazyweb: How would you visualize..?

Dear lazyweb, I am trying to find a good way to present the categorization of several cases studied with a fitting graph. I am rating several vulnerabilities / failures according to James Cebula et al.'s paper, A Taxonomy of Operational Cyber Security Risks; this is a somewhat deep taxonomy, with 57 end items, organized in a three-level-deep hierarchy. Copying a table from the cited paper: My categorization is binary: I care only whether it falls within a given category or not. My first stab at this was to represent each case using a star or radar graph. As an example: As you can see, to a "bare" star graph, I added a background color for each top-level category (blue for actions of people, green for systems and technology failures, red for failed internal processes and gray for external events), and printed out only the labels for the second-level categories; for an accurate reading of the graphs, you have to refer to the table and count bars. And, yes, according to the Engineering Statistics Handbook:
Star plots are helpful for small-to-moderate-sized multivariate data sets. Their primary weakness is that their effectiveness is limited to data sets with less than a few hundred points. After that, they tend to be overwhelming.
I strongly agree with the above statement. And stating that "a few hundred points" can be understood is even an overstatement: 50 points are already too much. Now, trying to increase usability for this graph, I came across the Sunburst diagram. One of the proponents for this diagram, John Stasko, has written quite a bit about it. Now... How to create my beautiful Sunburst diagram? That's a tougher one. Even though the page I linked to in the (great!) Data visualization catalogue presents even some free-as-in-software tools to do this... They are JavaScript projects that will render their beautiful plots (even including an animation)... To the browser. I need them for a static (i.e. to be printed) document. Yes, I can screenshot and all, but I want them to be automatically generated, so I can review and regenerate them all automatically. Oh, I could just write JSON and use SaaS sites such as Aculocity to do the heavy lifting, but if you know me, you will understand why I don't want to. So... I set out to find a Gunnar-approved way to display the information I need. Now, as the Protovis documentation says, an icicle is simply a sunburst transformed from polar to Cartesian coordinates... But I came to a similar conclusion: the tools I found are not what I need. OK, but an icicle graph seems much simpler to produce. I fired up my Emacs and started writing using Ruby, RMagick and RVG... then I decided to try a different way. This is my result so far: So... What do you think? Does this look right to you? Clearer than the previous one? Worse? Do you have any idea on how I could make this better? Oh... You want to tell me there is something odd about it? Well, yes, of course! I still need to tweak it quite a bit. Would you believe me if I told you this is not really a left-to-right icicle graph, but rather a strangely formatted Graphviz non-directed graph using the dot formatter? I can assure you you don't want to look at my Graphviz sources... But in case you insist... Take them and laugh. Or cry. Of course, this file comes from a hand-crafted template, but has some autogenerated bits to it. I still have to tweak it quite a bit to correct several of its usability shortcomings, but at least it looks somewhat like what I want to achieve. Anyway, I started out by making a "dear lazyweb" question. So, here it goes: Do you think I'm using the right visualization for my data? Do you have any better suggestions, either of a graph or of a graph-generating tool? Thanks! [update] Thanks for the first pointer, Lazyweb! I found a beautiful solution; we will see if it is what I need or not (it is too space-greedy to be readable... But I will check it out more thoroughly). It lays out much better than anything I can spew out by myself. Writing it as a mindmap using TikZ directly from within LaTeX, I get the following result:
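For what it's worth, a static, printable icicle plot can also be generated with nothing more than matplotlib rectangles; the following rough sketch (the small tree below is an invented fragment for illustration, not the actual 57-item taxonomy from the paper) recursively lays out one column per hierarchy level and writes a PDF:

#!/usr/bin/env python3
# Rough sketch: draw a left-to-right icicle plot of a small hierarchy with
# plain matplotlib rectangles. The tree is an invented fragment, only meant
# to illustrate the layout; widths/heights are driven by the leaf count.
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

tree = ("operational risk", [
    ("actions of people", [("inadvertent", []), ("deliberate", [])]),
    ("systems failures", [("hardware", []), ("software", [])]),
    ("failed processes", [("controls", [])]),
])

def count_leaves(node):
    name, children = node
    return 1 if not children else sum(count_leaves(c) for c in children)

def draw(ax, node, depth, y0):
    name, children = node
    height = count_leaves(node)
    ax.add_patch(Rectangle((depth, y0), 1, height, fill=False))
    ax.text(depth + 0.05, y0 + height / 2, name, va="center", fontsize=8)
    for child in children:
        draw(ax, child, depth + 1, y0)
        y0 += count_leaves(child)

fig, ax = plt.subplots(figsize=(7, 4))
draw(ax, tree, 0, 0)
ax.set_xlim(0, 3)
ax.set_ylim(0, count_leaves(tree))
ax.axis("off")
fig.savefig("icicle.pdf")   # static output, suitable for a printed document

Scaling the rectangle heights by a per-category count instead of the leaf count would turn the same layout into a summary over all the studied cases.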

24 December 2016

Russ Allbery: Review: Warchild

Review: Warchild, by Karin Lowachee
Series: Warchild #1
Publisher: Warner Aspect
Copyright: April 2002
ISBN: 0-446-61077-1
Format: Mass market
Pages: 451
In a future world of deep space stations and starship trade routes, Jos Musey grew up on a merchant ship with a loving family and typical childhood companions. But, at the age of eight, his ship is taken by pirates and he is taken as a slave. That might have been the end of his story, but after a year of captivity he manages to escape during an alien attack on a distant frontier station. Jos then learns more than he ever expected to learn about the ongoing deep space war between the human military and the aliens and their human sympathizers. From both sides. Warchild feels so much like a collection of 1980s SF tropes that I'm a bit surprised it was published in 2002. Some of those have been part of SF since well before the 1980s: the coming-of-age story of a child in space, deep-space combat and merchant fleets, pirates, and sketchy stations. But when one adds the Japanese-inspired philosophy and combat training, with a bit of Karate Kid feel, plus the (oddly bolted on) cyberpunk "burndiving," this book feels deeply embedded in a specific generation of SF storytelling. That's not necessarily a drawback. I like some of those tropes. The martial arts training coupled with careful and patient psychology worked very well for me. It may be a bit stereotyped, but Lowachee is careful to never present it as Asian; it's an alien philosophy and environment, and although it happens to wear its influences on its sleeve, it makes no attempt to tie that to any particular human culture. And the philosophy and, more to the point, the approach Niko takes with Jos is exactly what Jos needs. That section of the book (the second) was by far my favorite. I wish the whole book had been like that. Unfortunately, it's not. The first part is a deeply uncomfortable account of Jos's capture and enslavement (with bonus implied pedophilia). It's thankfully the shortest section of the book, but it's an endless parade of horrors that I didn't enjoy reading. Lowachee took the stylistic choice of writing it in the second person, which is a literary trick that rarely works for me and didn't work here. I'm sure the goal is to make it feel more immediate, but I didn't need this scene to be more immediate, and second person always reads as awkward and forced. If the author writes characters well, I will identify with them, but if I feel like I'm being forced to identify with them, I just start getting irritated. The third part of the book goes in yet a different direction: military SF, complete with hazing, camaraderie, esprit de corps, and bloody combat, with an uncomfortable undertone of constant stress due to Jos's complex and dangerous position. I wanted this to be much shorter and wanted the book to return to the part that I really liked. Unfortunately, that's not to be; the tone of this section is the tone for the rest of the book. To be fair, it's better than I expected it to be, and Jos's recovery and coming-of-age continues in more subtle and more satisfying ways than at first it seemed like it would. But Lowachee complicates and largely breaks a recovery that I was hoping would proceed down a more peaceful path, and replaces a beautiful and interesting (if a bit stereotyped) environment with bog-standard military SF. If you like that sort of thing, there's a lot of that thing here, but I've read a lot of books with that setting and far fewer about an Asian-inspired martial alien philosophy. I think Warchild has a bit too much stuff going on and not enough recovery space. 
The cyberpunk angle probably gets developed more in later books of the series (the next book is Burndive, which is the name for cyberpunk hacking in this book), but it felt bolted on here. Jos's story has multiple false starts and complications, and Lowachee keeps pulling the rug out from under him again and again until both he and the reader go a bit numb. The ending mostly works, but it's a brutal resolution to the complex psychological situation Lowachee sets up. This book reminds me a bit of C.J. Cherryh in that the characters seem constantly stressed beyond their ability to cope. I wanted something a bit kinder and softer. Despite that, the psychology and the brief moments of understanding and light are compelling enough that I'm still tempted to read on in this series. The subsequent books follow other characters; maybe they'll be a bit less nasty to their protagonists. Followed by Burndive. Rating: 6 out of 10

7 December 2016

Jonas Meurer: On CVE-2016-4484, a (security)? bug in the cryptsetup initramfs integration

On CVE-2016-4484, a (security)? bug in the cryptsetup initramfs integration On November 4, I was made aware of a security vulnerability in the integration of cryptsetup into initramfs. The vulnerability was discovered by security researchers Hector Marco and Ismael Ripoll of CyberSecurity UPV Research Group and got CVE-2016-4484 assigned. In this post I'll try to reflect a bit on

What CVE-2016-4484 is all about Basically, the vulnerability is about two separate but related issues:

1. Initramfs rescue shell considered harmful The main topic that Hector Marco and Ismael Ripoll address in their publication is that Debian exits into a rescue shell in case of failure during initramfs, and that this can be triggered by entering a wrong password ~93 times in a row. Indeed the Debian initramfs implementation as provided by initramfs-tools exits into a rescue shell (usually a busybox shell) after a defined number of failed attempts to make the root filesystem available. The loop in question is in local_device_setup() in the local initramfs script. In general, this behaviour is considered a feature: if the root device hasn't shown up after 30 rounds, the rescue shell is spawned to provide the local user/admin a way to debug and fix things herself. Hector Marco and Ismael Ripoll argue that in special environments, e.g. on public computers with password-protected BIOS/UEFI and bootloader, this opens an attack vector and needs to be regarded as a security vulnerability:
It is common to assume that once the attacker has physical access to the computer, the game is over. The attackers can do whatever they want. And although this was true 30 years ago, today it is not. There are many "levels" of physical access. [...] In order to protect the computer in these scenarios: the BIOS/UEFI has one or two passwords to protect the booting or the configuration menu; the GRUB also has the possibility to use multiple passwords to protect unauthorized operations. And in the case of an encrypted system, the initrd shall block the maximum number of password trials and prevent the access to the computer in that case.
While Hector and Ismael have a valid point in that the rescue shell might open an additional attack vector in special setups, this is not true for the vast majority of Debian systems out there: in most cases a local attacker can alter the boot order, replace or add boot devices, modify boot options in the (GNU GRUB) bootloader menu or modify/replace arbitrary hardware parts. The required scenario to make the initramfs rescue shell an additional attack vector is indeed very special: locked down hardware, password protected BIOS and bootloader but still local keyboard (or serial console) access are required at least. Hector and Ismael argue that the default should be changed for enhanced security:
[...] But then Linux is used in more hostile environments, this helpful (but naive) recovery services shall not be the default option.
For the reasons explained above, I tend to disagree with Hector's and Ismael's opinion here. And after discussing this topic with several people I find my opinion reconfirmed: the Debian Security Team disputes the security impact of the issue and others agree. But leaving the disputable opinion on a sane default aside, I don't think that the cryptsetup package is the right place to change the default, if at all. If you want added security by a locked down initramfs (i.e. no rescue shell spawned), then at least the bootloader (GNU GRUB) needs to be locked down by default as well. To make it clear: if one wants to lock down the boot process, bootloader and initramfs should be locked down together. And the right place to do this would be the configurable behaviour of grub-mkconfig. Here, one can set a password for GRUB and the boot parameter 'panic=1' which disables the spawning of a rescue shell in initramfs. But as mentioned, I don't agree that these would be sane defaults. The vast majority of Debian systems out there don't have any security added by a locked down bootloader and initramfs, and the benefit of a rescue shell for debugging purposes clearly outweighs the minor security impact in my opinion. For the few setups which require the added security of a locked down bootloader and initramfs, we already have the relevant options documented in the Securing Debian Manual: After discussing the topic with initramfs-tools maintainers today, Guilhem and I (the cryptsetup maintainers) finally decided to not change any defaults and just add a 'sleep 60' after the maximum allowed attempts were reached.

2. tries=n option ignored, local brute-force slightly cheaper Apart from the issue of a rescue shell being spawned, Hector and Ismael also discovered a programming bug in the cryptsetup initramfs integration. This bug in the cryptroot initramfs local-top script allowed endless retries of passphrase input, ignoring the tries=n option of crypttab (and the default of 3). As a result, theoretically unlimited attempts to unlock encrypted disks were possible when processed during the initramfs stage. The attack vector here was that local brute-force attacks are a bit cheaper. Instead of having to reboot after max tries were reached, one could go on trying passwords. Even though efficient brute-force attacks are mitigated by the PBKDF2 implementation in cryptsetup, this clearly is a real bug. The reason for the bug was twofold:
  • First, the condition in setup_mapping() responsible for making the function fail when the maximum number of allowed attempts is reached was never met:
    setup_mapping()
    {
      [...]
      # Try to get a satisfactory password $crypttries times
      count=0
      while [ $crypttries -le 0 ] || [ $count -lt $crypttries ]; do
          export CRYPTTAB_TRIED="$count"
          count=$(( $count + 1 ))
          [...]
      done
      if [ $crypttries -gt 0 ] && [ $count -gt $crypttries ]; then
          message "cryptsetup: maximum number of tries exceeded for $crypttarget"
          return 1
      fi
      [...]
    }
    As one can see, the while loop stops as soon as $count reaches $crypttries, so the second condition $count -gt $crypttries is never met. This can easily be fixed by decreasing $count by one in case of a successful unlock attempt, along with changing the second condition to $count -ge $crypttries:
    setup_mapping()
    {
      [...]
      while [ $crypttries -le 0 ] || [ $count -lt $crypttries ]; do
          [...]
          # decrease $count by 1, apparently last try was successful.
          count=$(( $count - 1 ))
          [...]
      done
      if [ $crypttries -gt 0 ] && [ $count -ge $crypttries ]; then
          [...]
      fi
      [...]
    }
    
    Christian Lamparter already spotted this bug back in October 2011 and provided an (incomplete) patch, but back then I even managed to merge the patch in an improper way, making it even more useless: The patch by Christian forgot to decrease $count by one in case of a successful unlock attempt, resulting in warnings about maximum tries exceeded even for successful attempts in some circumstances. But instead of adding the decrease myself and keeping the (almost correct) condition $count -eq $crypttries for detection of exceeded maximum tries, I changed back the condition to the wrong original $count -gt $crypttries that again was never met. Apparently I didn't test the fix properly back then. I definitely should do better in the future!
  • Second, back in December 2013, I added a cryptroot initramfs local-block script as suggested by Goswin von Brederlow in order to fix bug #678692. The purpose of the cryptroot initramfs local-block script is to invoke the cryptroot initramfs local-top script again and again in a loop. This is required to support complex block device stacks. In fact, the numberless options of stacked block devices are one of the biggest and most inglorious reasons that the cryptsetup initramfs integration scripts became so complex over the years. After all we need to support setups like rootfs on top of LVM with two separate encrypted PVs or rootfs on top of LVM on top of dm-crypt on top of MD raid. The problem with the local-block script is that exiting the setup_mapping() function merely triggers a new invocation of the very same function. The guys who discovered the bug suggested a simple and good solution to this bug: When maximum attempts are detected (by second condition from above), the script sleeps for 60 seconds. This mitigates the brute-force attack options for local attackers - even rebooting after max attempts should be faster.

About disclosure, wording and clickbaiting I'm happy that Hector and Ismael brought up the topic and made their argument about the security impacts of an initramfs rescue shell, even though I have to admit that I was rather astonished that they got a CVE assigned. Nevertheless I'm very happy that they informed the Security Teams of Debian and Ubuntu prior to publishing their findings, which put me in the loop in turn. Also Hector and Ismael were open and responsive when it came to discussing their proposed fixes. But unfortunately the way they advertised their finding was not very helpful. They announced a talk about this topic at DeepSec 2016 in Vienna with the headline Abusing LUKS to Hack the System. Honestly, this headline is misleading - if not wrong - in several ways:
  • First, the whole issue is not about LUKS, nor is it about cryptsetup itself. It's about Debian's integration of cryptsetup into the initramfs, which is a completely different story.
  • Second, the term hack the system suggests that an exploit to break into the system is revealed. This is not true. The device encryption is not endangered at all.
  • Third - as shown above - very special prerequisites need to be met in order to make the mere existence of a LUKS-encrypted device the relevant fact to be able to spawn a rescue shell during initramfs.
Unfortunately, the way this issue was published led to even worse articles in the tech news press. Topics like Major security hole found in Cryptsetup script for LUKS disk encryption or Linux Flaw allows Root Shell During Boot-Up for LUKS Disk-Encrypted Systems suggest that a major security vulnerability was revealed and that it compromised the protection that cryptsetup and LUKS offer. If these articles/news did anything at all, then it was causing damage to the cryptsetup project, which is not affected by the whole issue at all. After the cat was out of the bag, Marco and Ismael agreed that the way the news picked up the issue was suboptimal, but I cannot fight the feeling that the over-exaggeration was partly intended and that clickbaiting is taking place here. That's a bit sad.

3 December 2016

Vincent Bernat: Build-time dependency patching for Android

This post shows how to patch an external dependency for an Android project at build-time with Gradle. This leverages the Transform API and Javassist, a Java bytecode manipulation tool.
buildscript {
    dependencies {
        classpath 'com.android.tools.build:gradle:2.2.+'
        classpath 'com.android.tools.build:transform-api:1.5.+'
        classpath 'org.javassist:javassist:3.21.+'
        classpath 'commons-io:commons-io:2.4'
    }
}
Disclaimer: I am not a seasoned Android programmer, so take this with a grain of salt.

Context This section adds some context to the example. Feel free to skip it. Dashkiosk is an application to manage dashboards on many displays. It provides an Android application you can install on one of those cheap Android sticks. Under the hood, the application is an embedded webview backed by the Crosswalk Project web runtime which brings an up-to-date web engine, even for older versions of Android1. Recently, a security vulnerability was spotted in how invalid certificates were handled. When a certificate cannot be verified, the webview defers the decision to the host application by calling the onReceivedSslError() method:
Notify the host application that an SSL error occurred while loading a resource. The host application must call either callback.onReceiveValue(true) or callback.onReceiveValue(false). Note that the decision may be retained for use in response to future SSL errors. The default behavior is to pop up a dialog.
The default behavior is specific to Crosswalk webview: the Android builtin one just cancels the load. Unfortunately, the fix applied by Crosswalk is different and, as a side effect, the onReceivedSslError() method is not invoked anymore2. Dashkiosk comes with an option to ignore TLS errors3. The mentioned security fix breaks this feature. The following example will demonstrate how to patch Crosswalk to recover the previous behavior4.

Simple method replacement Let's replace the shouldDenyRequest() method from the org.xwalk.core.internal.SslUtil class with this version:
// In SslUtil class
public static boolean shouldDenyRequest(int error) {
    return false;
}

Transform registration Gradle Transform API enables the manipulation of compiled class files before they are converted to DEX files. To declare a transform and register it, include the following code in your build.gradle:
import com.android.build.api.transform.Context
import com.android.build.api.transform.QualifiedContent
import com.android.build.api.transform.Transform
import com.android.build.api.transform.TransformException
import com.android.build.api.transform.TransformInput
import com.android.build.api.transform.TransformOutputProvider
import org.gradle.api.logging.Logger
class PatchXWalkTransform extends Transform {
    Logger logger = null;

    public PatchXWalkTransform(Logger logger) {
        this.logger = logger
    }

    @Override
    String getName() {
        return "PatchXWalk"
    }

    @Override
    Set<QualifiedContent.ContentType> getInputTypes() {
        return Collections.singleton(QualifiedContent.DefaultContentType.CLASSES)
    }

    @Override
    Set<QualifiedContent.Scope> getScopes() {
        return Collections.singleton(QualifiedContent.Scope.EXTERNAL_LIBRARIES)
    }

    @Override
    boolean isIncremental() {
        return true
    }

    @Override
    void transform(Context context,
                   Collection<TransformInput> inputs,
                   Collection<TransformInput> referencedInputs,
                   TransformOutputProvider outputProvider,
                   boolean isIncremental) throws IOException, TransformException, InterruptedException {
        // We should do something here
    }
}

// Register the transform
android.registerTransform(new PatchXWalkTransform(logger))
The getInputTypes() method should return the set of types of data consumed by the transform. In our case, we want to transform classes. Another possibility is to transform resources. The getScopes() method should return a set of scopes for the transform. In our case, we are only interested in the external libraries. It's also possible to transform our own classes. The isIncremental() method returns true because we support incremental builds. The transform() method is expected to take all the provided inputs and copy them (with or without modifications) to the location supplied by the output provider. We didn't implement this method yet. As it stands, this causes the removal of all external dependencies from the application.

Noop transform To keep all external dependencies unmodified, we must copy them:
@Override
void transform(Context context,
               Collection<TransformInput> inputs,
               Collection<TransformInput> referencedInputs,
               TransformOutputProvider outputProvider,
               boolean isIncremental) throws IOException, TransformException, InterruptedException {
    inputs.each {
        it.jarInputs.each {
            def jarName = it.name
            def src = it.getFile()
            def dest = outputProvider.getContentLocation(jarName,
                                                         it.contentTypes, it.scopes,
                                                         Format.JAR);
            def status = it.getStatus()
            if (status == Status.REMOVED) { // ❶
                logger.info("Remove ${src}")
                FileUtils.delete(dest)
            } else if (!isIncremental || status != Status.NOTCHANGED) { // ❷
                logger.info("Copy ${src}")
                FileUtils.copyFile(src, dest)
            }
        }
    }
}
We also need two additional imports:
import com.android.build.api.transform.Status
import org.apache.commons.io.FileUtils
Since we are handling external dependencies, we only have to manage JAR files. Therefore, we only iterate on jarInputs and not on directoryInputs. There are two cases when handling an incremental build: either the file has been removed (❶) or it has been modified (❷). In all other cases, we can safely assume the file is already correctly copied.

JAR patching When the external dependency is the Crosswalk JAR file, we also need to modify it. Here is the first part of the code (replacing ❷):
if ("$ src " ==~ ".*/org.xwalk/xwalk_core.*/classes.jar")  
    def pool = new ClassPool()
    pool.insertClassPath("$ src ")
    def ctc = pool.get('org.xwalk.core.internal.SslUtil') //  
    def ctm = ctc.getDeclaredMethod('shouldDenyRequest')
    ctc.removeMethod(ctm) //  
    ctc.addMethod(CtNewMethod.make("""
public static boolean shouldDenyRequest(int error)  
    return false;
 
""", ctc)) //  
    def sslUtilBytecode = ctc.toBytecode() //  
    // Write back the JAR file
    //  
  else  
    logger.info("Copy $ src ")
    FileUtils.copyFile(src, dest)
 
We also need the following additional imports to use Javassist:
import javassist.ClassPath
import javassist.ClassPool
import javassist.CtNewMethod
Once we have located the JAR file we want to modify, we add it to our classpath and retrieve the class we are interested in (❶). We locate the appropriate method and delete it (❷). Then, we add our custom method using the same name (❸). The whole operation is done in memory. We retrieve the bytecode of the modified class in ❹. The remaining step is to rebuild the JAR file:
def input = new JarFile(src)
def output = new JarOutputStream(new FileOutputStream(dest))
// ❶
input.entries().each {
    if (!it.getName().equals("org/xwalk/core/internal/SslUtil.class")) {
        def s = input.getInputStream(it)
        output.putNextEntry(new JarEntry(it.getName()))
        IOUtils.copy(s, output)
        s.close()
    }
}
// ❷
output.putNextEntry(new JarEntry("org/xwalk/core/internal/SslUtil.class"))
output.write(sslUtilBytecode)
output.close()
We need the following additional imports:
import java.util.jar.JarEntry
import java.util.jar.JarFile
import java.util.jar.JarOutputStream
import org.apache.commons.io.IOUtils
There are two steps. In ❶, all classes are copied to the new JAR, except the SslUtil class. In ❷, the modified bytecode for SslUtil is added to the JAR. That's all! You can view the complete example on GitHub.

More complex method replacement In the above example, the new method doesn't use any external dependency. Let's suppose we also want to replace the sslErrorFromNetErrorCode() method from the same class with the following one:
import org.chromium.net.NetError;
import android.net.http.SslCertificate;
import android.net.http.SslError;
// In SslUtil class
public static SslError sslErrorFromNetErrorCode(int error,
                                                SslCertificate cert,
                                                String url) {
    switch(error) {
        case NetError.ERR_CERT_COMMON_NAME_INVALID:
            return new SslError(SslError.SSL_IDMISMATCH, cert, url);
        case NetError.ERR_CERT_DATE_INVALID:
            return new SslError(SslError.SSL_DATE_INVALID, cert, url);
        case NetError.ERR_CERT_AUTHORITY_INVALID:
            return new SslError(SslError.SSL_UNTRUSTED, cert, url);
        default:
            break;
    }
    return new SslError(SslError.SSL_INVALID, cert, url);
}
The major difference with the previous example is that we need to import some additional classes.

Android SDK import The classes from the Android SDK are not part of the external dependencies. They need to be imported separately. The full path of the JAR file is:
androidJar = "$ android.getSdkDirectory().getAbsolutePath() /platforms/" +
             "$ android.getCompileSdkVersion() /android.jar"
We need to load it before adding the new method into SslUtil class:
def pool = new ClassPool()
pool.insertClassPath(androidJar)
pool.insertClassPath("$ src ")
def ctc = pool.get('org.xwalk.core.internal.SslUtil')
def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
ctc.removeMethod(ctm)
pool.importPackage('android.net.http.SslCertificate');
pool.importPackage('android.net.http.SslError');
//  

External dependency import We must also import org.chromium.net.NetError and therefore, we need to put the appropriate JAR in our classpath. The easiest way is to iterate through all the external dependencies and add them to the classpath.
def pool = new ClassPool()
pool.insertClassPath(androidJar)
inputs.each {
    it.jarInputs.each {
        def jarName = it.name
        def src = it.getFile()
        def status = it.getStatus()
        if (status != Status.REMOVED) {
            pool.insertClassPath("${src}")
        }
    }
}
def ctc = pool.get('org.xwalk.core.internal.SslUtil')
def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
ctc.removeMethod(ctm)
pool.importPackage('android.net.http.SslCertificate');
pool.importPackage('android.net.http.SslError');
pool.importPackage('org.chromium.net.NetError');
ctc.addMethod(CtNewMethod.make("…"))
// Then, rebuild the JAR...
Happy hacking!

  1. Before Android 4.4, the webview was severely outdated. Starting from Android 5, the webview is shipped as a separate component with updates. Embedding Crosswalk is still convenient as you know exactly which version you can rely on.
  2. I hope to have this fixed in later versions.
  3. This may seem harmful and you are right. However, if you have an internal CA, it is currently not possible to provide its own trust store to a webview. Moreover, the system trust store is not used either. You also may want to use TLS for authentication only with client certificates, a feature supported by Dashkiosk.
  4. Crosswalk being an open-source project, an alternative would have been to patch the Crosswalk source code and recompile it. However, Crosswalk embeds Chromium and recompiling the whole thing consumes a lot of resources.

20 August 2016

Francois Marier: Remplacer un disque RAID défectueux

Translation of the original English article at https://feeding.cloud.geek.nz/posts/replacing-a-failed-raid-drive/. Here is the procedure I followed to replace a failed RAID drive on a Debian machine.

Replace the drive After noticing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:
smartctl -a /dev/sdb
With this information in hand, I shut down the computer, pulled out the failed drive and put a new blank one in its place.

Initialize the new drive After booting with the new blank drive in, I copied the partition table using parted. First, I examined the partition table on the good drive:
$ parted /dev/sda
unit s
print
and created a new partition table on the replacement drive:
$ parted /dev/sdb
unit s
mktable gpt
Then I used the mkpart command for my 4 partitions and gave them all the same size as the matching partitions on /dev/sda. Finally, I used the commands toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) on all the RAID partitions, before checking with the print command that the two partition tables were now identical.

Resync/recreate the RAID arrays To sync the data from the good drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following commands on my RAID1 partitions:
mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4
and kept an eye on the status of the sync with:
watch -n 2 cat /proc/mdstat
To speed up the process, I used the following trick:
blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max
Then I recreated my RAID0 swap partition as follows:
mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1
Because the swap partition is brand new (it is not possible to restore a RAID0 partition, it has to be recreated from scratch), I had to do two things:
  • replace the UUID for swap in /etc/fstab with the UUID returned by the mkswap command (or by running blkid and taking the UUID for /dev/md1)
  • replace the UUID of /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by mdadm --detail --scan

Making sure the machine can boot with the replacement drive To be certain that the machine can boot from either of the two drives, I reinstalled the grub boot loader on the new drive:
grub-install /dev/sdb
before rebooting with both drives connected. This confirms that my configuration works. Then I booted without the /dev/sda drive to make sure everything would work if that drive decided to die and leave me with only the new one (/dev/sdb). This test obviously breaks the synchronization between the two drives, so I had to reboot with both drives connected and then re-add /dev/sda to all the RAID1 arrays:
mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4
Once that was all done, I rebooted again with both drives to confirm that everything works fine:
cat /proc/mdstat
and then ran a full SMART test on the new drive:
smartctl -t long /dev/sdb


31 July 2016

Hideki Yamane: another apt proxy tool: "go-apt-cacher" and "go-apt-mirror"

Recently I attended the Tokyo Debian meeting at Cybozu, Inc. in Nihonbashi, Tokyo.


People from Cybozu introduced their products named "go-apt-cacher" and "go-apt-mirror".

apt-cacher-ng and apt-mirror have some problems and their products solve them, they said. They have put them into their production environment (with thousands of Ubuntu servers) and they work well, so people who use apt proxy tools may be interested in them (ping Vasudev Kamath :-).

If this sounds interesting to you, please leave a comment via Twitter (@ymmt2005) or at their GitHub repo (or help to package them and get them into the official repo :-).


23 July 2016

Francois Marier: Replacing a failed RAID drive

Here's the complete procedure I followed to replace a failed drive from a RAID array on a Debian machine.

Replace the failed drive After seeing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:
smartctl -a /dev/sdb
Armed with this information, I shut down the computer, pulled the bad drive out and put the new blank one in.

Initialize the new drive After booting with the new blank drive in, I copied the partition table using parted. First, I took a look at what the partition table looks like on the good drive:
$ parted /dev/sda
unit s
print
and created a new empty one on the replacement drive:
$ parted /dev/sdb
unit s
mktable gpt
then I ran mkpart for all 4 partitions and made them all the same size as the matching ones on /dev/sda. Finally, I ran toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) for all RAID partitions, before verifying using print that the two partition tables were now the same.

Resync/recreate the RAID arrays To sync the data from the good drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following on my RAID1 partitions:
mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4
and kept an eye on the status of this sync using:
watch -n 2 cat /proc/mdstat
In order to speed up the sync, I used the following trick:
blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max
Then, I recreated my RAID0 swap partition like this:
mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1
Because the swap partition is brand new (you can't restore a RAID0, you need to re-create it), I had to update two things:
  • replace the UUID for the swap mount in /etc/fstab, with the one returned by mkswap (or running blkid and looking for /dev/md1)
  • replace the UUID for /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by mdadm --detail --scan

Ensuring that I can boot with the replacement drive In order to be able to boot from both drives, I reinstalled the grub boot loader onto the replacement drive:
grub-install /dev/sdb
before rebooting with both drives to first make sure that my new config works. Then I booted without /dev/sda to make sure that everything would be fine should that drive fail and leave me with just the new one (/dev/sdb). This test obviously gets the two drives out of sync, so I rebooted with both drives plugged in and then had to re-add /dev/sda to the RAID1 arrays:
mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4
Once that finished, I rebooted again with both drives plugged in to confirm that everything is fine:
cat /proc/mdstat
Then I ran a full SMART test over the new replacement drive:
smartctl -t long /dev/sdb

20 July 2016

Daniel Pocock: How many mobile phone accounts will be hijacked this summer?

Summer vacations have been getting tougher in recent years. Airlines cut into your precious vacation time with their online check-in procedures and a dozen reminder messages, there is growing concern about airport security and Brexit has already put one large travel firm into liquidation leaving holidaymakers in limbo. If that wasn't all bad enough, now there is a new threat: while you are relaxing in the sun, scammers fool your phone company into issuing a replacement SIM card or transferring your mobile number to a new provider and then proceed to use it to take over all your email, social media, Paypal and bank accounts. The same scam has been appearing around the globe, from Britain to Australia and everywhere in between. Many of these scams were predicted in my earlier blog SMS logins: an illusion of security (April 2014) but they are only starting to get publicity now as more aspects of our lives are at risk, scammers are ramping up their exploits and phone companies are floundering under the onslaught. With the vast majority of Internet users struggling to keep their passwords out of the wrong hands, many organizations have started offering their customers the option of receiving two-factor authentication codes on their mobile phone during login. Rather than making people safer, this has simply given scammers an incentive to seize control of telephones, usually by tricking the phone company to issue a replacement SIM or port the number. It also provides a fresh incentive for criminals to steal phones while cybercriminals have been embedding code into many "free" apps to surreptitiously re-route the text messages and gather other data they need for an identity theft sting. Sadly, telephone networks were never designed for secure transactions. Telecoms experts have made this clear numerous times. Some of the largest scams in the history of financial services exploited phone verification protocols as the weakest link in the chain, including a $150 million heist reminiscent of Ocean's 11. For phone companies, SMS messaging came as a side-effect of digital communications for mobile handsets. It is less than one percent of their business. SMS authentication is less than one percent of that. Phone companies lose little or nothing when SMS messages are hijacked so there is little incentive for them to secure it. Nonetheless, like insects riding on an elephant, numerous companies have popped up with a business model that involves linking websites to the wholesale telephone network and dressing it up as a "security" solution. These companies are able to make eye-watering profits by "purchasing" text messages for $0.01 and selling them for $0.02 (one hundred percent gross profit), but they also have nothing to lose when SIM cards are hijacked and therefore minimal incentive to take any responsibility. Companies like Google, Facebook and Twitter have thrown more fuel on the fire by encouraging and sometimes even demanding users provide mobile phone numbers to "prove they are human" or "protect" their accounts. Through these antics, these high profile companies have given a vast percentage of the population a false sense of confidence in codes delivered by mobile phone, yet the real motivation for these companies does not appear to be security at all: they have worked out that the mobile phone number is the holy grail in cross-referencing vast databases of users and customers from different sources for all sorts of creepy purposes. 
As most of their services don't involve any financial activity, they have little to lose if accounts are compromised and everything to gain by accurately gathering mobile phone numbers from as many users as possible.
Can you escape your mobile phone while on vacation? Just how hard is it to get a replacement SIM card or transfer/port a user's phone number while they are on vacation? Many phone companies will accept instructions through a web form or a phone call. Scammers need little more than a user's full name, home address and date of birth: vast lists of these private details are circulating on the black market, sourced from social media, data breaches (99% of which are never detected or made public), marketing companies and even the web sites that encourage your friends to send you free online birthday cards. Every time a company has asked me to use mobile phone authentication so far, I've opted out and I'll continue to do so. Even if somebody does hijack my phone account while I'm on vacation, the consequences for me are minimal as it will not give them access to any other account or service, can you and your family members say the same thing? What can be done?
  • Opt-out of mobile phone authentication schemes.
  • Never give the mobile phone number to web sites unless there is a real and pressing need for them to call you.
  • Tell firms you don't have a mobile phone or that you share your phone with your family and can't use it for private authentication.
  • If you need to use two-factor authentication, only use technical solutions such as smart cards or security tokens that have been engineered exclusively for computer security. Leave them in a locked drawer or safe while on vacation. Be wary of anybody who insists on SMS and doesn't offer these other options.
  • Rather than seeking to "protect" accounts, simply close some or all social media accounts to reduce your exposure and eliminate the effort of keeping them "secure" and updating "privacy" settings.
  • If your bank provides a relationship manager or other personal contact, this
    can also provide a higher level of security as they get to know you.
Previous blogs on SMS messaging, security and two factor authentication, including my earlier blog SMS Logins: an illusion of security.

22 June 2016

Andrew Cater: Why I must use Free Software - and why I tell others to do so

My work colleagues know me well as a Free/Libre software zealot, constantly pointing out to them how people should behave, how FLOSS software trumps commercial software and how this is the only way forward. This for the last 20 odd years. It's a strain to argue this repeatedly: at various times, I have been asked to set out more clearly why I use FLOSS, what the advantages are, why and how to contribute to FLOSS software.

"We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.
We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.
Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here
...
In our world, whatever the human mind may create can be reproduced and distributed infinitely at no cost. The global conveyance of thought no longer requires your factories to accomplish."
[John Perry Barlow - Declaration of the independence of cyberspace 1996 https://www.eff.org/cyberspace-independence]

That's some of it right there: I was seduced by a modem and the opportunities it gave. I've lived in this world since 1994, come to appreciate it and never really had the occasion to regret it.

I'm involved in the Debian community - which is very much a "do-ocracy" - and I've lived with Debian GNU/Linux since 1995 and not had much cause to regret that either, though I do regret that force of circumstance has meant that I can't contribute as much as I'd like. Pretty much every machine I touch ends up running Debian, one way or the other, or should do if I had my way.
Digging through my emails since then on the various mailing lists - some of them are deeply technical, though fewer these days: some are Debian political: most are trying to help people with problems / report successes or, occasionally thanks and social chit chat. Most people in the project have never met me - though that's not unusual in an organisation with a thousand developers spread worldwide - and so the occasional chance to talk to people in real life is invaluable.

The crucial thing is that there is common purpose and common intelligence - however crazy mailing list flame wars can get sometimes - and committed, caring people. Some of us may be crazy zealots, some picky and argumentative - Debian is what we have in common, pretty much.

It doesn't depend on physical ability. Espy (Joel Klecker) was one of our best and brightest until his death at age 21: almost nobody knew he was dying until after his death. My own physical limitations are pretty much irrelevant provided I can type.

It does depend on collaboration and the strange, dysfunctional family that is our community and the wider FLOSS community in which we share and in which some of us have multiple identities in working with different projects.
This is going to end up too long for Planet Debian - I'll end this post here and then continue with some points on how to contribute and why employers should let their employees work on FLOSS.



