Search Results: "sysv"

14 July 2020

Russell Coker: Debian PPC64EL Emulation

In my post on Debian S390X Emulation [1] I mentioned having problems booting a Debian PPC64EL kernel under QEMU. Giovanni commented that they had PPC64EL working and gave a link to their site with Debian QEMU images for various architectures [2]. I tried their image, which worked, then tried mine again, which also worked; it seems that a recent update in Debian/Unstable fixed the bug that made QEMU not work with the PPC64EL kernel. Here are the instructions on how to do it. First you need to create a filesystem in an image file with commands like the following:
truncate -s 4g /vmstore/ppc
mkfs.ext4 /vmstore/ppc
mount -o loop /vmstore/ppc /mnt/tmp
Then visit the Debian Netinst page [3] to download the PPC64EL net install ISO, and loopback mount it somewhere convenient like /mnt/tmp2. The qemu-system-ppc package has the program for emulating a full PPC64LE system; the qemu-user-static package has the program for emulating PPC64LE for a single program (i.e. a statically linked program or a chroot environment), which you need to run debootstrap. The following commands should be most of what you need.
apt install qemu-system-ppc qemu-user-static
update-binfmts --display
# qemu ppc64 needs exec stack to solve "Could not allocate dynamic translator buffer"
# so enable that on SE Linux systems
setsebool -P allow_execstack 1
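# loopback mount the netinst ISO downloaded from the Debian Netinst page so
# debootstrap below can use it as a package source (the ISO path is illustrative)
mkdir -p /mnt/tmp2
mount -o loop /path/to/debian-ppc64el-netinst.iso /mnt/tmp2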
debootstrap --foreign --arch=ppc64el --no-check-gpg buster /mnt/tmp file:///mnt/tmp2
chroot /mnt/tmp /debootstrap/debootstrap --second-stage
cat << END > /mnt/tmp/etc/apt/sources.list
deb http://mirror.internode.on.net/pub/debian/ buster main
deb http://security.debian.org/ buster/updates main
END
echo "APT::Install-Recommends False;" > /mnt/tmp/etc/apt/apt.conf
echo ppc64 > /mnt/tmp/etc/hostname
# /usr/bin/awk: error while loading shared libraries: cannot restore segment prot after reloc: Permission denied
# only needed for chroot
setsebool allow_execmod 1
chroot /mnt/tmp apt update
# why aren't they in the default install?
chroot /mnt/tmp apt install perl dialog
chroot /mnt/tmp apt dist-upgrade
chroot /mnt/tmp apt install bash-completion locales man-db openssh-server build-essential systemd-sysv ifupdown vim ca-certificates gnupg
# install kernel last because systemd install rebuilds initrd
chroot /mnt/tmp apt install linux-image-ppc64el
chroot /mnt/tmp dpkg-reconfigure locales
chroot /mnt/tmp passwd
cat << END > /mnt/tmp/etc/fstab
/dev/vda / ext4 noatime 0 0
#/dev/vdb none swap defaults 0 0
END
mkdir /mnt/tmp/root/.ssh
chmod 700 /mnt/tmp/root/.ssh
cp ~/.ssh/id_rsa.pub /mnt/tmp/root/.ssh/authorized_keys
chmod 600 /mnt/tmp/root/.ssh/authorized_keys
rm /mnt/tmp/vmlinux* /mnt/tmp/initrd*
mkdir /boot/ppc64
cp /mnt/tmp/boot/[vi]* /boot/ppc64
# clean up
umount /mnt/tmp
umount /mnt/tmp2
# setcap binary for starting bridged networking
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper
# afterwards set the access on /etc/qemu/bridge.conf so it can only
# be read by the user/group permitted to start qemu/kvm
echo "allow all" > /etc/qemu/bridge.conf
Here is an example script for starting kvm. It can be run by any user that can read /etc/qemu/bridge.conf.
#!/bin/bash
set -e
KERN="kernel /boot/ppc64/vmlinux-4.19.0-9-powerpc64le -initrd /boot/ppc64/initrd.img-4.19.0-9-powerpc64le"
# single network device, can have multiple
NET="-device e1000,netdev=net0,mac=02:02:00:00:01:04 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper"
# random number generator for fast start of sshd etc
RNG="-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0"
# I have lockdown because it does no harm now and is good for future kernels
# I enable SE Linux everywhere
KERNCMD="net.ifnames=0 noresume security=selinux root=/dev/vda ro lockdown=confidentiality"
kvm -drive format=raw,file=/vmstore/ppc,if=virtio $RNG -nographic -m 1024 -smp 2 $KERN -curses -append "$KERNCMD" $NET

7 July 2020

Noah Meyerhans: Setting environment variables for gnome-session

Am I missing something obvious? When did this get so hard?

In the old days, you configured your desktop session on a Linux system by editing the .xsession file in your home directory. The display manager (login screen) would invoke the system-wide xsession script, which would either defer to your personal .xsession script or set up a standard desktop environment. You could put whatever you want in the .xsession script, and it would be executed. If you wanted a specific window manager, you'd run it from .xsession. Start emacs or a browser or an xterm or two? .xsession. It was pretty easy, and super flexible.

For the past 25 years or so, I've used X with an environment started via .xsession. Early on it was fvwm with some programs, then I replaced fvwm with Window Maker (before that was even its name!), then switched to KDE. More recently (OK, like 10 years ago) I gradually replaced KDE with awesome and various custom widgets. Pretty much everything was based on a .xsession script, and that was fine. One particularly nice thing about it was that I could keep .xsession and any related helper programs in a git repository and manage changes over time.

More recently I decided to give Wayland and GNOME an honest look. This has mostly been fine, but everything I've been doing in .xsession is suddenly useless. OK, fine, progress is good. I'll just use whatever new mechanisms exist. How hard can it be?

OK, so here we go. I am running GNOME. This isn't so bad. Alt+F2 brings up the Run Command dialog. It's a different keystroke than what I'm used to, but I can adapt. (Obviously I can reconfigure the key binding, and maybe someday I will, but that's not the point here.) I have some executables in ~/bin. Oops, the run command dialog can't find them. No problem, I just need to update the PATH variable that it sees. Hmmm. So how does one do that, anyway? GNOME has a help system, but searching that doesn't reveal anything. But that's fine, maybe it's inherited from the parent process. But there's no xsession script equivalent, since this isn't X anymore at all. The familiar stuff in /etc/X11/Xsession is no longer used. What's the equivalent in Wayland?

Turns out, there isn't a shell script at all anymore, at least not in how Wayland and GNOME interact in Debian's configuration, which seems fairly similar to how anybody else would set this up. The GNOME session runs from a systemd-managed user session. Digging into some web search results suggests that systemd provides a mechanism for setting some environment variables for services started by the user instance of the system. OK, so let's create some files in ~/.config/environment.d and we should be good. Except no, this isn't working. I can set some variables, but something is overriding PATH. I can create this file:
$ cat ~/.config/environment.d/01_path.conf
USER_INITIAL_PATH=${PATH}
PATH=${HOME}/bin:${HOME}/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
USER_CUSTOM_PATH=${PATH}
After logging in, the Run a Command dialog still doesn't see my PATH. So I use Alt+F2 and sh -c "env > /tmp/env" to capture the environment, and this is what I see:
USER_INITIAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PATH=/usr/local/bin:/usr/bin:/bin:/usr/games
USER_CUSTOM_PATH=/home/noahm/bin:/home/noahm/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
So, my environment.d file is there, and it's getting looked at, but something else is clobbering my PATH later in the startup process. But what? Where? Why? The systemd docs don't indicate that there's anything special about PATH, and nothing in /lib/systemd/user-environment-generators/ seems to treat it specially. The string PATH doesn't appear in /lib/systemd/user/ either. Looking for the specific value that's getting assigned to PATH in /etc shows the only occurrence of it being in /etc/zsh/zshenv, so maybe that's where it's coming from? But it should only get set there if it's otherwise unset or otherwise very minimally set. So I still have no idea where it's coming from.

OK, so ignoring where my custom value is getting overridden, maybe what's configured in /lib/systemd/user will point me in the right direction. systemd --user status suggests that the interesting part of my session is coming from gnome-shell-wayland.service. Can we use a standard systemd drop-in as documented in systemd.unit(5)? It turns out that we can. This file sets things up the way I want:
$ cat .config/systemd/user/gnome-shell-wayland.service.d/path.conf
[Service]
Environment=PATH=%h/bin:%h/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
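For the drop-in to take effect, the systemd user manager needs to reload its configuration and the service has to be restarted; in practice, running the standard reload command and then logging out and back in is the simplest way:
$ systemctl --user daemon-reload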
Is that right? It really doesn't feel ideal to me. Systemd's Environment directive can't reference existing environment variables, and I can't use conditionals to do things like add a directory to the PATH only if it exists, so it's still a functional regression from what we had before. But at least it's a text file, edited by hand, trackable in git, so that's not too bad.

There are some people out there who hate systemd, and will cite this as an illustration of why. However, I'm not one of those people, and I very much like systemd as an init system. I'd be happy to throw away sysvinit scripts forever, but I'm not quite so happy with the state of .xsession's replacements. Despite the similarities, I don't think .xsession is entirely the same as SysV-style init scripts. The services running on a system are vastly more important than my personal .xsession, and systemd is far better at managing them than the pile of shell scripts used to set things up under sysvinit. Further, systemd the init system maintains compatibility with init scripts, so if you really want to keep using them, you can. As far as I can tell, though, systemd the user session manager does not seem to maintain compatibility with .xsession scripts, and that's unfortunate.

I still haven't figured out what was overriding the ~/.config/environment.d/ setting. Any ideas?

6 April 2020

Joachim Breitner: A Telegram bot in Haskell on Amazon Lambda

I just had a weekend full of very successful serious geekery. On a whim I thought: Wouldn't it be nice if people could interact with my game Kaleidogen also via a Telegram bot? This led me to learn how to write a Telegram bot in Haskell and how to deploy such a Haskell program to Amazon Lambda. In particular the latter bit might be interesting to some of my readers, so here is how I went about it.

Kaleidogen Kaleidogen is a little contemplative game (or toy) where, starting from just unicolored disks, you combine abstract circular patterns to breed more interesting patterns. See my FARM 2019 talk for more details, or check out the source repository. BTW, I am looking for help turning it into an Android app!
KaleidogenBot in action

Amazon Lambda Amazon Lambda is the Function-as-a-Service offering of Amazon Web Services. The idea is that you don't rent a server, where you have to deal with managing the whole system and that you pay for constantly; instead, you just upload the code that responds to outside requests, and AWS takes care of the rest: starting and stopping instances, providing a secure base system, etc. When nobody is using the service, no cost occurs. This sounds ideal for hosting a toy Telegram bot: most of the time nobody will be using it, and I really don't want to have to babysit yet another service on my server. On Amazon Lambda, I can probably just forget about it. But Haskell is not one of the officially supported languages on Amazon Lambda. So to run Haskell on Lambda, one has to solve two problems:
  • how to invoke the Haskell code on the server, and
  • how to build Haskell so that it runs on the Amazon Linux distribution

A Haskell runtime for Lambda For the first, we need a custom runtime. While this sounds complicated, it is actually a pretty simple concept: a runtime is an executable called bootstrap that queries the Lambda Runtime Interface for the next request to handle. The Lambda documentation is phrased as if this runtime has to be a dispatcher that calls the separate function's handler, but it could just do all the things directly. I found the Haskell package aws-lambda-haskell-runtime, which provides precisely that: a function
runLambda :: (LambdaOptions -> IO (Either String LambdaResult)) -> IO ()
that talks to the Lambda Runtime API and invokes its argument on each message. The package also provides Template Haskell magic to collect handlers of any JSON-able type and generate a dispatcher, like you might expect from other, more dynamic languages. But that was too much magic for me, so I ignored it and just wrote the handler manually:
main :: IO ()
main = runLambda run
  where
   run ::  LambdaOptions -> IO (Either String LambdaResult)
   run opts = do
    result <- handler (decodeObj (eventObject opts)) (decodeObj (contextObject opts))
    either (pure . Left . encodeObj) (pure . Right . LambdaResult . encodeObj) result
data Event = Event
    { path :: T.Text
    , body :: Maybe T.Text
    } deriving (Generic, FromJSON)
data Response = Response
    { statusCode :: Int
    , headers :: Value
    , body :: T.Text
    , isBase64Encoded :: Bool
    } deriving (Generic, ToJSON)
handler :: Event -> Context -> IO (Either String Response)
handler Event{body, path} context =
  …
I expose my Lambda function to the world via Amazon's API Gateway, configured to just proxy the HTTP requests. This means that my code receives a JSON data structure describing the HTTP request (here called Event, listing only the fields I care about), and it will respond with a Response, again as JSON. The handler can then simply pattern-match on the path to decide what to do. For example, this code handles URLs like /img/CAFFEEFACE.png, and responds with an image.
handler :: TC -> Event -> Context -> IO (Either String Response)
handler tc Event{body, path} context
    | Just bytes <- isImgPath path >>= T.decodeHex = do
        let pngData = genPurePNG bytes
        pure $ Right Response
            { statusCode = 200
            , headers = object [ "Content-Type" .= ("image/png" :: String) ]
            , isBase64Encoded = True
            , body = T.decodeUtf8 $ LBS.toStrict $ Base64.encode pngData
            }
    …
isImgPath :: T.Text -> Maybe T.Text
isImgPath = T.stripPrefix "/img/" >=> T.stripSuffix ".png"
If this program were to grow, one should probably use something more structured for routing here; maybe servant, or bridging towards wai apps (almost like wai-lambda, but that still assumes an existing runtime, instead of simply being the runtime). But for my purposes, no extra layers of indirection or abstraction are needed!

Deploying Haskell to Lambda Building Haskell locally and deploying to different machines is notoriously tricky; you often end up depending on a shared library that is not available on the other platform. The aws-lambda-haskell-runtime package, and similar projects like serverless-haskell, solve this using stack and Docker, two technologies that are probably great, but I never warmed up to them. So instead of adding layers and complexity, can I solve this by making things simpler? If I compile my bootstrap into a static Linux binary, it should run on any Linux, including Amazon Linux. Unfortunately, building Haskell programs statically is also notoriously tricky. But it is made much simpler by the work of Niklas Hambüchen and others in the context of the Nix package manager, coordinated in the static-haskell-nix project. The promise here is that once you have set up building your project with Nix, getting a static version is just one flag away. The support is not completely upstreamed into nixpkgs proper yet, but their repository has a nix file that contains a nixpkgs set with their patches:
let pkgs = (import (sources.nixpkgs-static + "/survey/default.nix") { }).pkgs; in
This, plus a fairly standard nix setup to build the package, yields what I was hoping for:
$ nix-build -A kaleidogen
/nix/store/ppwyq4d964ahd6k56wsklh93vzw07ln0-kaleidogen-0.1.0.0
$ file result/bin/kaleidogen-amazon-lambda
result/bin/kaleidogen-amazon-lambda: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, stripped
$ ls -sh result/bin/kaleidogen-amazon-lambda
6,7M result/bin/kaleidogen-amazon-lambda
If we put this file, named bootstrap, into a zip file and upload it to Amazon Lambda, then it just works! Creating the zip file is easily scripted using nix:
  function-zip = pkgs.runCommandNoCC "kaleidogen-lambda" {
    buildInputs = [ pkgs.zip ];
  } ''
    mkdir -p $out
    cp ${kaleidogen}/bin/kaleidogen-amazon-lambda bootstrap
    zip $out/function.zip bootstrap
  '';
So to upload this, I use this one-liner (line-wrapped for your convenience):
nix-build -A function-zip &&
aws lambda update-function-code --function-name kaleidogen \
  --zip-file fileb://result/function.zip
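Once uploaded, a quick smoke test is possible directly from the CLI; the payload below is illustrative, shaped like the API-Gateway-style Event from above:
aws lambda invoke --function-name kaleidogen \
  --payload '{"path": "/", "body": null}' /tmp/out.json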
Thanks to how Nix pins all dependencies, I am fairly confident that I can return to this project in 4 months and still be able to build it. Of course, I want continuous integration and deployment. So I build the project with GitHub Actions, using a cachix nix cache to significantly speed up the build, and auto-deploy to Lambda using aws-lambda-deploy; see my workflow file for details.

The Telegram part The above allows me to run basically any stateless service, and a Telegram bot is nothing else: when configured to act as a WebHook, Telegram will send a request with a message to our Lambda function, where we can react to it. The telegram-api package provides bindings for the Telegram Bot API (although I had to use the repository version, as the version on Hackage has some bitrot). Slightly simplified, I can write a handler for an Update:
handleUpdate :: Update -> TelegramClient ()
handleUpdate Update{ message = Just m } = do
  let c = ChatId (chat_id (chat m))
  liftIO $ printf "message from %s: %s\n" (maybe "?" user_first_name (from m)) (maybe "" T.unpack (text m))
  if "/start"  T.isPrefixOf  fromMaybe "" (text m)
  then do
    rm <- sendMessageM $ sendMessageRequest c "Hi! I am @KaleidogenBot.  "
    return ()
  else do
    m1 <- sendMessageM $ sendMessageRequest c "One moment "
    withPNGFile  $ \pngFN -> do
      m2 <- uploadPhotoM $ uploadPhotoRequest c
        (FileUpload (Just "image/png") (FileUploadFile pngFN))
      return ()
handleUpdate u =
  liftIO $ putStrLn $ "Unhandled message: " ++ show u
and call this from the handler that I wrote above:
     
      | path == "/telegram" =
      case eitherDecode (LBS.fromStrict (T.encodeUtf8 (fromMaybe "" body))) of
        Left err -> …
        Right update -> do
          runTelegramClient token manager $ handleUpdate Nothing update
          pure $ Right Response
              { statusCode = 200
              , headers = object [ "Content-Type" .= ("text/plain" :: String) ]
              , isBase64Encoded = False
              , body = "Done"
              }
      …
Note that the Lambda code receives the request as a JSON data structure with a body that contains the original HTTP request body, which, in this case, is itself JSON, so we have to decode that. All that is left to do is to tell Telegram where this code lives:
curl --request POST \
  --url https://api.telegram.org/bot<token>/setWebhook \
  --header 'content-type: application/json' \
  --data '{ "url": "https://api.kaleidogen.nomeata.de/telegram" }'
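To double-check that Telegram accepted the webhook, the getWebhookInfo method comes in handy:
curl https://api.telegram.org/bot<token>/getWebhookInfo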
As a little add-on, I also created a Telegram game for Kaleidogen. A Telegram game is nothing but a webpage that runs inside Telegram, so it wasn't much work to wrap the Web version of Kaleidogen that way, but the resulting Telegram game (which you can access via https://core.telegram.org/bots/games) still looks pretty neat.

No /dev/dri/renderD128 I am mostly happy with this setup: my game is now available to more people in more ways. I don't have to maintain any infrastructure. When nobody is using this bot, no resources are wasted, and the costs of the service are negligible -- this is unlikely to go beyond the free tier, and even if it were, the cost per generated image is roughly USD 0.000021. There is one slight disappointment, though. What I find most interesting about Kaleidogen from a technical point of view is that when you play it in the browser, the images are not generated by my code. Instead, my code creates a WebGL shader program on the fly, and that program generates the image on your graphics card. I even managed to make the GL rendering code work headlessly, i.e. from a command line program, using EGL and libgbm and a helper written in C. But it needs access to a graphics card via /dev/dri/renderD128. Amazon does not provide that to Lambda code, and neither do the other big Function-as-a-Service providers. So I had to swallow my pride and reimplement the rendering in pure Haskell. So if you think the bot is kinda slow, then that's why. Despite properly optimizing the pure implementation (the inner loop does not do allocations and deals only with unboxed Double# values), the GL shader version is still three times as fast. Maybe in a few years GPU access will be so ubiquitous that it's even on Amazon Lambda; then I can easily use that.

31 October 2017

Chris Lamb: Free software activities in October 2017

Here is my monthly update covering what I have been doing in the free software world in October 2017 (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced, either maliciously or accidentally, during this compilation process, by promising that identical results are always generated from a given source, thus allowing multiple third parties to come to a consensus on whether a build was compromised. I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area. This month I:


I also made the following changes to our tooling:
diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Improve names in output of "internal" binwalk members. (#877525).
  • Don't crash on malformed md5sums files. (#877473).
  • Omit misleading "any of" prefix when only complaining about a single module on import. [...]
  • Adjust tests as ps2ascii now varies its output on timezone. [...]

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Clojure considers a .class file to be stale if it shares the same timestamp as the .clj. We thus adjust the timestamps of the .clj to always be younger. (#877418).
  • Print a message in --verbose mode if no canonical time was specified. [...]

buildinfo.debian.net

buildinfo.debian.net is my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them.

  • Always show SHA-256 checksums, regardless of the browser viewport size. [...]
  • Add an API endpoint to fetch specific .buildinfo files for a certain package/version/architecture. [...]


Debian My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.
Patches contributed
  • devscripts: Please print the actual arguments debuild makes to Lintian. (#880124)
  • hw-detect: Drop reference to floppy disks; it's almost 2018. (#880122)
  • debci:
    • Use deb.debian.org over http.debian.net. (#879654)
    • Document how to use an alternative mirror. (#879655)

Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Followed up on a large number of upstream "pings" that have been left dormant.
  • Issued DLA 1121-1 to fix an out-of-bounds read vulnerability in curl where a malicious FTP server could abuse this to prevent clients from interacting with it.
  • Issued DLA 1123-1 for the "Go" programming language where an attacker could generate a MIME request such that the server ran out of file descriptors.
  • Issued DLA 1126-1 for the libxfont font selection and rasterisation library, correcting two vulnerabilities, both involving the library being tricked into reading invalid/random memory.
  • Issued DLA 1134-1 for sdl-image1.2, an image loading library. A maliciously-crafted .xcf file could cause a stack-based buffer overflow resulting in potential code execution.

Uploads
  • python-django:
    • 2.0~beta1-1 New upstream 2.x release.
    • 1.11.6-1 New upstream bugfix release.
  • gunicorn (19.6.0-10+deb9u1) Prepared a release for stable to avoid a runtime dependency on a compiler. (#877722)
  • redis:
    • 4:4.0.2-3:
      • Drop the Debian-specific /etc/redis/redis-server.pre-up.d (etc.) hooks and remove them if unchanged.
      • Include systemd redis-server@.service and redis-sentinel@.service template files to easily run multiple Redis instances. (#877702)
      • Patch redis.conf and sentinel.conf with quilt instead of maintaining our own versions under debian/.
    • 4:4.0.2-4:
      • Add input validity checking to cluster config slot numbers to fix CVE-2017-15047. (#878076)
      • Drop debian/bin/generate-parts now we aren't calling it.
      • Correct Bash-ism in NEWS file.
    • 4:4.0.2-5: Replace the existing patch for CVE-2017-15047 with an upstream-blessed version that covers another case.
  • redisearch (0.21.3-5) Initial release.
  • docbook2man (2.0.0-40) Correct spelling mistakes in binaries and other misc packaging tidying.
  • python-redis (2.10.6-1) New upstream release.
  • bfs (1.1.3-1) New upstream release.

FTP Team

As a Debian FTP assistant I ACCEPTed 103 packages: amcheck, argagg, binutils, blockui, bro-pkg, chkservice, citus, django-axes, docker-containerd, doctest, dtkwidget, duktape, feed2exec, fontforge, fonttools, gcc-8, gcc-8-cross, generator-scripting-language, gitgraph.js, haskell-uri-encode, hoel, iniparser, its, jquery-areyousure, kodi, libcatmandu-mods-perl, libcatmandu-template-perl, libcatmandu-xml-perl, libcatmandu-xsd-perl, libcode-tidyall-plugin-sortlines-naturally-perl, libgdamm5.0, libinfinity, libmods-record-perl, libreoffice-dictionaries, libset-intervaltree-perl, libsodium, linux, linux-grsec, ltsp-manager, lxqt-themes, mailman3-core, measurement-kit, mini-buildd, musescore, node-babel, node-babel-eslint, node-babel-loader, node-babel-plugin-add-module-exports, node-babel-plugin-transform-define, node-gulp-newer, node-regenerate-unicode-properties, node-regexpu-core, node-regjsparser, node-unicode-data, node-unicode-loose-match, openjdk-9, orafce, pgaudit, pgsql-ogr-fdw, pk4, postgresql-mysql-fdw, powa-archivist, python-azure-devtools, python-colormap, python-darkslide, python-dotenv, python-karborclient, python-logfury, python-lupa, python-marshmallow, python-murano-pkg-check, python-octaviaclient, python-pathspec, python-pgpy, python-pydub, python-randomize, python-sabyenc, python-searchlightclient, python-stestr, python-subunit2sql, python-twitter, python-utils, python-wsgilog, r-cran-bindr, r-cran-desc, r-cran-hms, r-cran-readstata13, r-cran-rprojroot, r-cran-wikidatar, r-cran-wikipedir, r-cran-wikitaxa, repmgr, requests-file, resteasy3.0, sdl-kitchensink, stardicter, systemd-el, thunderbird, tomcat8.0, uwsgi-plugin-luajit, uwsgi-plugin-mongo, uwsgi-plugin-php & uwsgi-plugin-v8. I additionally filed 3 RC bugs against packages that had incomplete debian/copyright files: fonttools, generator-scripting-language & libsodium.

3 October 2017

Dimitri John Ledkov: An interesting bug - network-manager, glibc, dpkg-shlibdeps, systemd, and finally binutils

Not so long ago I went to effectively recompile NetworkManager and fix up a minor bug in it. It built fine across all architectures, was considered to be installable, etc., and I was expecting it to just migrate across. At the time, glibc was at 2.26 in artful-proposed and NetworkManager was built against it. However, the release pocket was at glibc 2.24. In Ubuntu we have a ProposedMigration process in place which ensures that newly built packages do not regress in the number of architectures built for; installable on; and do not regress themselves or any reverse dependencies at runtime.

Thus, before my build of NetworkManager was considered for migration, it was tested in the release pocket against packages in the release pocket. Specifically, since the package metadata only requires glibc 2.17, NetworkManager was tested against the glibc currently in the release pocket, which should just work fine....
autopkgtest [21:47:38]: test nm: [-----------------------
test_auto_ip4 (__main__.ColdplugEthernet)
ethernet: auto-connection, IPv4 ... FAIL ----- NetworkManager.log -----
NetworkManager: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by NetworkManager)
At first I only saw failing tests, which I thought was a transient failure, so they were retried a few times. Then I looked at the autopkgtest log and saw the above error messages. Perplexed, I started an lxd container with ubuntu artful, enabled proposed, and installed just network-manager from artful-proposed; and indeed a simple NetworkManager --help failed with the above error from the linker.

I am too young to know what dependency-hell means, since ever since I have used Linux (Ubuntu 7.04) all glibc symbols were versioned, and dpkg-shlibdeps would generate correct minimum dependencies for a package. Alas, in this case readelf confirmed that /usr/sbin/NetworkManager indeed requires 2.25, while the dpkg dependency is only >= 2.17.

Reading the readelf output further, I checked that all of the glibc symbols used are 2.17 or lower, and only the "Version needs section '.gnu.version_r'" referenced the GLIBC_2.25 symbol version. Inspecting the dpkg-shlibdeps code, I noticed that it does not parse that section and only searches through the dynamic symbols used to establish the minimum required version.
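For the curious, the version needs section in question can be dumped with readelf, which is how the stray GLIBC_2.25 entry shows up; a sketch:
$ readelf -V /usr/sbin/NetworkManager | grep -B2 GLIBC_2.25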

Things started to smell fishy. On one hand, I trust dpkg-shlibdeps to generate the right dependencies. On the other hand I also trust linker to not tell lies either. Hence I opened a Debian BTS bug report about this issue.

At this point, I really wanted to figure out where the reference to 2.25 comes from. Clearly it was not from any private symbols, as then the reference would be on 2.26. Checking the glibc abi lists, I found there were only a handful of symbols marked as 2.25:
$ grep 2.25 ./sysdeps/unix/sysv/linux/x86_64/64/libc.abilist
GLIBC_2.25 GLIBC_2.25 A
GLIBC_2.25 __explicit_bzero_chk F
GLIBC_2.25 explicit_bzero F
GLIBC_2.25 getentropy F
GLIBC_2.25 getrandom F
GLIBC_2.25 strfromd F
GLIBC_2.25 strfromf F
GLIBC_2.25 strfroml F
Blindly grepping for these in the network-manager source tree, I found the following:
$ grep explicit_bzero -r configure.ac src/
configure.ac: explicit_bzero],
src/systemd/src/basic/string-util.h:void explicit_bzero(void *p, size_t l);
src/systemd/src/basic/string-util.c:void explicit_bzero(void *p, size_t l)
src/systemd/src/basic/string-util.c: explicit_bzero(x, strlen(x));
First of all, it seems that network-manager includes a partial embedded copy of systemd. Secondly, that code is compiled into a temporary library and has autoconf detection logic to use explicit_bzero. It also has an embedded implementation of explicit_bzero for when it is not available in libc; however, it does not have a FORTIFY_SOURCES implementation of said function (__explicit_bzero_chk), as was later pointed out to me. And whilst this function is compiled into an intermediary noinst library, no functions that use explicit_bzero end up used by the NetworkManager binary. To prove this, I dropped all code that uses explicit_bzero, rebuilt the package against glibc 2.26, and voila: it only had a Version reference on glibc 2.17, as expected from the end-result usage of shared symbols.

At this point a toolchain bug was the suspect. It seems that whilst the explicit_bzero shared symbol got optimised out, the version reference on 2.25 persisted in the linked binaries. At that point the archive was using a snapshot version of binutils, and in fact forcefully downgrading binutils resulted in correct compilation, with the versions table referencing only glibc 2.17.

Matthias then took over a tarball of object files and filed an upstream bug report against binutils: "[2.29 Regression] ld.bfd keeps a version reference in .gnu.version_r for symbols which are optimized out". The discussion in that bug report is a bit beyond me, as to me binutils is black magic. All I understood there was "we moved sweep and pass to another place due to some bugs"; doing that introduced this bug, thus do multiple sweeps and passes to make sure we fix old bugs and don't regress this either. Or something like that. Comments / better descriptions of the binutils fix are welcome.

Binutils got fixed by upstream developers, cherry-picked into Debian and Ubuntu, network-manager got rebuilt, and everything is wonderful now. However, it does look like unused / dead-end code paths tripped up optimisations in the toolchain, which managed to slip by distribution package dependency generation and needlessly require a higher version of glibc. I guess the lesson here is: do not embed/compile unused code. Also, I'm not sure why network-manager uses networkd internals like this; maybe systemd should expose more APIs or serialise more state into /run, as most other things query things over dbus, a private socket, or by establishing watches on /run/systemd/netif. I'll look into that another day.

Thanks a lot to Guillem Jover, Matthias Klose, Alan Modra, H.J. Lu, and others for getting involved. I would not be able to raise, debug, or fix this issue all by myself.

22 June 2017

John Goerzen: First Experiences with Stretch

I've done my first upgrades to Debian stretch at this point. The results have been overall good. On the laptop my kids use, I helped my 10-year-old do it, and it worked flawlessly. On my workstation, I got a kernel panic on boot. Hmm.

Unfortunately, my system has to use the nv drivers, which leaves me with an 80x25 text console. It took some finagling (break=init in grub, then manually insmoding the appropriate stuff based on modules.dep for nouveau), but finally I got a console so I could see what was breaking. It appeared that init was crashing because it couldn't find liblz4. A little digging shows that liblz4 is in /usr, and /usr wasn't mounted. I've filed the bug on systemd-sysv for this. I run root on ZFS, and further digging revealed that I had the usr dataset outside of the ROOT dataset. This used to be fine. The mountpoint property of the usr dataset put it at /usr without incident. But it turns out that this won't work now, unless I set ZFS_INITRD_ADDITIONAL_DATASETS in /etc/default/zfs for some reason. So I renamed the datasets so usr was under ROOT (a sketch of that step is at the end of this post), and then the system booted.

Then I ran into samba not liking something in my bind interfaces line (to be fair, it did still say eth0 instead of br0). rpcbind was failing in postinst, though a reboot seems to have helped that. More annoying was that I had trouble logging into my system because resolv.conf was left empty (despite dns-* entries in /etc/network/interfaces and the presence of resolvconf). I eventually repaired that, and found that it kept removing my search line. Eventually I removed resolvconf.

Then mariadb's postinst was silently failing. I eventually discovered it was sending info to syslog (odd), and /etc/init.d/apparmor teardown let it complete properly. It seems like there may have been an outdated /etc/apparmor.d/cache/usr.sbin.mysql out there for some reason.

Then there was XFCE. I use it with xmonad, and the session startup was really wonky. I had to zap my sessions, my panel config, etc. and start anew. I am still not entirely sure I have it right, but I at least do have a usable system now.
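As an aside, the ZFS rename step mentioned above was of this general shape (pool and dataset names here are illustrative, not my actual layout, and since /usr is involved this is best done from a rescue environment):
zfs rename rpool/usr rpool/ROOT/debian/usr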

9 June 2017

John Goerzen: Fixing the Problems with Docker Images

I recently wrote about the challenges in securing Docker container contents, and in particular with keeping up-to-date with security patches from all over the Internet. Today I want to fix that. Besides security, there is a second problem: the common way of running things in Docker pretends to provide a traditional POSIX API and environment, but really doesn't. This is a big deal.

Before diving into that, I want to explain something: I have often heard it said that Docker provides single-process containers. This is unambiguously false in almost every case. Any time you have a shell script inside Docker that calls cp or even ls, you are running a second process. Web servers from Apache to whatever else use processes or threads of various types to service multiple connections at once. Many Docker containers are single-application, but a process is a core part of the POSIX API, and very little software would work if it was limited to a single process. So this is my little plea for more precise language. OK, soapbox mode off.

Now then, in a traditional Linux environment, besides your application, there are other key components of the system. These are usually missing in Docker containers. So today, I will fix this also. In my docker-debian-base images, I have prepared a system that still has only 11MB RAM overhead, makes minimal changes on top of Debian, and yet provides a very complete environment and API. Here's what you get: The above goes into my minimal image. Additional images add layers on top of it, and here are some of the features they add: All of the above, including the optional features, has an 11MB overhead on start. Not bad for so much, right?

From here, you can layer on top all your usual Dockery things. You can still run one application per container. But you can now make sure your disk doesn't fill up from logs, run your database vacuuming commands at will, have your blog download its RSS feeds every few minutes, etc., all from within the container, as it should be. Furthermore, you don't have to reinvent the wheel, because Debian already ships with things to take care of a lot of this out of the box, and now those tools will just work. There is some popular work done in this area already by phusion's baseimage-docker. However, I made my own for these reasons:

Finally, a word on the choice to use sysvinit. It would have been simpler to use systemd here, since it is the default in Debian these days. Unfortunately, systemd requires you to poke some holes in the Docker security model, as well as mount a cgroups filesystem from the host. I didn't consider this acceptable, and sysvinit ran without these workarounds, so I went with it (see the sketch at the end of this post). With all this, Docker becomes a viable replacement for KVM for various services on my internal networks. I'll be writing about that later.
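To illustrate the holes mentioned above: running systemd inside a container typically requires an invocation along these lines (these are the commonly cited flags, shown only to make the point, not a recommendation):
docker run -d --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro some-systemd-image
sysvinit needs none of that, which is why it won here.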

3 June 2017

Norbert Preining: TeX Live 2017 released

TeX Live 2017 has been released! CTAN mirrors are busy updating. Get out bottles of good wine, and enjoy a good long download
Besides the huge amount of package updates due to the 2 month testing hiatus, I want to pick out a few changes before copying the complete changelog entry: Additional TEXMF trees: tlmgr got new functionality to easily add and remove additional TEXMF trees to the search path. MiKTeX has had this feature for a long time, and it was often requested. As it turned out, it is a rather trivial thing to achieve via some texmf.cnf lines; the rest is just a front end in tlmgr. Here are a few invocations (not very intelligent usage, though):
$ tlmgr conf auxtrees show
tlmgr: no auxiliary texmf trees defined.
$ tlmgr conf auxtrees add /projects/book-abc
$ tlmgr conf auxtrees
List of auxiliary texmf trees:
  /projects/book-abc
$ tlmgr conf auxtrees remove /projects/book-abc
$ tlmgr conf auxtrees show
tlmgr: no auxiliary texmf trees defined.
TeX Live Manager interactive shell: tlmgr also got an interactive shell now, which can also be used for scripting. Available commands are all the usual command line actions, plus a few more (see the documentation). Again, here is a simple session:
$ tlmgr shell
protocol 1
tlmgr> load local
OK
tlmgr> load remote
tlmgr: package repositories
	main = /home/norbert/public_html/tlnet (verified)
	koma = http://www.komascript.de/repository/texlive/2017 (verified)
OK
tlmgr> update --list
tlmgr: package repositories
	main = /home/norbert/public_html/tlnet (verified)
	koma = http://www.komascript.de/repository/texlive/2017 (verified)
tlmgr: saving backups to /home/norbert/tl/2017/tlpkg/backups
tlmgr: no updates available
OK
tlmgr> byebye
$
User versus System mode for updmap and fmtutil: I reported on this during BachoTeX/TUG 2017 (here are the slides). The two central configuration programs updmap and fmtutil will change their operation mode slightly by disallowing invocations without a mode specification. That is, one either needs to call updmap -sys (or updmap-sys, nothing changed here from previous years) or updmap -user (or updmap-user, new in TeX Live 2017). We hope that this, and the accompanying web page of recommendations, will help users not to shoot themselves in the foot too often by calling updmap without knowing the consequences. The upcoming proceedings of BachoTeX will also contain an article fully documenting both updmap and fmtutil, including the most recent changes.
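In practice the change looks like this (a sketch based on the behaviour described above):
$ updmap            # now refuses to run without a mode
$ updmap -sys       # system-wide, as before
$ updmap -user      # per-user, new in TeX Live 2017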
These were only a few changes where I had my fingers in the development. For the full list, read on below. Now get ready for partying, and also for the end of the peace, because daily package updates will restart in the next days. Enjoy!

Changes since TeX Live 2016: The following list of changes is taken directly from the TeX Live documentation.

26 May 2017

Michael Prokop: The #newinstretch game: dbgsym packages in Debian/stretch

Debug packages include debug symbols and so far were usually named <package>-dbg in Debian. Those packages are essential if you have to debug failing (especially: crashing) programs. Since December 2015 Debian has automatic dbgsym packages, which are built by default. Those packages are available as <package>-dbgsym, so starting with Debian/stretch you should no longer look for -dbg packages but for -dbgsym instead. Currently there are 13,369 dbgsym packages available for the amd64 architecture of Debian/stretch; comparing this to the 2,250 packages which I counted being available for Debian/jessie, this is a huge improvement. (If you're interested in the details of dbgsym packages as a package maintainer, take a look at the Automatic Debug Packages page in the Debian wiki.) The dbgsym packages are NOT provided by the usual Debian archive though (which is a good thing, since those packages are quite disk space consuming: just the amd64 stretch mirror of debian-debug consumes 47GB). Instead there's a new archive called debian-debug. To get access to the dbgsym packages via the debian-debug suite on your Debian/stretch system, include the following entry in your apt sources.list configuration (replace deb.debian.org with whatever mirror you prefer):
deb http://deb.debian.org/debian-debug/ stretch-debug main
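One preparatory note for the demo below, not part of the original sources.list step: if your shell has core dumps disabled, raise the limit first, otherwise no core file will be written:
% ulimit -c unlimited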
If you're not yet familiar with the usage of such debug packages, let me give you a short demo. Let's start with sending SIGILL (Illegal Instruction) to a running sha256sum process, causing it to generate a so-called core dump file:
% sha256sum /dev/urandom &
[1] 1126
% kill -4 1126
% 
[1]+  Illegal instruction     (core dumped) sha256sum /dev/urandom
% ls
core
$ file core
core: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from 'sha256sum /dev/urandom', real uid: 1000, effective uid: 1000, real gid: 1000, effective gid: 1000, execfn: '/usr/bin/sha256sum', platform: 'x86_64'
Now we can run the GNU Debugger (gdb) on this core file, executing:
% gdb sha256sum core
[...]
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...(no debugging symbols found)...done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in ?? ()
(gdb) bt
#0  0x000055fe9aab63db in ?? ()
#1  0x000055fe9aab8606 in ?? ()
#2  0x000055fe9aab4e5b in ?? ()
#3  0x000055fe9aab42ea in ?? ()
#4  0x00007faec30872b1 in __libc_start_main (main=0x55fe9aab3ae0, argc=2, argv=0x7ffc512951f8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffc512951e8) at ../csu/libc-start.c:291
#5  0x000055fe9aab4b5a in ?? ()
(gdb) 
As you can see from the several ?? question marks, the bt command (short for backtrace) doesn't provide useful information.
So let's install the corresponding debug package, which is coreutils-dbgsym in this case (since the sha256sum binary which generated the core file is part of the coreutils package). The install is a normal apt operation, assuming the debian-debug entry from above is in place:
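% apt update
% apt install coreutils-dbgsym
Then let's rerun the same gdb steps: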
% gdb sha256sum core
[...]
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...Reading symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug...done.
done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
526     lib/sha256.c: No such file or directory.
(gdb) bt
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
#1  0x000055fe9aab8606 in sha256_stream (stream=0x55fe9be95060, resblock=0x7ffc51295080) at lib/sha256.c:230
#2  0x000055fe9aab4e5b in digest_file (filename=0x7ffc51295f3a "/dev/urandom", bin_result=0x7ffc51295080 "\001", missing=0x7ffc51295078, binary=<optimized out>) at src/md5sum.c:624
#3  0x000055fe9aab42ea in main (argc=<optimized out>, argv=<optimized out>) at src/md5sum.c:1036
As you can see, it's reading the debug symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug, and this is what we were looking for.
gdb now also tells us that we don't have lib/sha256.c available. For even better debugging it's useful to have the corresponding source code available. This is just an apt-get source coreutils ; cd coreutils-8.26/ away:
~/coreutils-8.26 % gdb sha256sum ~/core
[...]
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...Reading symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug...done.
done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
526           R( h, a, b, c, d, e, f, g, K(25), M(25) );
(gdb) bt
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
#1  0x000055fe9aab8606 in sha256_stream (stream=0x55fe9be95060, resblock=0x7ffc51295080) at lib/sha256.c:230
#2  0x000055fe9aab4e5b in digest_file (filename=0x7ffc51295f3a "/dev/urandom", bin_result=0x7ffc51295080 "\001", missing=0x7ffc51295078, binary=<optimized out>) at src/md5sum.c:624
#3  0x000055fe9aab42ea in main (argc=<optimized out>, argv=<optimized out>) at src/md5sum.c:1036
(gdb) 
Now we're ready for all the debugging magic. :) Thanks to everyone who was involved in getting us the automatic dbgsym package builds in Debian!

19 May 2017

Michael Prokop: Debian stretch: changes in util-linux #newinstretch

We're coming closer to the Debian/stretch stable release and, similar to what we had with #newinwheezy and #newinjessie, it's time for #newinstretch! Hideki Yamane already started the game by blogging about GitHub's icon font, fonts-octicons, and Arturo Borrero González wrote a nice article about nftables in Debian/stretch. One package that isn't new, but whose tools are used by many of us, is util-linux, which provides many essential system utilities. We have util-linux v2.25.2 in Debian/jessie, and in Debian/stretch there will be util-linux >=v2.29.2. There are many new options available, and we also have a few new tools, including tools that have been taken over from other packages. New features/options:
agetty (open a terminal and set its mode):
--reload reload prompts on running agetty instances
blkdiscard (discard the content of sectors on a device):
-p, --step <num>    size of the discard iterations within the offset
-z, --zeroout       zero-fill rather than discard
chrt (show or change the real-time scheduling attributes of a process):
-d, --deadline            set policy to SCHED_DEADLINE
-T, --sched-runtime <ns>  runtime parameter for DEADLINE
-P, --sched-period <ns>   period parameter for DEADLINE
-D, --sched-deadline <ns> deadline parameter for DEADLINE
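For instance, a task can be put under the new deadline policy like this (needs root and a kernel with SCHED_DEADLINE support; the nanosecond values are purely illustrative):
chrt -d --sched-runtime 5000000 --sched-deadline 10000000 --sched-period 16666666 0 yes > /dev/null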
fdformat (do a low-level formatting of a floppy disk):
-f, --from <N>    start at the track N (default 0)
-t, --to <N>      stop at the track N
-r, --repair <N>  try to repair tracks failed during the verification (max N retries)
fdisk (display or manipulate a disk partition table):
-B, --protect-boot            don't erase bootbits when creating a new label
-o, --output <list>           output columns
    --bytes                   print SIZE in bytes rather than in human readable format
-w, --wipe <mode>             wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>  wipe signatures from new partitions (auto, always or never)
New available columns (for -o):
 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags
findmnt (find a (mounted) filesystem):
-J, --json             use JSON output format
-M, --mountpoint <dir> the mountpoint directory
-x, --verify           verify mount table content (default is fstab)
    --verbose          print more details
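For example, the new verify mode gives /etc/fstab a quick sanity check before a reboot:
findmnt --verify --verbose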
flock (manage file locks from shell scripts):
-F, --no-fork            execute command without forking
    --verbose            increase verbosity
getty (open a terminal and set its mode):
--reload               reload prompts on running agetty instances
hwclock (query or set the hardware clock):
--get            read hardware clock and print drift corrected result
--update-drift   update drift factor in /etc/adjtime (requires --set or --systohc)
ldattach (attach a line discipline to a serial line):
-c, --intro-command <string>  intro sent before ldattach
-p, --pause <seconds>         pause between intro and ldattach
logger (enter messages into the system log):
-e, --skip-empty         do not log empty lines when processing files
    --no-act             do everything except the write the log
    --octet-count        use rfc6587 octet counting
-S, --size <size>        maximum size for a single message
    --rfc3164            use the obsolete BSD syslog protocol
    --rfc5424[=<snip>]   use the syslog protocol (the default for remote);
                           <snip> can be notime, or notq, and/or nohost
    --sd-id <id>         rfc5424 structured data ID
    --sd-param <data>    rfc5424 structured data name=value
    --msgid <msgid>      set rfc5424 message id field
    --socket-errors[=<on|off|auto>] print connection errors when using Unix sockets
losetup (set up and control loop devices):
-L, --nooverlap               avoid possible conflict between devices
    --direct-io[=<on|off>]    open backing file with O_DIRECT
-J, --json                    use JSON --list output format
New available --list column:
DIO  access backing file with direct-io
lsblk (list information about block devices):
-J, --json           use JSON output format
New available columns (for --output):
HOTPLUG  removable or hotplug device (usb, pcmcia, ...)
SUBSYSTEMS  de-duplicated chain of subsystems
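A quick sketch combining both additions (JSON output plus the new HOTPLUG column):
lsblk -J -o NAME,SIZE,HOTPLUG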
lscpu (display information about the CPU architecture):
-y, --physical          print physical instead of logical IDs
New available column:
DRAWER  logical drawer number
lslocks (list local system locks):
-J, --json             use JSON output format
-i, --noinaccessible   ignore locks without read permissions
nsenter (run a program with namespaces of other processes):
-C, --cgroup[=<file>]      enter cgroup namespace
    --preserve-credentials do not touch uids or gids
-Z, --follow-context       set SELinux context according to --target PID
rtcwake (enter a system sleep state until a specified wakeup time):
--date <timestamp>   date time of timestamp to wake
--list-modes         list available modes
sfdisk (display or manipulate a disk partition table):
New Commands:
-J, --json <dev>                  dump partition table in JSON format
-F, --list-free [<dev> ...]       list unpartitioned free areas of each device
-r, --reorder <dev>               fix partitions order (by start offset)
    --delete <dev> [<part> ...]   delete all or specified partitions
--part-label <dev> <part> [<str>] print or change partition label
--part-type <dev> <part> [<type>] print or change partition type
--part-uuid <dev> <part> [<uuid>] print or change partition uuid
--part-attrs <dev> <part> [<str>] print or change partition attributes
New Options:
-a, --append                   append partitions to existing partition table
-b, --backup                   backup partition table sectors (see -O)
    --bytes                    print SIZE in bytes rather than in human readable format
    --move-data[=<typescript>] move partition data after relocation (requires -N)
    --color[=<when>]           colorize output (auto, always or never)
                               colors are enabled by default
-N, --partno <num>             specify partition number
-n, --no-act                   do everything except write to device
    --no-tell-kernel           do not tell kernel about changes
-O, --backup-file <path>       override default backup file name
-o, --output <list>            output columns
-w, --wipe <mode>              wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>   wipe signatures from new partitions (auto, always or never)
-X, --label <name>             specify label type (dos, gpt, ...)
-Y, --label-nested <name>      specify nested label type (dos, bsd)
Available columns (for -o):
 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start  End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags
swapon (enable devices and files for paging and swapping):
-o, --options <list>     comma-separated list of swap options
New available columns (for --show):
UUID   swap uuid
LABEL  swap label
unshare (run a program with some namespaces unshared from the parent):
-C, --cgroup[=<file>]                              unshare cgroup namespace
    --propagation slave|shared|private|unchanged   modify mount propagation in mount namespace
-s, --setgroups allow|deny                         control the setgroups syscall in user namespaces
Deprecated / removed options sfdisk (display or manipulate a disk partition table):
-c, --id                  change or print partition Id
    --change-id           change Id
    --print-id            print Id
-C, --cylinders <number>  set the number of cylinders to use
-H, --heads <number>      set the number of heads to use
-S, --sectors <number>    set the number of sectors to use
-G, --show-pt-geometry    deprecated, alias to --show-geometry
-L, --Linux               deprecated, only for backward compatibility
-u, --unit S              deprecated, only sector unit is supported

11 May 2017

Arturo Borrero González: Debunk some Debian myths

Debian CUSL 11

Debian has many years of history, about 25 years already. Over such a long journey through the continuous field of developing our Universal Operating System, some myths, false accusations and bad reputation have arisen. Today I had the opportunity to discuss this topic: I was invited to give a Debian talk in the 11th "Concurso Universitario de Software Libre", a Spanish contest for students to develop and dig a bit into free-libre open source software (and hardware). In this talk, I walked through some of the most common Debian myths, and I would like to summarize some of them here, with a short explanation of why I think they should be debunked.

Picture of the talk

myth #1: Debian is old software. Please, use testing or stable-backports. If you use Debian stable your system will in fact be stable, and that means: updates contain no new software but only fixes.

myth #2: Debian is slow. We compile and build most of our packages with industry-standard compilers and options. I don't see a significant difference in how fast the linux kernel or mysql run on CentOS versus Debian.

myth #3: Debian is difficult. I already discussed this issue back in Jan 2017: Debian is a puzzle: difficult.

myth #4: Debian has no graphical environment. This is, simply put, false. We have gnome, kde, xfce and more. The basic Debian installer asks you what you want at install time.

myth #5: since Debian isn't commercial, the quality is poor. Did you know that most of our package developers are experts in their packages and in their upstream code? Not all, but most of them. Besides, many package developers get paid to do their Debian job. Also, there are external companies which do indeed offer support for Debian (see freexian for example).

myth #6: I don't trust Debian. Why? Did we do something to gain this status? If so, please let us know. You don't trust how we build or configure our packages? You don't trust how we work? Anyway, I'm sorry, you have to trust someone if you want to use any kind of computer; supervising every single bit of your computer isn't practical. Please trust us, we do our best.

myth #7: nobody uses Debian. I don't agree. Many people use Debian. They even run Debian in the International Space Station. Do you count derivatives, such as Ubuntu? I believe this myth is just pointless, but some people out there really think nobody uses Debian.

myth #8: Debian uses systemd. Well, this is true. But you can run sysvinit if you want. I prefer and recommend systemd though :-)

myth #9: Debian is only for servers. No. See myths #1, #2 and #4.

You may download my slides in PDF and in ODP format (only in Spanish, sorry, English readers).

1 April 2017

Antoine Beaupré: My free software activities, February and March 2017

Looking into self-financing Before I begin, I should mention that I started tracking my time working on free software more systematically. I spend a lot of time on the computer, as regular readers of this blog might remember, so I wanted to know exactly how much of that time was paid vs. free work. I was already using org-mode's time clock system to keep track of my work hours, so I just extended this to my regular free software contributions, which also helps in writing those reports. It turns out that over 60% of my computer time is spent working on free software. That's huge! I was expecting something more in the range of 20 to 40% of my time. So I started thinking about ways of financing this work. I created a Patreon page but I'm hesitant to launch such a campaign: the only thing worse than "no patreon page" is "a patreon page with failed goals and no one financing it". So before starting such an effort, I'd like to get a feeling for what other people's experiences with it have been. I know that joeyh is close to achieving his goals, but I can't compare with the guy that invented git-annex or debhelper, so I'm concerned I wouldn't be able to raise the same level of funding. So if you have any advice, feel free to contact me in private or in the comments. If you would be ready to fund my work, I'd love to know about it, obviously, but I guess I wouldn't get real numbers until I actually open up such a page... Now, onto the regular report.

Wallabako I spent a good chunk of time completing most of the things I had in mind for Wallabako, which I mentioned quickly in the previous report. Wallabako is now much easier to install, with clearer instructions, an easier to use configuration file, more reliable synchronization and read-status propagation. As usual, the Wallabako README file has all the details. I've also looked at better integration with Koreader, the free software e-reader that forms the basis of the okreader free software distribution, which has been able to port Debian to the Kobo e-readers, a project I am really excited about. This project has the potential of supporting Kobo readers beyond the lifetime that upstream grants them and removes a lot of proprietary software and spyware that ships with the Kobo readers. So I have made a few contributions to okreader and also to koreader, the ebook reader okreader is based on.

Stressant I rewrote stressant, my simple burn-in and stress-testing tool. After struggling in turn with Debirf, live-build, vmdebootstrap and even FAI, I just figured maybe it wasn't the best idea to try and reinvent that particular wheel: instead of reinventing how to build yet another Debian system build tool, maybe I should just reuse what's already there. It turns out there's a well-known, successful and fairly complete recovery system called Grml. It is a Debian derivative, so all I needed to do was to stop procrastinating and actually write the stressant tool itself, instead of creating a distribution with a bunch of random tools shipped in. This allowed me to focus on which tools were the best to stress test different components. That selection ended up including fio, which can also be used to overwrite disk drives with the proper options (--overwrite and --size=100%; see the sketch at the end of this section), although grml also ships with nwipe for wiping old spinning disks and hdparm to do a secure erase of SSD disks (whatever that's worth). Stressant still needs to be shipped with grml for this transition to be complete. In the meantime, I was able to configure the excellent public Gitlab CI service to provide ISO images with Stressant built in as a stopgap measure. I also need to figure out a way to automate starting stressant from a boot menu to automate deployments on a larger scale, although because I have little need for the feature at this moment, this will likely wait for a sponsor to show up. Still, stressant has useful features like the capability of sending logs by email using a fresh new implementation of the Python SMTPHandler (BufferedSMTPHandler) which waits for logging to complete before sending a single email. Another interesting piece of code in there is the NegateAction argparse handler that enables the use of "toggle flags" (e.g. --flag / --no-flag). I'm so happy with the code that I figure I could just share it here directly:
class NegateAction(argparse.Action):
    '''add a toggle flag to argparse

    this is similar to 'store_true' or 'store_false', but allows
    arguments prefixed with --no to disable the default. the default
    is set depending on the first argument - if it starts with the
    negative form (define by default as '--no'), the default is False,
    otherwise True.
    '''
    negative = '--no'
    def __init__(self, option_strings, *args, **kwargs):
        '''set default depending on the first argument'''
        default = not option_strings[0].startswith(self.negative)
        super(NegateAction, self).__init__(option_strings, *args,
                                           default=default, nargs=0, **kwargs)
    def __call__(self, parser, ns, values, option):
        '''set the truth value depending on whether
        it starts with the negative form'''
        setattr(ns, self.dest, not option.startswith(self.negative))
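For illustration, here is a minimal usage sketch of my own (not from the original post); it assumes argparse has been imported alongside the class above, and the option names are arbitrary:
# register both spellings on one destination; since '--flag' comes first,
# the default is True and '--no-flag' flips it to False
parser = argparse.ArgumentParser()
parser.add_argument('--flag', '--no-flag', action=NegateAction)
print(parser.parse_args(['--no-flag']))  # Namespace(flag=False)
print(parser.parse_args([]))             # Namespace(flag=True)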
Short and sweet. I wonder why stuff like this is not in the standard library yet - maybe just because no one bothered yet? It'd be great to get feedback from more experienced Pythonistas on this one. I hope that my work on Stressant is complete. I get zero funding for this work, and have little use for it myself: I manage only a few machines and such a tool really shines when you regularly put new hardware online, which is (fortunately?) not my case anymore. I'd be happy, of course, to accompany organisations and people that wish to further develop and use such a tool. A short demo of stressant as well as a detailed description of how it works is of course available in its README file.
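As for the fio disk-overwrite aside above, a minimal sketch might look like the following; this is my own illustration, it is destructive, and the device name and exact flag spelling are assumptions to verify against fio(1):
# WARNING: destructive -- /dev/sdX is a placeholder for the disk to wipe
fio --name=wipe --filename=/dev/sdX --rw=write --bs=1M \
    --direct=1 --size=100% --overwrite=1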

Standard third party repositories After looking at improvements for the grml repository instructions, I realized there was no real "best practices" document on how to configure an Apt repository. Sure, there are tools like reprepro and others, but those hardly qualify as policy: they are very flexible and there are lots of ways to create insecure repositories or curl | sh style instructions, which we of course generally want to avoid. While the larger problem of Untrusted Debian packages remains generally unsolved (e.g. when you install any .deb file, it can get root on your system), it seemed to me one critical part of this problem was how to add a random third-party repository to your machine while limiting, as much as possible, what possible attackers could do with such a repository. In other words, to solve the more general problem of insecure .deb files, we also need to solve the distribution problem, otherwise fixing the .deb files themselves will be useless. This led to the creation of standardized repository instructions that define:
  1. how to distribute the repository's public signing key (ie. over HTTPS)
  2. how to name suites and components (e.g. use stable and main unless you have a good reason, and explain yourself)
  3. recommend a healthy dose of apt preferences pinning
  4. how to distribute keys (e.g. with a deriv-archive-keyring package)
I've seen so many third party repositories get this wrong. For example, a lot of repositories recommend this type of command to initialize the OpenPGP trust path:
curl http://example.com/key.asc | apt-key add -
This has the following problems:
  • the key is transferred in plaintext and can easily be manipulated by an active attacker (e.g. a router on your path to the server or a neighbor in a Wifi cafe)
  • the key is added to the main trust root, which allows the key to authenticate as the real Debian archive, therefore giving it all rights over all packages
  • since it's part of the global archive, it's difficult for a package to remove/add the key when a key rollover is necessary (and repositories generally don't provide a deriv-archive-keyring package to do that process anyway)
An example of this is the Docker install instructions, which at least manage to do this over HTTPS. Some other repositories don't even bother teaching people about the proper way of adding those keys. We settled for:
wget -O /usr/share/keyrings/deriv-archive-keyring.gpg https://deriv.example.net/debian/deriv-archive-keyring.gpg
That location was explicitly chosen to be out of the main trust directory, so that it needs to be explicitly added to the sources.list as well:
deb [signed-by=/usr/share/keyrings/deriv-archive-keyring.gpg] https://deriv.example.net/debian/ stable main
Similarly, we highly recommend users set up "apt pinning" to restrict what a given repository can do. Since pinning is so confusing, most people don't actually bother configuring it, and I have yet to see a single repo advise its users to configure those preferences, which are essential to limit what a repository can do. To keep configuration simple, we recommend this:
Package: *
Pin: origin deriv.example.net
Pin-Priority: 100
Obviously, for a single-package repository, the actual package name should be listed, e.g.:
Package: foo
Pin: origin deriv.example.net
Pin-Priority: 100
And the priority should probably be set to 1 unless you want to allow automatic upgrades. It is my hope that this design will get more traction in the years to come and become a de-facto standard that will be a key part in safely adding third party repositories. There is obviously much more work to be done to improve security when installing untrusted .deb files, and I encourage Debian developers to consider contributing to the UntrustedDebs discussions and particularly to the Teams/Dpkg/Spec/DeclarativePackaging work.
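To double-check that such a pin actually applies, something like the following should work (deriv.example.net and foo being the placeholder names from above):
apt update
# the candidate version coming from deriv.example.net should show the
# pinned priority (100, or 1 for the stricter variant)
apt-cache policy foo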

Signal R&D I spent a significant amount of time this month struggling with the Signal project on my phone. I'm still ambivalent on Signal: it's a centralized design, too dependent on phone numbers, but I must admit they get a lot of things right and it's the only free-software platform that allows for easy-to-use, multi-platform videoconferencing that my family can use. I've been following Signal for a while: up until now, I had been using the LibreSignal rebuild of the official client, as it is distributed on an F-Droid repository. Because I try to avoid Google (proprietary) software on my phone, it's basically the only way I could even install Signal. Unfortunately, the repository is out of date and introduces another point of trust in the distribution model: now you not only need to trust the Signal authors to do the right thing, you also need to trust that F-Droid repo not to inject nasty code on your phone. I've therefore started a discussion about how Signal could be distributed outside of the Google Play Store. I'd like to think it's one of the things that led the Signal people to distribute an official copy of Signal outside of the playstore. After much struggling, I was able to upgrade to this official client and will be able to upgrade easily by just downloading the APK. (Do note that I ended up reinstalling and re-registering Signal, which unfortunately changed my secret keys.) I do hope Signal enters F-Droid one day, but it could take a while because it still doesn't work without Google services and barely works with MicroG, the free software alternative to the Google services clients. Moxie also set a list of requirements, like crash reporting and statistics, that need to be implemented on F-Droid's side before he agrees to the deployment, so this could take a while. I've also participated in the, ahem, discussion on the JWZ blog regarding a supposed vulnerability in Signal where it would leak previously unknown phone numbers to third parties. I reviewed the way the phone number is uploaded and, while it's possible to create a rainbow table of phone numbers (which are hashed with a truncated SHA-1 checksum), I couldn't verify the claims of other participants in the thread. For me, Signal still does the right thing with contacts, although I do question the way "read status" notifications get transmitted, but that belongs in another bug report / blog post.

Debian Long Term Support (LTS) It's been more than a year now working on Debian LTS, the project started by Raphael Hertzog at Freexian. I didn't work much in February so I had a lot of hours to catch up on, and was unfortunately unable to do so, partly because I was busy with other projects, and partly because my colleagues are doing a great job at resolving the most important issues. So one of my concerns this month was finding work. It seemed that all the hard packages were either taken (e.g. my usual favorites, tiff and imagemagick, were done by others) or just too challenging (e.g. I don't feel quite comfortable tackling the LTS branch of the Linux kernel yet). I spent quite a bit of time trying to figure out what was wrong with pcre3, only to realise the "32" in the report was not about the architecture, but about the character width. Because of this, I marked 4 CVEs (CVE-2017-7186, CVE-2017-7244, CVE-2017-7245, CVE-2017-7246) as "not-affected", since the 32-bit character support wasn't enabled in wheezy (or jessie, for that matter). I still spent some time trying to reproduce the issues, which require a compiler with AddressSanitizer support, something that was introduced in both Clang and GCC after Wheezy was released, which makes reproducing this fairly complicated... This allowed me to experiment more with Vagrant, however, and I have provided the Debian cloud team with a 32-bit Vagrant box that was merged in shortly after, although it doesn't show up yet in the official list of Debian images. Then I looked at the apparmor situation (CVE-2017-6507, Debian bug #858768). That was one tricky bug as well, since it's not a security issue in apparmor per se, but more an issue with things that assume a certain behavior from apparmor. I have concluded that Wheezy was not affected because there are no assumptions of proper isolation there - which are provided only starting from LXC 1.0 - and Docker is not in Wheezy. I also couldn't reproduce the issue on Jessie, but, as it turns out, the issue was sysvinit-specific, which is why I couldn't reproduce it under the default systemd configuration shipped with Jessie. I also looked at the various binutils security issues: as I reported on the mailing list, I didn't see anything serious enough in there to warrant a security release and followed the lead of both the stable and Red Hat security teams by marking this "no-dsa". I similarly reviewed the mp3splt security issues (specifically CVE-2017-5666) and was fairly puzzled by that issue, which seems to be triggered only by the same address sanitization extensions as PCRE, although there was some pretty wild interplay with debugging flags in there. All in all, it seems we can't reproduce that issue in wheezy, but I do not feel confident enough in the results to push that issue aside for now. I finally uploaded the pending graphicsmagick issue (DLA-547-2), a regression update to fix a crash that was introduced in the previous release (DLA-547-1, mistakenly named DLA-574-1). Hopefully that release should clear up some of the confusion and fix the regression. I also released DLA-879-1 for CVE-2017-6369 in firebird2.5, which was an interesting experiment: I couldn't reproduce the issue in a local VM. After following the Ubuntu setup tutorial, as I wasn't too familiar with the Firebird database until now (hint: the default username and password is sysdba/masterkey), I ended up assuming we were vulnerable and just backporting the patch after seeing the jessie folks push out a release just in case.
I also looked at updating the ca-certificates package to deal with the pending WoSign/StartCom removal: I made an explicit list of the CAs that need to be removed after reviewing the Mozilla list. I also sent a patch for an unrelated issue where ca-certificates is writing to /usr/local (!!) in Debian bug #843722. I have also done some "meta" work in starting a discussion about fixing the missing DLA links in the tracker, as you will notice all of the above links lead to nowhere. Thanks to pabs, there are now some links, but unfortunately there are about 500 DLAs missing from the website. We also discussed ways to address Debian bug #859123, something which is currently a manual process. This is now in the hands of the excellent webmaster team. I have also filed a few missing security bugs (Debian bug #859135, Debian bug #859136), partly because I wanted to help the security team. But it turned out the script needed some improvements, so I submitted a patch to make it easier to run.

Other projects As usual, there's the usual mixed bag of chaos: more stuff on Github...

28 March 2017

Axel Beckert: System Tray Icon to Monitor a Linux Software RAID Locally

About a year ago I bought a new workstation computer for myself at home. It's a Tuxedo XUX_Cube which is advertised as a gaming PC. But I ordered a slightly atypical non-gamer configuration. Of course the box runs Debian. To be more precise, it runs Debian Sid with sysvinit-core as init system and i3 as window manager. As I usually have no monitoring clients on my laptops and private workstations, I rather often felt the urge to do a cat /proc/mdstat on that box. So at some point I wanted something like smart-notifier, but for Linux Software (MD) RAIDs. And since I found nothing, I did what Open Source guys usually do in such cases: I wrote it myself, of course in Perl, and called it systray-mdstat. First I wondered about which build system would be most suitable for that task, but in the end I once again went with Dist::Zilla for the upstream build system and hence dh-dist-zilla for the Debian packaging. Ideas for the actual implementation were taken from Wouter's fdpowermon for the systray icon framework in Perl and Myon's mdstat Xymon plugin for an already proven logic to parse /proc/mdstat. (Both Wouter and Myon have stated in a GnuPG-signed e-mail that I copied less code than would validate their copyrights, so I was able to license it under a single license, namely GNU GPL version 3.) As of now, systray-mdstat is also available as a package in Debian Unstable. It won't make it to Stretch as its first line of code was written after the soft freeze for Stretch was already in place.
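If you want to give it a try, a minimal sketch (assuming, on my part, that the executable carries the package name):
$ sudo apt install systray-mdstat
$ systray-mdstat &    # e.g. from an i3 exec line or your X session startup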

14 February 2017

Arturo Borrero González: About process limits

Graphs The other day I had to deal with an outage in one of our LDAP servers, which is running the old Debian Wheezy (yeah, I know, we should update it). We are running openldap, the slapd daemon. And after searching the log files, the cause of the outage was obvious:
[...]
slapd[7408]: warning: cannot open /etc/hosts.allow: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.deny: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.allow: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.deny: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.allow: Too many open files
slapd[7408]: warning: cannot open /etc/hosts.deny: Too many open files
[...]
[Please read About process limits, round 2 for updated info on this issue] I couldn't believe that openldap is using tcp_wrappers (or libwrap), an ancient piece of software that hasn't been updated for years, replaced in many other ways by more powerful tools (like nftables). I was blinded by this and ran to open a Debian bug against openldap: #854436 (openldap: please don't use tcp-wrappers with slapd). The reply from Steve Langasek was clear:
If people are hitting open file limits trying to open two extra files,
disabling features in the codebase is not the correct solution.
Obviously, the problem was somewhere else. I started investigating system limits, which seem to have two main components: system-wide limits and per-user limits. According to my research, my slapd daemon was being hit by the latter. I reviewed the default system-wide limits and they seemed OK. So, let's change the other limits. Most of the documentation around the internet points you to a /etc/security/limits.conf file, which is then read by pam_limits. You can check current limits using the ulimit bash builtin. In the case of my slapd:
arturo@debian:~% sudo su openldap -s /bin/bash
openldap@debian:~% ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7915
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 2000
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
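The hard limit can be checked the same way (here it would show the 4096 visible in the per-process limits further below):
openldap@debian:~% ulimit -Hn
4096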
This seems to suggest that the openldap user is constrained to 1024 open files (and some more if we check the hard limit). The 1024 limit seems low for a rather busy service. According to most of the internet docs, I'm supposed to put this in /etc/security/limits.conf:
[...]
#<domain>      <type>  <item>         <value>
openldap	soft	nofile		1000000
openldap	hard	nofile		1000000
[...]
I should check as well that pam_limits is loaded, in /etc/pam.d/other:
[...]
session		required	pam_limits.so
[...]
After reloading the openldap session, you can check that, indeed, the limits are changed as reported by ulimit. But at some point, the slapd daemon starts to drop connections again. Things start to get weird here. The changes we made until now don't work, probably because when the slapd daemon is spawned at bootup (by root, sysvinit in this case) no PAM mechanisms are triggered. So, I was forced to learn a new thing: process limits. You can check the limits for a given process this way:
arturo@debian:~% cat /proc/$(pgrep slapd)/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             16000                16000                processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       16000                16000                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
Good, it seems we have some more limits attached to our slapd daemon process. If we search the internet for how to change process limits, most of the docs point to a tool known as prlimit. According to the manpage, this is a tool to get and set process resource limits, which is just what I was looking for. According to the docs, the prlimit system call is supported since Linux 2.6.36, and I'm running 3.2, so no problem here. Things look promising. But yes, more problems: the prlimit tool is not included in the Debian Wheezy release. A simple call to a single system call was not going to stop me now, so I searched the web some more until I found this useful manpage: getrlimit(2). There is sample C code included in the manpage, in which we only need to replace RLIMIT_CPU with RLIMIT_NOFILE:
#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>
#define errExit(msg) do { perror(msg); exit(EXIT_FAILURE); \
                        } while (0)
int
main(int argc, char *argv[])
{
    struct rlimit old, new;
    struct rlimit *newp;
    pid_t pid;
    if (!(argc == 2 || argc == 4)) {
        fprintf(stderr, "Usage: %s <pid> [<new-soft-limit> "
                "<new-hard-limit>]\n", argv[0]);
        exit(EXIT_FAILURE);
    }
    pid = atoi(argv[1]);        /* PID of target process */
    newp = NULL;
    if (argc == 4) {
        new.rlim_cur = atoi(argv[2]);
        new.rlim_max = atoi(argv[3]);
        newp = &new;
    }
    /* Set open file limit of target process; retrieve and display
       previous limit */
    if (prlimit(pid, RLIMIT_NOFILE, newp, &old) == -1)
        errExit("prlimit-1");
    printf("Previous limits: soft=%lld; hard=%lld\n",
            (long long) old.rlim_cur, (long long) old.rlim_max);
    /* Retrieve and display new open file limit */
    if (prlimit(pid, RLIMIT_NOFILE, NULL, &old) == -1)
        errExit("prlimit-2");
    printf("New limits: soft=%lld; hard=%lld\n",
            (long long) old.rlim_cur, (long long) old.rlim_max);
    exit(EXIT_SUCCESS);
}
And then compile it like this:
arturo@debian:~% gcc limits.c -o limits
We can then call this new binary like this:
arturo@debian:~% sudo limits $(pgrep slapd) 1000000 1000000
Finally, the limit seems OK:
arturo@debian:~% cat /proc/$(pgrep slapd)/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             16000                16000                processes
Max open files            1000000              1000000              files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       16000                16000                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
Don't forget to apply this change every time the slapd daemon starts. Did nobody find this issue before? Really?
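One way to make that persistent under sysvinit, a sketch assuming the init script sources /etc/default/slapd as a shell fragment before starting the daemon (as Debian's script does), is to raise the limit there:
# /etc/default/slapd -- sourced by the sysvinit script at start time
# raise the open file limit before slapd is spawned
ulimit -n 1000000
On releases that ship util-linux's prlimit(1), the whole C detour above can be replaced by a one-liner such as prlimit --pid $(pgrep slapd) --nofile=1000000:1000000.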

7 January 2017

Simon Richter: Crossgrading Debian in 2017

So, once again I had a box that had been installed with the kind-of-wrong Debian architecture, in this case, powerpc (32 bit, bigendian), while I wanted ppc64 (64 bit, bigendian). So, crossgrade time. If you want to follow this, be aware that I use sysvinit. I doubt this can be done this way with systemd installed, because systemd has a lot more dependencies for PID 1, and there is also a dbus daemon involved that cannot be upgraded without a reboot. To make this a bit more complicated, ppc64 is an unofficial port, so it is even less synchronized across architectures than sid normally is (I would have used jessie, but there is no jessie for ppc64). Step 1: Be Prepared To work around the archive synchronisation issues, I installed pbuilder and created 32 and 64 bit base.tgz archives:
pbuilder --create --basetgz /var/cache/pbuilder/powerpc.tgz
pbuilder --create --basetgz /var/cache/pbuilder/ppc64.tgz \
    --architecture ppc64 \
    --mirror http://ftp.ports.debian.org/debian-ports \
    --debootstrapopts --keyring=/usr/share/keyrings/debian-ports-archive-keyring.gpg \
    --debootstrapopts --include=debian-ports-archive-keyring
Step 2: Gradually Heat the Water so the Frog Doesn't Notice Then, I added the sources to sources.list, and added the architecture to dpkg:
deb [arch=powerpc] http://ftp.debian.org/debian sid main
deb [arch=ppc64] http://ftp.ports.debian.org/debian-ports sid main
deb-src http://ftp.debian.org/debian sid main
dpkg --add-architecture ppc64
apt update
Step 3: Time to Go Wild
apt install dpkg:ppc64
Obviously, that didn't work, in my case because libattr1 and libacl1 weren't in sync, so there was no valid way to install powerpc and ppc64 versions in parallel. So I used pbuilder to compile the current version from sid for the architecture that wasn't up to date (IIRC, one for powerpc, and one for ppc64). I manually installed the libraries, then tried again:
apt install dpkg:ppc64
Woo, it actually wants to do that. Now, that only half works, because apt calls dpkg twice, once to remove the old version, and once to install the new one. Your options at this point are
apt-get download dpkg:ppc64
dpkg -i dpkg_*_ppc64.deb
or if you didn't think far enough ahead, cursing followed by
cd /tmp
ar x /var/cache/apt/archives/dpkg_*_ppc64.deb
cd /
tar -xJf /tmp/data.tar.xz
dpkg -i /var/cache/apt/archives/dpkg_*_ppc64.deb
Step 4: Automate That Now I'd like to make this a bit more convenient, so I had to repeat the same dance with apt and aptitude and their dependencies. Thanks to pbuilder, this wasn't too bad. With the aptitude resolver, it was then simple to upgrade a test package:
aptitude install coreutils:ppc64 coreutils:powerpc-
The resolver did its thing, and asked whether I really wanted to remove an Essential package. I did, and it replaced the package just fine. So I asked dpkg for a list of all powerpc packages installed (since it's a ppc64 dpkg, it will report powerpc as foreign), massaged that into shape with grep and sed, and gave the result to aptitude as a command line (a sketch of this step appears at the end of this post). Some time later, aptitude finished, and I had a shiny 64 bit system. Crossgrade through an ssh session that remained open all the time, and without a reboot. After closing the ssh session, the last 32 bit binary was deleted as it was no longer in use. There were a few minor hiccups during the process where dpkg refused to overwrite "shared" files with different versions, but these could be solved easily by manually installing the offending package with
dpkg --force-overwrite -i ...
and then resuming what aptitude was doing, using
aptitude install
So, in summary, this still works fairly well.
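For reference, the list massaging mentioned in Step 4 might look roughly like this (a sketch using awk rather than grep and sed; the exact selection logic will differ and the resolver may still ask questions):
# for every installed :powerpc package, request the :ppc64 version
# and the removal of the :powerpc one in a single aptitude run
aptitude install $(dpkg --get-selections \
    | awk -F'[:\t]' '/:powerpc/ { printf "%s:ppc64 %s:powerpc- ", $1, $1 }')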

20 October 2016

Héctor Orón Martínez: Build a Debian package against Debian 8.0 using Download On Demand (DoD) service

In the previous post, the Open Build Service software architecture was overviewed. In the current blog post, a tutorial on setting up a package build with OBS from Debian packages is presented. Steps: Generate a test environment by creating a Stretch/Sid VM. Really, use whatever suits you best, but please create an untrusted test environment for this one. This tutorial assumes $hostname is "stretch", running the stretch or sid suite. Be aware that copying & pasting configuration files from this post might leave you with broken characters. Install from the Debian Stretch weekly netinst CD. Enable the experimental repository:
# echo "deb http://httpredir.debian.org/debian experimental main" >> /etc/apt/sources.list.d/experimental.list
# apt-get update
Install and setup OBS server, api, worker and osc CLI packages
# apt-get install obs-server obs-api obs-worker osc
The install process needs a MySQL database, so if the MySQL server is not set up yet, a password needs to be provided.
When the OBS API database "obs-api" is created, we need to pick a password for it; provide "opensuse". The obs-api package will configure the apache2 HTTPS webserver (creating a dummy certificate for "stretch") to serve the OBS web UI.
Add stretch and obs aliases to localhost entry in your /etc/hosts file.
Enable worker by setting ENABLED=1 in /etc/default/obsworker
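For example, as a one-line sketch (assuming the stock file layout and the obsworker unit referenced in the journalctl step later):
# sed -i 's/^ENABLED=.*/ENABLED=1/' /etc/default/obsworker
# systemctl restart obsworker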
Try to connect to the web UI https://stretch/
Log into the OBS web UI (default login credentials: Admin/opensuse).
From command line tool, try to list projects in OBS
 $ osc -A https://stretch ls
Accept the dummy certificate and provide credentials (defaults: Admin/opensuse)
If the install proceeded as expected, continue to the next step. Ensure all OBS services are running:
# backend services
obsrun     813  0.0  0.9 104960 20448 ?        Ss   08:33   0:03 /usr/bin/perl -w /usr/lib/obs/server/bs_dodup
obsrun     815  0.0  1.5 157512 31940 ?        Ss   08:33   0:07 /usr/bin/perl -w /usr/lib/obs/server/bs_repserver
obsrun    1295  0.0  1.6 157644 32960 ?        S    08:34   0:07  \_ /usr/bin/perl -w /usr/lib/obs/server/bs_repserver
obsrun     816  0.0  1.8 167972 38600 ?        Ss   08:33   0:08 /usr/bin/perl -w /usr/lib/obs/server/bs_srcserver
obsrun    1296  0.0  1.8 168100 38864 ?        S    08:34   0:09  \_ /usr/bin/perl -w /usr/lib/obs/server/bs_srcserver
memcache   817  0.0  0.6 346964 12872 ?        Ssl  08:33   0:11 /usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1
obsrun     818  0.1  0.5  78548 11884 ?        Ss   08:33   0:41 /usr/bin/perl -w /usr/lib/obs/server/bs_dispatch
obsserv+   819  0.0  0.3  77516  7196 ?        Ss   08:33   0:05 /usr/bin/perl -w /usr/lib/obs/server/bs_service
mysql      851  0.0  0.0   4284  1324 ?        Ss   08:33   0:00 /bin/sh /usr/bin/mysqld_safe
mysql     1239  0.2  6.3 1010744 130104 ?      Sl   08:33   1:31  \_ /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --log-error=/var/log/mysql/error.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
# web services
root      1452  0.0  0.1 110020  3968 ?        Ss   08:34   0:01 /usr/sbin/apache2 -k start
root      1454  0.0  0.1 435992  3496 ?        Ssl  08:34   0:00  \_ Passenger watchdog
root      1460  0.3  0.2 651044  5188 ?        Sl   08:34   1:46      \_ Passenger core
nobody    1465  0.0  0.1 444572  3312 ?        Sl   08:34   0:00      \_ Passenger ust-router
www-data  1476  0.0  0.1 855892  2608 ?        Sl   08:34   0:09  \_ /usr/sbin/apache2 -k start
www-data  1477  0.0  0.1 856068  2880 ?        Sl   08:34   0:09  \_ /usr/sbin/apache2 -k start
www-data  1761  0.0  4.9 426868 102040 ?       Sl   08:34   0:29 delayed_job.0
www-data  1767  0.0  4.8 425624 99888 ?        Sl   08:34   0:30 delayed_job.1
www-data  1775  0.0  4.9 426516 101708 ?       Sl   08:34   0:28 delayed_job.2
nobody    1788  0.0  5.7 496092 117480 ?       Sl   08:34   0:03 Passenger RubyApp: /usr/share/obs/api
nobody    1796  0.0  4.9 488888 102176 ?       Sl   08:34   0:00 Passenger RubyApp: /usr/share/obs/api
www-data  1814  0.0  4.5 282576 92376 ?        Sl   08:34   0:22 delayed_job.1000
www-data  1829  0.0  4.4 282684 92228 ?        Sl   08:34   0:22 delayed_job.1010
www-data  1841  0.0  4.5 282932 92536 ?        Sl   08:34   0:22 delayed_job.1020
www-data  1855  0.0  4.9 427988 101492 ?       Sl   08:34   0:29 delayed_job.1030
www-data  1865  0.2  5.0 492500 102964 ?       Sl   08:34   1:09 clockworkd.clock
www-data  1899  0.0  0.0  87100  1400 ?        S    08:34   0:00 /usr/bin/searchd --pidfile --config /usr/share/obs/api/config/production.sphinx.conf
www-data  1900  0.1  0.4 161620  8276 ?        Sl   08:34   0:51  \_ /usr/bin/searchd --pidfile --config /usr/share/obs/api/config/production.sphinx.conf
# OBS worker
root      1604  0.0  0.0  28116  1492 ?        Ss   08:34   0:00 SCREEN -m -d -c /srv/obs/run/worker/boot/screenrc
root      1605  0.0  0.9  75424 18764 pts/0    Ss+  08:34   0:06  \_ /usr/bin/perl -w ./bs_worker --hardstatus --root /srv/obs/worker/root_1 --statedir /srv/obs/run/worker/1 --id stretch:1 --reposerver http://obs:5252 --jobs 1
Create an OBS project for Download on Demand (DoD) Create a meta project file:
$ osc -A https://stretch:443 meta prj Debian:8 -e
<project name="Debian:8">
<title>Debian 8 DoD</title>
<description>Debian 8 DoD</description>
<person userid="Admin" role="maintainer"/>
<repository name="main">
<download arch="x86_64" url="http://deb.debian.org/debian/jessie/main" repotype="deb"/>
<arch>x86_64</arch>
</repository>
</project>
Visit the web UI to check the project configuration. Create a meta project configuration file:
$ osc -A https://stretch:443 meta prjconf Debian:8 -e
Add the following file, as found at build.opensuse.org
Repotype: debian
# create initial user
Preinstall: base-passwd
Preinstall: user-setup
# required for preinstall images
Preinstall: perl
# preinstall essentials + dependencies
Preinstall: base-files base-passwd bash bsdutils coreutils dash debconf
Preinstall: debianutils diffutils dpkg e2fslibs e2fsprogs findutils gawk
Preinstall: gcc-4.9-base grep gzip hostname initscripts insserv libacl1
Preinstall: libattr1 libblkid1 libbz2-1.0 libc-bin libc6 libcomerr2 libdb5.3
Preinstall: libgcc1 liblzma5 libmount1 libncurses5 libpam-modules
Preinstall: libpcre3 libsmartcols1
Preinstall: libpam-modules-bin libpam-runtime libpam0g libreadline6
Preinstall: libselinux1 libsemanage-common libsemanage1 libsepol1 libsigsegv2
Preinstall: libslang2 libss2 libtinfo5 libustr-1.0-1 libuuid1 login lsb-base
Preinstall: mount multiarch-support ncurses-base ncurses-bin passwd perl-base
Preinstall: readline-common sed sensible-utils sysv-rc sysvinit sysvinit-utils
Preinstall: tar tzdata util-linux zlib1g
Runscripts: base-passwd user-setup base-files gawk
VMinstall: libdevmapper1.02.1
Order: user-setup:base-files
# Essential packages (this should also pull the dependencies)
Support: base-files base-passwd bash bsdutils coreutils dash debianutils
Support: diffutils dpkg e2fsprogs findutils grep gzip hostname libc-bin 
Support: login mount ncurses-base ncurses-bin perl-base sed sysvinit 
Support: sysvinit-utils tar util-linux
# Build-essentials
Required: build-essential
Prefer: build-essential:make
# build script needs fakeroot
Support: fakeroot
# lintian support would be nice, but breaks too much atm
#Support: lintian
# helper tools in the chroot
Support: less kmod net-tools procps psmisc strace vim
# everything below same as for Debian:6.0 (apart from the version macros ofc)
# circular dependendencies in openjdk stack
Order: openjdk-6-jre-lib:openjdk-6-jre-headless
Order: openjdk-6-jre-headless:ca-certificates-java
Keep: binutils cpp cracklib file findutils gawk gcc gcc-ada gcc-c++
Keep: gzip libada libstdc++ libunwind
Keep: libunwind-devel libzio make mktemp pam-devel pam-modules
Keep: patch perl rcs timezone
Prefer: cvs libesd0 libfam0 libfam-dev expect
Prefer: gawk locales default-jdk
Prefer: xorg-x11-libs libpng fam mozilla mozilla-nss xorg-x11-Mesa
Prefer: unixODBC libsoup glitz java-1_4_2-sun gnome-panel
Prefer: desktop-data-SuSE gnome2-SuSE mono-nunit gecko-sharp2
Prefer: apache2-prefork openmotif-libs ghostscript-mini gtk-sharp
Prefer: glib-sharp libzypp-zmd-backend mDNSResponder
Prefer: -libgcc-mainline -libstdc++-mainline -gcc-mainline-c++
Prefer: -libgcj-mainline -viewperf -compat -compat-openssl097g
Prefer: -zmd -OpenOffice_org -pam-laus -libgcc-tree-ssa -busybox-links
Prefer: -crossover-office -libgnutls11-dev
# alternative pkg-config implementation
Prefer: -pkgconf
Prefer: -openrc
Prefer: -file-rc
Conflict: ghostscript-library:ghostscript-mini
Ignore: sysvinit:initscripts
Ignore: aaa_base:aaa_skel,suse-release,logrotate,ash,mingetty,distribution-release
Ignore: gettext-devel:libgcj,libstdc++-devel
Ignore: pwdutils:openslp
Ignore: pam-modules:resmgr
Ignore: rpm:suse-build-key,build-key
Ignore: bind-utils:bind-libs
Ignore: alsa:dialog,pciutils
Ignore: portmap:syslogd
Ignore: fontconfig:freetype2
Ignore: fontconfig-devel:freetype2-devel
Ignore: xorg-x11-libs:freetype2
Ignore: xorg-x11:x11-tools,resmgr,xkeyboard-config,xorg-x11-Mesa,libusb,freetype2,libjpeg,libpng
Ignore: apache2:logrotate
Ignore: arts:alsa,audiofile,resmgr,libogg,libvorbis
Ignore: kdelibs3:alsa,arts,pcre,OpenEXR,aspell,cups-libs,mDNSResponder,krb5,libjasper
Ignore: kdelibs3-devel:libvorbis-devel
Ignore: kdebase3:kdebase3-ksysguardd,OpenEXR,dbus-1,dbus-1-qt,hal,powersave,openslp,libusb
Ignore: kdebase3-SuSE:release-notes
Ignore: jack:alsa,libsndfile
Ignore: libxml2-devel:readline-devel
Ignore: gnome-vfs2:gnome-mime-data,desktop-file-utils,cdparanoia,dbus-1,dbus-1-glib,krb5,hal,libsmbclient,fam,file_alteration
Ignore: libgda:file_alteration
Ignore: gnutls:lzo,libopencdk
Ignore: gnutls-devel:lzo-devel,libopencdk-devel
Ignore: pango:cairo,glitz,libpixman,libpng
Ignore: pango-devel:cairo-devel
Ignore: cairo-devel:libpixman-devel
Ignore: libgnomeprint:libgnomecups
Ignore: libgnomeprintui:libgnomecups
Ignore: orbit2:libidl
Ignore: orbit2-devel:libidl,libidl-devel,indent
Ignore: qt3:libmng
Ignore: qt-sql:qt_database_plugin
Ignore: gtk2:libpng,libtiff
Ignore: libgnomecanvas-devel:glib-devel
Ignore: libgnomeui:gnome-icon-theme,shared-mime-info
Ignore: scrollkeeper:docbook_4,sgml-skel
Ignore: gnome-desktop:libgnomesu,startup-notification
Ignore: python-devel:python-tk
Ignore: gnome-pilot:gnome-panel
Ignore: gnome-panel:control-center2
Ignore: gnome-menus:kdebase3
Ignore: gnome-main-menu:rug
Ignore: libbonoboui:gnome-desktop
Ignore: postfix:pcre
Ignore: docbook_4:iso_ent,sgml-skel,xmlcharent
Ignore: control-center2:nautilus,evolution-data-server,gnome-menus,gstreamer-plugins,gstreamer,metacity,mozilla-nspr,mozilla,libxklavier,gnome-desktop,startup-notification
Ignore: docbook-xsl-stylesheets:xmlcharent
Ignore: liby2util-devel:libstdc++-devel,openssl-devel
Ignore: yast2:yast2-ncurses,yast2-theme-SuSELinux,perl-Config-Crontab,yast2-xml,SuSEfirewall2
Ignore: yast2-core:netcat,hwinfo,wireless-tools,sysfsutils
Ignore: yast2-core-devel:libxcrypt-devel,hwinfo-devel,blocxx-devel,sysfsutils,libstdc++-devel
Ignore: yast2-packagemanager-devel:rpm-devel,curl-devel,openssl-devel
Ignore: yast2-devtools:perl-XML-Writer,libxslt,pkgconfig
Ignore: yast2-installation:yast2-update,yast2-mouse,yast2-country,yast2-bootloader,yast2-packager,yast2-network,yast2-online-update,yast2-users,release-notes,autoyast2-installation
Ignore: yast2-bootloader:bootloader-theme
Ignore: yast2-packager:yast2-x11
Ignore: yast2-x11:sax2-libsax-perl
Ignore: openslp-devel:openssl-devel
Ignore: java-1_4_2-sun:xorg-x11-libs
Ignore: java-1_4_2-sun-devel:xorg-x11-libs
Ignore: kernel-um:xorg-x11-libs
Ignore: tetex:xorg-x11-libs,expat,fontconfig,freetype2,libjpeg,libpng,ghostscript-x11,xaw3d,gd,dialog,ed
Ignore: yast2-country:yast2-trans-stats
Ignore: susehelp:susehelp_lang,suse_help_viewer
Ignore: mailx:smtp_daemon
Ignore: cron:smtp_daemon
Ignore: hotplug:syslog
Ignore: pcmcia:syslog
Ignore: avalon-logkit:servlet
Ignore: jython:servlet
Ignore: ispell:ispell_dictionary,ispell_english_dictionary
Ignore: aspell:aspel_dictionary,aspell_dictionary
Ignore: smartlink-softmodem:kernel,kernel-nongpl
Ignore: OpenOffice_org-de:myspell-german-dictionary
Ignore: mediawiki:php-session,php-gettext,php-zlib,php-mysql,mod_php_any
Ignore: squirrelmail:mod_php_any,php-session,php-gettext,php-iconv,php-mbstring,php-openssl
Ignore: simias:mono(log4net)
Ignore: zmd:mono(log4net)
Ignore: horde:mod_php_any,php-gettext,php-mcrypt,php-imap,php-pear-log,php-pear,php-session,php
Ignore: xerces-j2:xml-commons-apis,xml-commons-resolver
Ignore: xdg-menu:desktop-data
Ignore: nessus-libraries:nessus-core
Ignore: evolution:yelp
Ignore: mono-tools:mono(gconf-sharp),mono(glade-sharp),mono(gnome-sharp),mono(gtkhtml-sharp),mono(atk-sharp),mono(gdk-sharp),mono(glib-sharp),mono(gtk-sharp),mono(pango-sharp)
Ignore: gecko-sharp2:mono(glib-sharp),mono(gtk-sharp)
Ignore: vcdimager:libcdio.so.6,libcdio.so.6(CDIO_6),libiso9660.so.4,libiso9660.so.4(ISO9660_4)
Ignore: libcdio:libcddb.so.2
Ignore: gnome-libs:libgnomeui
Ignore: nautilus:gnome-themes
Ignore: gnome-panel:gnome-themes
Ignore: gnome-panel:tomboy
Substitute: utempter
%ifnarch s390 s390x ppc ia64
Substitute: java2-devel-packages java-1_4_2-sun-devel
%else
 %ifnarch s390x
Substitute: java2-devel-packages java-1_4_2-ibm-devel
 %else
Substitute: java2-devel-packages java-1_4_2-ibm-devel xorg-x11-libs-32bit
 %endif
%endif
Substitute: yast2-devel-packages docbook-xsl-stylesheets doxygen libxslt perl-XML-Writer popt-devel sgml-skel update-desktop-files yast2 yast2-devtools yast2-packagemanager-devel yast2-perl-bindings yast2-testsuite
#
# SUSE compat mappings
#
Substitute: gcc-c++ gcc
Substitute: libsigc++2-devel libsigc++-2.0-dev
Substitute: glibc-devel-32bit
Substitute: pkgconfig pkg-config
%ifarch %ix86
Substitute: kernel-binary-packages kernel-default kernel-smp kernel-bigsmp kernel-debug kernel-um kernel-xen kernel-kdump
%endif
%ifarch ia64
Substitute: kernel-binary-packages kernel-default kernel-debug
%endif
%ifarch x86_64
Substitute: kernel-binary-packages kernel-default kernel-smp kernel-xen kernel-kdump
%endif
%ifarch ppc
Substitute: kernel-binary-packages kernel-default kernel-kdump kernel-ppc64 kernel-iseries64
%endif
%ifarch ppc64
Substitute: kernel-binary-packages kernel-ppc64 kernel-iseries64
%endif
%ifarch s390
Substitute: kernel-binary-packages kernel-s390
%endif
%ifarch s390x
Substitute: kernel-binary-packages kernel-default
%endif
%define debian_version 800
Macros:
%debian_version 800
Visit the web UI to check the project configuration. Create an OBS project linked to DoD:
$ osc -A https://stretch:443 meta prj test -e
<project name="test">
<title>test</title>
<description>test</description>
<person userid="Admin" role="maintainer"/>
<repository name="Debian_8.0">
<path project="Debian:8" repository="main"/>
<arch>x86_64</arch>
</repository>
</project>
Visit the web UI to check the project configuration. Adding a package to the project:
$ osc -A https://stretch:443 co test ; cd test
$ mkdir hello ; cd hello ; apt-get source -d hello ; cd - ; 
$ osc add hello 
$ osc ci -m "New import" hello
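Once committed, the build state can also be polled from the command line while the following happens (a sketch using osc's results subcommand, same credentials as before):
$ osc -A https://stretch results test hello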
The package should go to the "dispatched" state, then the "blocked" state while it downloads build dependencies through the DoD link; eventually it should start building. Please check the journal logs if something goes wrong or gets stuck. Visit the web UI to check the hello package build state. OBS logging to the journal: check in the journal logs that everything went fine:
$ sudo journalctl -u obsdispatcher.service -u obsdodup.service -u obsscheduler@x86_64.service -u obsworker.service -u obspublisher.service
Troubleshooting: currently we are facing a few issues with the web UI, and there are more issues that have not been reported; please do reportbug obs-api.

30 September 2016

Chris Lamb: Free software activities in September 2016

Here is my monthly update covering what I have been doing in the free software world (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most Linux distributions provide binary (or "compiled") packages to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising that identical binary packages are always generated from a given source. My work in the Reproducible Builds project was also covered in our weekly reports #71, #72, #73 & #74. I made the following improvements to our tools:

diffoscope

diffoscope is our "diff on steroids" that will not only recursively unpack archives but will transform binary formats into human-readable forms in order to compare them.

  • Added a global Progress object to track the status of the comparison process allowing for graphical and machine-readable status indicators. I also blogged about this feature in more detail.
  • Moved the global Config object to a more Pythonic "singleton" pattern and ensured that constraints are checked on every change.

disorderfs

disorderfs is our FUSE filesystem that deliberately introduces nondeterminism into the results of system calls such as readdir(3).

  • Display the "disordered" behaviour we intend to show on startup. (#837689)
  • Support relative paths in command-line parameters (previously only absolute paths were permitted).

strip-nondeterminism

strip-nondeterminism is our tool to remove specific information from a completed build.

  • Fix an issue where temporary files were being left on the filesystem and add a test to avoid similar issues in future. (#836670)
  • Print an error if the file to normalise does not exist. (#800159)
  • Testsuite improvements:
    • Set the timezone in tests to avoid a FTBFS and add a File::StripNondeterminism::init method to the API to call tzset everywhere. (#837382)
    • "Smoke test" the strip-nondeterminism(1) and dh_strip_nondeterminism(1) scripts to prevent syntax regressions.
    • Add a testcase for .jar file ordering and normalisation.
    • Check the stripping process before comparing file attributes to make it less confusing on failure.
    • Move to a lookup table for descriptions of stat(1) indices and use that for nicer failure messages.
    • Don't uselessly test whether the inode number has changed.
  • Run perlcritic across the codebase and adopt some of its prescriptions including explicitly using oct(..) for integers with leading zeroes, avoiding mixing high and low-precedence booleans, ensuring subroutines end with a return statement, etc.

I also submitted 4 patches to fix specific reproducibility issues in golang-google-grpc, nostalgy, python-xlib & torque.


Debian

Patches contributed

Debian LTS

This month I have been paid to work 12.75 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 608-1 for mailman fixing a CSRF vulnerability.
  • Issued DLA 611-1 for jsch correcting a path traversal vulnerability.
  • Issued DLA 620-1 for libphp-adodb patching a SQL injection vulnerability.
  • Issued DLA 631-1 for unadf correcting a buffer underflow issue.
  • Issued DLA 634-1 for dropbear fixing a buffer overflow when parsing ASN.1 keys.
  • Issued DLA 635-1 for dwarfutils working around an out-of-bounds read issue.
  • Issued DLA 638-1 for the SELinux policycoreutils, patching a sandbox escape issue.
  • Enhanced Brian May's find-work --unassigned switch to take an optional "except this user" argument.
  • Marked matrixssl and inspircd as being unsupported in the current LTS version.

Uploads
  • python-django 1:1.10.1-1 New upstream release and ensure that django-admin startproject foo creates files with the correct shebang under Python 3.
  • gunicorn:
    • 19.6.0-5 Don't call chown(2) if it would be a no-op to avoid failure under snap.
    • 19.6.0-6 Remove now-obsolete conffiles and logrotate scripts; they should have been removed in 19.6.0-3.
  • redis:
    • 3.2.3-2 Call ulimit -n 65536 by default from SysVinit scripts to normalise the behaviour with systemd. I also bumped the Debian package epoch as the "2:" prefix made it look like we are shipping version 2.x. I additionally backported this upload to Debian Jessie.
    • 3.2.4-1 New upstream release, add missing -ldl for dladdr(3) & add missing dependency on lsb-base.
  • python-redis (2.10.5-2) Bump python-hiredis to Suggests to sync with Ubuntu and move to a machine-readable debian/copyright. I also backported this upload to Debian Jessie.
  • adminer (4.2.5-3) Move mysql-server dependencies to default-mysql-server. I also backported this upload to Debian Jessie.
  • gpsmanshp (1.2.3-5) on behalf of the QA team:
    • Move to "minimal" debhelper style, making the build reproducible. (#777446 & #792991)
    • Reorder linker command options to build with --as-needed (#729726) and add hardening flags.
    • Move to machine-readable copyright file, add missing #DEBHELPER# tokens to postinst and prerm scripts, tidy descriptions & other debian/control fields and other smaller changes.

I sponsored the upload of 5 packages from other developers:

I also NMU'd:



FTP Team

As a Debian FTP assistant I ACCEPTed 147 packages: alljoyn-services-1604, android-platform-external-doclava, android-platform-system-tools-aidl, aufs, bcolz, binwalk, bmusb, bruteforce-salted-openssl, cappuccino, captagent, chrome-gnome-shell, ciphersaber, cmark, colorfultabs, cppformat, dnsrecon, dogtag-pki, dxtool, e2guardian, flask-compress, fonts-mononoki, fwknop-gui, gajim-httpupload, glbinding, glewmx, gnome-2048, golang-github-googleapis-proto-client-go, google-android-installers, gsl, haskell-hmatrix-gsl, haskell-relational-query, haskell-relational-schemas, haskell-secret-sharing, hindsight, i8c, ip4r, java-string-similarity, khal, khronos-opencl-headers, liblivemedia, libshell-config-generate-perl, libshell-guess-perl, libstaroffice, libxml2, libzonemaster-perl, linux, linux-grsec-base, linux-signed, lua-sandbox, lua-torch-trepl, mbrola-br2, mbrola-br4, mbrola-de1, mbrola-de2, mbrola-de3, mbrola-ir1, mbrola-lt1, mbrola-lt2, mbrola-mx1, mimeo, mimerender, mongo-tools, mozilla-gnome-keyring, munin, node-grunt-cli, node-js-yaml, nova, open-build-service, openzwave, orafce, osmalchemy, pgespresso, pgextwlist, pgfincore, pgmemcache, pgpool2, pgsql-asn1oid, postbooks-schema, postgis, postgresql-debversion, postgresql-multicorn, postgresql-mysql-fdw, postgresql-unit, powerline-taskwarrior, prefix, pycares, pydl, pynliner, pytango, pytest-cookies, python-adal, python-applicationinsights, python-async-timeout, python-azure, python-azure-storage, python-blosc, python-can, python-canmatrix, python-chartkick, python-confluent-kafka, python-jellyfish, python-k8sclient, python-msrestazure, python-nss, python-pytest-benchmark, python-tenacity, python-tmdbsimple, python-typing, python-unidiff, python-xstatic-angular-schema-form, python-xstatic-tv4, quilt, r-bioc-phyloseq, r-cran-filehash, r-cran-png, r-cran-testit, r-cran-tikzdevice, rainbow-mode, repmgr, restart-emacs, restbed, ruby-azure-sdk, ruby-babel-source, ruby-babel-transpiler, ruby-diaspora-prosody-config, ruby-haikunator, ruby-license-finder, ruby-ms-rest, ruby-ms-rest-azure, ruby-rails-assets-autosize, ruby-rails-assets-blueimp-gallery, ruby-rails-assets-bootstrap, ruby-rails-assets-bootstrap-markdown, ruby-rails-assets-emojione, ruby-sprockets-es6, ruby-timeliness, rustc, skytools3, slony1-2, snmp-mibs-downloader, syslog-ng, test-kitchen, uctodata, usbguard, vagrant-azure, vagrant-mutate & vim.

27 August 2016

Arturo Borrero González: Why conntrackd in Debian is better with systemd


There has been some discussion [0] around my decision to drop sysvinit support in the conntrackd package in Debian [1] (version 1:1.4.4-2).

The rationale I used for such a movement was sent to the debian-devel [2] mailing list, and here it is:


Lots of people have talked to me with both support and disagreement with the change.

Before reading the rest of this blogpost, please note that I'm not interested in the 'systemd vs sysvinit' war, and starting now I will focus mainly on the subject of building a firewall cluster with netfilter technology and the reasons why I think sysvinit here is irrelevant.

I started working with firewall clusters by the time of Debian Squeeze. By then, I used only sysvinit, because it was the Debian default init system and because I did not dive so much in the internals of the firewall cluster itself.

By then, the conntrackd debian package included support for sysvinit by means of two files (the two files that I dropped in 1:1.4.4-2) :


Using both files, you could start/stop conntrackd, and nothing more (for example, a proper status check was not implemented).

The conntrackd daemon is used in HA firewall clusters to replicate connection states between nodes of the cluster, so flow states are known in all nodes and they can properly perform stateful firewalling.

When you build HA clusters, there are 2 basic states of the cluster you may check and adjust: failover and failback.

With conntrackd, the failover situation is straightforward: the other node already has the state information from the dying node.
The failback situation is different: depending on the configuration of both the cluster and conntrackd, the new node may ask the other node for a complete synchronisation when it becomes alive again (at boot time, for example). This is the case if you are building a multi-master firewall cluster (i.e. both nodes are seeing traffic and filtering at the same time).

Asking for a complete synchronisation is done by means of a new instance of conntrackd, which communicates using a UNIX socket to the main conntrackd daemon (the one which is actually communicating with the other node). A failback boot procedure may look like this:

  1. system boots
  2. firewall ruleset loading
  3. networking up
  4. conntrackd up (main sync daemon, started with 'conntrackd -d'; the UNIX socket is opened)
  5. request sync with the other node (using 'conntrackd -n', which uses the UNIX socket opened in step 4)
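
Rendered as a naive boot script, those two steps look like this (a sketch only, using the flags from the list above):

# naive failback script: this is the racy version discussed below
conntrackd -d    # step 4: start the main sync daemon (opens the UNIX socket)
conntrackd -n    # step 5: request a full resync; may run before the socket exists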

In both the sysvinit and systemd cases, step 5 may be implemented as another dummy service which performs the required operation and depends on the main service representing step 4.

The point here is that steps 4 and 5 suffer from a nasty race condition: the conntrackd daemon may open the UNIX socket (step 4) *after* we try to use it (step 5).

As you can probably imagine, this is the worst possible scenario: one of the nodes of the cluster is missing important flow state information, so the stateful firewall will drop packets and all the network monitoring alarms will start to ring... ironically, just after a failback operation, when the whole cluster is supposed to be up and running.

To avoid the race condition, some typical hacks could be implemented: sleeping between the two steps, retrying the sync request in a loop until the socket answers, or having the daemon announce when it is actually ready.
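
For example, the retry hack could look something like this (a sketch; it assumes 'conntrackd -n' exits non-zero while the socket is not yet available):

# keep retrying the resync request until the daemon's socket answers
for i in $(seq 1 30); do
    conntrackd -n && break
    sleep 1
done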


After a bit of research, I found that the last option could be implemented very easily using libsystemd: the daemon informs systemd of its internal state, announcing readiness only once the socket is open [4].

The solution is elegant and simple, and it offers further direct benefits.

So the clear winner for conntrackd is to go with systemd, using a service unit file of Type=notify. Using sysvinit is prone to the race condition described above; no serious firewall cluster with this architecture would use it.
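
As a sketch of what this looks like in practice (unit names and paths here are illustrative, not necessarily what the Debian package ships):

# conntrackd.service -- main sync daemon, Type=notify
[Unit]
Description=conntrack-tools state synchronisation daemon
After=network.target

[Service]
Type=notify
# the daemon itself calls sd_notify(0, "READY=1") via libsystemd [4]
# once its UNIX socket is open, so dependent units start only after that
ExecStart=/usr/sbin/conntrackd

[Install]
WantedBy=multi-user.target

# conntrackd-resync.service -- hypothetical helper unit for step 5
[Unit]
Description=Request a full state resync after failback
Requires=conntrackd.service
After=conntrackd.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/conntrackd -n

With Type=notify, systemd only considers conntrackd started once it has announced readiness, so the resync unit can never race with the socket creation.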

It's obvious to me that systemd is a better technology than sysvinit. The sysvinit approach has been OK for a while, and for me it was fascinating when I started developing init scripts and learning how things worked.
But now we have systemd. It's the default in Debian. It's far better.

Conclusions: yes, I will probably reintroduce sysvinit files just to avoid any flamewar.


[0] https://lists.debian.org/debian-devel/2016/08/msg00448.html
[1] https://tracker.debian.org/pkg/conntrack-tools
[2] https://lists.debian.org/debian-devel/2016/08/msg00456.html
[3] https://sources.debian.net/src/systemd/231-4/debian/systemd.NEWS/
[4] http://git.netfilter.org/conntrack-tools/tree/src/systemd.c

15 June 2016

Andrew Shadura: Migrate to systemd without a reboot

Yesterday I was fixing an issue with one of the servers behind kallithea-scm.org: the hook intended to propagate pushes from Our Own Kallithea to Bitbucket had stopped working. Until yesterday, that server was using Debian's flavour of System V init and djb's daemontools to keep things running. To make the hook asynchronous, I wrote a service to be managed by daemontools, so that concurrency issues would be handled by it. However, I didn't implement any timeouts, so when wget froze last week while pulling Weblate's hook, there was nothing to interrupt it, and the hook stopped working, since daemontools thought it was already running and wouldn't re-trigger it. Killing wget helped, but I decided I needed to do something to prevent the situation from happening again.

I've been using systemd at work for the last year, and I am now confident I'm happier with systemd than with daemontools, so I decided to switch the server to systemd. Not surprisingly, I prepared the unit files in about 5 minutes without having to look into the manuals again, while with daemontools I had to check things every time I needed to change something.

The tricky thing was the switch itself. It is a virtual server, presumably running in Xen, and I don't have access to the console, so if I break something, I need to summon Bradley Kuhn or someone else from Conservancy, which has kindly donated the server to the project. In any case, I decided to attempt the upgrade without a reboot, so that I would have more options to roll back my changes in case things went wrong.

After studying the manpages of both systemd's init and sysvinit's init, I realised I could install systemd as /sbin/init and ask the already-running System V init to re-exec. However, systemd's init can't talk to System V init, so before installing systemd I made a backup of the sysvinit binary. It's also important to stop all running services (except probably ssh) to make sure systemd doesn't start second instances of each. And then: /tmp/init u, and we're running systemd! A couple of additional checks, and it's safe to reboot.

Only when I had done all that did I realise that, had systemd not worked, I would probably not have been able to undo my changes if my connection had been interrupted. So, even though it worked in the end, it's probably not a good idea to perform such manipulations when you don't have an alternative way to connect to the server :)
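
Condensed into commands, the procedure was roughly the following (a sketch of the steps described above, not a recipe; the daemontools service path is illustrative):

cp /sbin/init /tmp/init       # keep a copy of the sysvinit binary
svc -d /service/*             # daemontools: bring managed services down first
apt-get install systemd-sysv  # installs systemd and makes it /sbin/init
/tmp/init u                   # tell PID 1 to re-exec /sbin/init (now systemd)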

1 May 2016

Ben Hutchings: 10 years as a Debian Developer

On 1st May 2006 my Debian account was created and I gained the status of Debian Developer. At that time I had already been to several BSPs and one DebConf, and maintained a few applications and Perl library packages. We were working toward the etch release and would soon hold DebConf 6 in Mexico.

Ten years later, I still maintain one of those packages (sgt-puzzles), but the rest were either handed over to the Perl team or removed entirely. I wrote, maintained, and then gave away dvswitch, all within this period. I have packaged some other applications that I needed to use - kup, ministat, odhcp6c - and I continue to maintain them. I have also made many NMUs, including security uploads, for all kinds of packages including bind9, e2fsprogs, (e)glibc, lvm2, sudo, sysvinit and udev. However, for about the past 7 years most of my work in Debian has been done within the kernel team, working on the Linux kernel and closely related packages such as crda, ethtool, firmware-nonfree and initramfs-tools. I have also become an upstream developer for several of these projects.

I'm proud to have played a part in the etch, lenny, squeeze, wheezy and jessie releases, and I have enjoyed attending 7 more DebConfs and many mini-DebConfs. I'm now looking forward to another great release (stretch) and to attending DebConf 16 in Cape Town this winter. I hope to still be active in Debian in 2026, looking back on another 10 years in this amazing project.
