Search Results: "federico"

27 April 2022

Antoine Beaupré: building Debian packages under qemu with sbuild

I've been using sbuild for a while to build my Debian packages, mainly because it's what is used by the Debian autobuilders, but also because it's pretty powerful and efficient. Configuring it just right, however, can be a challenge. In my quick Debian development guide, I had a few pointers on how to configure sbuild with the normal schroot setup, but today I finished a qemu based configuration.

Why I want to use qemu: mainly because it provides better isolation than a chroot. I sponsor packages sometimes, and while I typically audit the source code before building, it still feels like the extra protection shouldn't hurt. I also like the idea of unifying my existing virtual machine setup with my build setup. My current VM setup is kind of all over the place: libvirt, vagrant, GNOME Boxes, etc. I've been slowly converging on libvirt, however, and most solutions I use right now rely on qemu under the hood, certainly not chroots... I could also have decided to go with containers like LXC, LXD, Docker (with conbuilder, whalebuilder, docker-buildpackage), systemd-nspawn (with debspawn), unshare (with schroot --chroot-mode=unshare), or whatever: I just didn't feel those offer the level of isolation that qemu provides. The main downside of this approach is that it is (obviously) slower than native builds. But on modern hardware, that cost should be minimal.

How Basically, you need this:
sudo mkdir -p /srv/sbuild/qemu/
sudo apt install sbuild-qemu
sudo sbuild-qemu-create -o /srv/sbuild/qemu/unstable.img unstable https://deb.debian.org/debian
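Note that the configuration below refers to the image as /srv/sbuild/qemu/%r-%a.img (release and architecture), and the commands further down use unstable-amd64.img, so it may be simpler to name the image that way from the start; for example (same command, just a different output name):
sudo sbuild-qemu-create -o /srv/sbuild/qemu/unstable-amd64.img unstable https://deb.debian.org/debian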
Then, to make this the default, add this to ~/.sbuildrc:
# run autopkgtest inside the schroot
$run_autopkgtest = 1;
# tell sbuild to use autopkgtest as a chroot
$chroot_mode = 'autopkgtest';
# tell autopkgtest to use qemu
$autopkgtest_virt_server = 'qemu';
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ '--', '/srv/sbuild/qemu/%r-%a.img' ];
# tell plain autopkgtest to use qemu, and the right image
$autopkgtest_opts = [ '--', 'qemu', '/srv/sbuild/qemu/%r-%a.img' ];
# no need to cleanup the chroot after build, we run in a completely clean VM
$purge_build_deps = 'never';
# no need for sudo
$autopkgtest_root_args = '';
Note that the above will use the default autopkgtest (1GB, one core) and qemu (128MB, one core) configuration, which might be a little low on resources. You probably want to be explicit about it, with something like this:
# extra parameters to pass to qemu
# --enable-kvm is not necessary, detected on the fly by autopkgtest
my @_qemu_options = ('--ram-size=4096', '--cpus=2');
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ @_qemu_options, '--', '/srv/sbuild/qemu/%r-%a.img' ];
$autopkgtest_opts = [ '--', 'qemu', @_qemu_options, '/srv/sbuild/qemu/%r-%a.img' ];
This configuration will:
  1. create a virtual machine image in /srv/sbuild/qemu for unstable
  2. tell sbuild to use that image to create a temporary VM to build the packages
  3. tell sbuild to run autopkgtest (which should really be the default)
  4. tell autopkgtest to use qemu for builds and for tests
Note that the VM created by sbuild-qemu-create has an unlocked root account with an empty password.
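With the image created and the above in ~/.sbuildrc, a regular sbuild invocation should transparently build (and test) inside the VM. As a quick sanity check, something like this should work (hello is just a stand-in source package; any .dsc or unpacked source tree will do):
apt-get source hello    # needs deb-src entries in your sources.list
cd hello-*/
sbuild -d unstable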

Other useful tasks
  • enter the VM to run tests; changes will be discarded (thanks Nick Brown for the sbuild-qemu-boot tip!):
     sbuild-qemu-boot /srv/sbuild/qemu/unstable-amd64.img
    
    That program is shipped only with bookworm and later; an equivalent command is:
     qemu-system-x86_64 -snapshot -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
    
    The key argument here is -snapshot.
  • enter the VM to make permanent changes, which will not be discarded:
     sudo sbuild-qemu-boot --readwrite /srv/sbuild/qemu/unstable-amd64.img
    
    Equivalent command:
     sudo qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
    
  • update the VM (thanks lavamind):
     sudo sbuild-qemu-update /srv/sbuild/qemu/unstable-amd64.img
    
  • build in a specific VM regardless of the suite specified in the changelog (e.g. UNRELEASED, bookworm-backports, bookworm-security, etc):
     sbuild --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
    
    Note that you'd also need to pass --autopkgtest-opts if you want autopkgtest to run in the correct VM as well:
     sbuild --autopkgtest-opts="-- qemu /var/lib/sbuild/qemu/unstable.img" --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
    
    You might also need parameters like --ram-size if you customized it above.
And yes, this is all quite complicated and could be streamlined a little, but that's what you get when you have years of legacy and just want to get stuff done. It seems to me autopkgtest-virt-qemu should have a magic flag that starts a shell for you, but it doesn't look like that's a thing. When that program starts, it just says ok and sits there. Maybe that's because the authors consider the above to be simple enough (see also bug #911977 for a discussion of this problem).

Live access to a running test When autopkgtest starts a VM, it uses this funky qemu commandline:
qemu-system-x86_64 -m 4096 -smp 2 -nographic -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:/tmp/autopkgtest-qemu.w1mlh54b/monitor,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS0,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=/tmp/autopkgtest-qemu.w1mlh54b/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=/tmp/autopkgtest-qemu.w1mlh54b/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm
... which is a typical qemu commandline, I'm sorry to say. That gives us a VM with those settings (paths are relative to a temporary directory, /tmp/autopkgtest-qemu.w1mlh54b/ in the above example):
  • the shared/ directory is, well, shared with the VM
  • port 10022 is forwarded to the VM's port 22, presumably for SSH, but no SSH server is started by default
  • the ttyS0 and ttyS1 UNIX sockets are mapped to the first two serial ports (use nc -U to talk with those)
  • the monitor UNIX socket is a qemu control socket (see the QEMU monitor documentation, also nc -U)
In other words, it's possible to access the VM with:
nc -U /tmp/autopkgtest-qemu.w1mlh54b/ttyS1
The nc socket interface is ... not great, but it works well enough. And you can probably fire up an SSHd to get a better shell if you feel like it.
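If you do want SSH, a rough sketch looks like this, assuming the image has (or can install) openssh-server and that sshd reads /etc/ssh/sshd_config.d/ as on recent Debian releases:
# inside the VM, over the serial console:
apt install openssh-server
echo 'PermitRootLogin yes' > /etc/ssh/sshd_config.d/ci.conf
passwd root
systemctl restart ssh
# then, from the host, through the port forward shown above:
ssh -p 10022 root@127.0.0.1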

Nitty-gritty details no one cares about

Fixing hang in sbuild cleanup I'm having a hard time making heads or tails of this, but please bear with me. In sbuild + schroot, there's this notion that we don't really need to clean up after ourselves inside the schroot, as the schroot will just be deleted anyways. This behavior seems to be handled by the internal "Session Purged" parameter. At least in lib/Sbuild/Build.pm, we can see this:
my $is_cloned_session = (defined ($session->get('Session Purged')) &&
             $session->get('Session Purged') == 1) ? 1 : 0;
[...]
if ($is_cloned_session) {
    $self->log("Not cleaning session: cloned chroot in use\n");
} else {
    if ($purge_build_deps) {
        # Removing dependencies
        $resolver->uninstall_deps();
    } else {
        $self->log("Not removing build depends: as requested\n");
    }
}
The schroot builder defines that parameter as:
    $self->set('Session Purged', $info->{'Session Purged'});
... which is ... a little confusing to me. $info is:
my $info = $self->get('Chroots')->get_info($schroot_session);
... so I presume that depends on whether the schroot was correctly cleaned up? I stopped digging there... ChrootUnshare.pm is way more explicit:
$self->set('Session Purged', 1);
I wonder if we should do something like this with the autopkgtest backend. I guess people might technically use it with something other than qemu, but qemu is the typical use case of the autopkgtest backend, in my experience. Or at least certainly with things that clean up after themselves, right? For some reason, before I added this line to my configuration:
$purge_build_deps = 'never';
... the "Cleanup" step would just completely hang. It was quite bizarre.

Digression on the diversity of VM-like things There are a lot of different virtualization solutions one can use (e.g. Xen, KVM, Docker or VirtualBox). I have also found libguestfs to be useful to operate on virtual images in various ways. Libvirt and Vagrant are also useful wrappers on top of the above systems. There are, in particular, a lot of different tools which use Docker, virtual machines, or some sort of isolation stronger than chroot to build packages; here are some of the alternatives I am aware of. Take, for example, Whalebuilder, which uses Docker to build packages instead of pbuilder or sbuild. Docker provides more isolation than a simple chroot: in whalebuilder, packages are built without network access and inside a virtualized environment. Keep in mind there are limitations to Docker's security, and that pbuilder and sbuild do build under a different user, which will limit the security issues with building untrusted packages. On the upside, some of those things are being fixed: whalebuilder is now an official Debian package (whalebuilder) and has added the feature of passing custom arguments to dpkg-buildpackage. None of those solutions (except the autopkgtest/qemu backend) are implemented as an sbuild plugin, which would greatly reduce their complexity. I was previously using Qemu directly to run virtual machines, and had to create VMs by hand with various tools. This didn't work so well, so I switched to using Vagrant as a de-facto standard to build development environment machines, but I'm returning to Qemu because it uses a similar backend as KVM and can be used to host longer-running virtual machines through libvirt. The great thing now is that autopkgtest has good support for qemu, and sbuild has bridged the gap and can use it as a build backend. I originally had found those bugs in that setup, but all of them are now fixed:
  • #911977: sbuild: how do we correctly guess the VM name in autopkgtest?
  • #911979: sbuild: fails on chown in autopkgtest-qemu backend
  • #911963: autopkgtest qemu build fails with proxy_cmd: parameter not set
  • #911981: autopkgtest: qemu server warns about missing CPU features
So we have unification! It's possible to run your virtual machines and Debian builds using a single VM image backend storage, which is no small feat, in my humble opinion. See the sbuild-qemu blog post for the announcement. Now I just need to figure out how to merge Vagrant, GNOME Boxes, and libvirt together, which should be a matter of placing images in the right place... right? See also hosting.

pbuilder vs sbuild I was previously using pbuilder and switched in 2017 to sbuild. AskUbuntu.com has a good comparison between pbuilder and sbuild that shows they are pretty similar. The big advantage of sbuild is that it is the tool in use on the buildds, and it's written in Perl instead of shell. My concerns about switching were POLA (I'm used to pbuilder), the fact that pbuilder runs as a separate user (which works with sbuild as well now, if the _apt user is present), and setting up COW semantics in sbuild (you can't just plug cowbuilder in there; you need to configure overlayfs or aufs, which was non-trivial in Debian jessie). Ubuntu folks, again, have more documentation there. Debian also has extensive documentation, especially about how to configure overlays. I was ultimately convinced by stapelberg's post on the topic, which shows how much simpler sbuild really is...
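For reference, the overlay-based schroot setup mentioned above boils down to a stanza like this in /etc/schroot/chroot.d/ (a sketch only, assuming a chroot already unpacked under /srv/chroot/unstable-amd64-sbuild; older schroot releases spell the union type overlayfs):
[unstable-amd64-sbuild]
description=Debian unstable/amd64 for sbuild
type=directory
directory=/srv/chroot/unstable-amd64-sbuild
groups=root,sbuild
root-groups=root,sbuild
profile=sbuild
union-type=overlay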

Who Thanks lavamind for the introduction to the sbuild-qemu package.

22 January 2021

Enrico Zini: Assembling the custom runner

This post is part of a series about trying to setup a gitlab runner based on systemd-nspawn. I published the polished result as nspawn-runner on GitHub. The plan Back to custom runners, here's my plan. The scripts Here are the scripts, based on Federico's work. base.sh, with definitions sourced by all scripts:
MACHINE="run-$CUSTOM_ENV_CI_JOB_ID"
ROOTFS="/var/lib/gitlab-runner-custom-chroots/buster"
OVERLAY="/var/lib/gitlab-runner-custom-chroots/$MACHINE"
config.sh doing nothing:
#!/bin/sh
exit 0
prepare.sh starting the machine:
#!/bin/bash
source $(dirname "$0")/base.sh
set -eo pipefail
# trap errors as a CI system failure
trap "exit $SYSTEM_FAILURE_EXIT_CODE" ERR
logger "gitlab CI: preparing $MACHINE"
mkdir -p $OVERLAY
systemd-run \
  -p 'KillMode=mixed' \
  -p 'Type=notify' \
  -p 'RestartForceExitStatus=133' \
  -p 'SuccessExitStatus=133' \
  -p 'Slice=machine.slice' \
  -p 'Delegate=yes' \
  -p 'TasksMax=16384' \
  -p 'WatchdogSec=3min' \
  systemd-nspawn --quiet -D $ROOTFS \
    --overlay="$ROOTFS:$OVERLAY:/"
    --machine="$MACHINE" --boot --notify-ready=yes
run.sh running the provided scripts in the machine:
#!/bin/bash
logger "gitlab CI: running $@"
source $(dirname "$0")/base.sh
set -eo pipefail
trap "exit $SYSTEM_FAILURE_EXIT_CODE" ERR
systemd-run --quiet --pipe --wait --machine="$MACHINE" /bin/bash < "$1"
cleanup.sh stopping the machine and removing the writable overlay directory:
#!/bin/bash
logger "gitlab CI: cleanup $@"
source $(dirname "$0")/base.sh
machinectl stop "$MACHINE"
rm -rf $OVERLAY
Trying out the plan I tried a manual invocation of gitlab-runner, and it worked perfectly:
# mkdir /var/lib/gitlab-runner-custom-chroots/build/
# mkdir /var/lib/gitlab-runner-custom-chroots/cache/
# gitlab-runner exec custom \
    --builds-dir /var/lib/gitlab-runner-custom-chroots/build/ \
    --cache-dir /var/lib/gitlab-runner-custom-chroots/cache/ \
    --custom-config-exec /var/lib/gitlab-runner-custom-chroots/config.sh \
    --custom-prepare-exec /var/lib/gitlab-runner-custom-chroots/prepare.sh \
    --custom-run-exec /var/lib/gitlab-runner-custom-chroots/run.sh \
    --custom-cleanup-exec /var/lib/gitlab-runner-custom-chroots/cleanup.sh \
    tests
Runtime platform                                    arch=amd64 os=linux pid=18662 revision=775dd39d version=13.8.0
Running with gitlab-runner 13.8.0 (775dd39d)
Preparing the "custom" executor
Using Custom executor...
Running as unit: run-r1be98e274224456184cbdefc0690bc71.service
executor not supported                              job=1 project=0 referee=metrics
Preparing environment
Getting source from Git repository
Executing "step_script" stage of the job script
WARNING: Starting with version 14.0 the 'build_script' stage will be replaced with 'step_script': https://gitlab.com/gitlab-org/gitlab-runner/-/issues/26426
Job succeeded
Deploy The remaining step is to deploy all this in /etc/gitlab-runner/config.toml:
concurrent = 1
check_interval = 0
[session_server]
  session_timeout = 1800
[[runners]]
  name = "nspawn runner"
  url = "http://gitlab.siweb.local/"
  token = " "
  executor = "custom"
  builds_dir = "/var/lib/gitlab-runner-custom-chroots/build/"
  cache_dir = "/var/lib/gitlab-runner-custom-chroots/cache/"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.custom]
    config_exec = "/var/lib/gitlab-runner-custom-chroots/config.sh"
    config_exec_timeout = 200
    prepare_exec = "/var/lib/gitlab-runner-custom-chroots/prepare.sh"
    prepare_exec_timeout = 200
    run_exec = "/var/lib/gitlab-runner-custom-chroots/run.sh"
    cleanup_exec = "/var/lib/gitlab-runner-custom-chroots/cleanup.sh"
    cleanup_exec_timeout = 200
    graceful_kill_timeout = 200
    force_kill_timeout = 200
Next steps My next step will be polishing all this in a way that makes deploying and maintaining a runner configuration easy.

Enrico Zini: Exploring nspawn for CIs

This post is part of a series about trying to setup a gitlab runner based on systemd-nspawn. I published the polished result as nspawn-runner on GitHub. Here I try to figure out possible ways of invoking nspawn for the prepare, run, and cleanup steps of gitlab custom runners. The results might be useful invocations beyond Gitlab's scope of application. I begin with a chroot which will be the base for our build environments:
debootstrap --variant=minbase --include=git,build-essential buster workdir
Fully ephemeral nspawn This would be fantastic: set up a reusable chroot, mount readonly, run the CI in a working directory mounted on tmpfs. It sets up quickly, it cleans up after itself, and it would make prepare and cleanup noops:
mkdir workdir/var/lib/gitlab-runner
systemd-nspawn --read-only --directory workdir --tmpfs /var/lib/gitlab-runner "$@"
However, run gets run multiple times, so I need the side effects of run to persist inside the chroot between runs. Also, if the CI uses a large amount of disk space, tmpfs may get into trouble.

nspawn with overlay Federico used --overlay to keep the base chroot readonly while allowing persistent writes on a temporary directory on the filesystem. Note that using --overlay requires systemd and systemd-container from buster-backports because of systemd bug #3847. Example:
mkdir -p tmp-overlay
systemd-nspawn --quiet -D workdir \
  --overlay=" pwd /workdir: pwd /tmp-overlay:/"
I can run this twice, and changes in the file system will persist between systemd-nspawn executions. Great! However, any process will be killed at the end of each execution.

machinectl I can give a name to systemd-nspawn invocations using --machine, and it allows me to run multiple commands during the machine lifespan using machinectl and systemd-run. In theory machinectl can also fully manage chroots and disk images in /var/lib/machines, but I haven't found a way with machinectl to start multiple machines sharing the same underlying chroot. It's ok, though: I managed to do that with systemd-nspawn invocations. I can use the --machine=name argument to systemd-nspawn to make it visible to machinectl. I can use the --boot argument to systemd-nspawn to start enough infrastructure inside the container to allow machinectl to interact with it. This gives me any number of persistent, named running systems that share the same underlying chroot and can clean up after themselves. I can run commands in any of those systems as I like, and their side effects persist until a system is stopped. The chroot needs systemd and dbus for machinectl to be able to interact with it:
debootstrap --variant=minbase --include=git,systemd,dbus,build-essential buster workdir
Let's boot the machine:
mkdir -p overlay
systemd-nspawn --quiet -D workdir \
    --overlay=" pwd /workdir: pwd /overlay:/"
    --machine=test --boot
Let's try machinectl:
# machinectl list
MACHINE CLASS     SERVICE        OS     VERSION ADDRESSES
test    container systemd-nspawn debian 10      -
1 machines listed.
# machinectl shell --quiet test /bin/ls -la /
total 60
[ ]
To run commands, rather than machinectl shell, I need to use systemd-run --wait --pipe --machine=name, otherwise machined won't forward the exit code. The result however is pretty good, with working stdin/stdout/stderr redirection and a forwarded exit code. Good, I'm getting somewhere. The terminal where I ran systemd-nspawn is currently showing a nice getty for the booted system, which is cute, and not what I want for the setup process of a CI.

Spawning machines without needing a terminal machinectl uses /lib/systemd/system/systemd-nspawn@.service to start machines. I suppose there's limited magic in there: start systemd-nspawn as a service, use --machine to give it a name, and machinectl manages it as if it had started it itself. What if, instead of installing a unit file for each CI run, I try to do the same thing with systemd-run?
systemd-run \
  -p 'KillMode=mixed' \
  -p 'Type=notify' \
  -p 'RestartForceExitStatus=133' \
  -p 'SuccessExitStatus=133' \
  -p 'Slice=machine.slice' \
  -p 'Delegate=yes' \
  -p 'TasksMax=16384' \
  -p 'WatchdogSec=3min' \
  systemd-nspawn --quiet -D $(pwd)/workdir \
    --overlay="$(pwd)/workdir:$(pwd)/overlay:/" \
    --machine=test --boot
It works! I can interact with it using machinectl, and fine tune DevicePolicy as needed to lock CI machines down. This setup has a race condition where if I try to run a command inside the machine in the short time window before the machine has finished booting, it fails:
# systemd-run [ ] systemd-nspawn [ ] ; machinectl --quiet shell test /bin/ls -la /
Failed to get shell PTY: Protocol error
# machinectl shell test /bin/ls -la /
Connected to machine test. Press ^] three times within 1s to exit session.
total 60
[ ]
systemd-nspawn has the option --notify-ready=yes that solves exactly this problem:
# systemd-run [ ] systemd-nspawn [ ] --notify-ready=yes ; machinectl --quiet shell test /bin/ls -la /
Running as unit: run-r5a405754f3b740158b3d9dd5e14ff611.service
total 60
[ ]
On nspawn's side, I should now have all I need.

Next steps My next step will be wrapping it all together in a gitlab runner.

Enrico Zini: Gitlab runners with nspawn

This is the first post in a series about trying to setup a gitlab runner based on systemd-nspawn. I published the polished result as nspawn-runner on GitHub.

The goal I need to setup gitlab runners, and I try not to involve docker in my professional infrastructure if I can avoid it. Let's try systemd-nspawn. It's widely available and reasonably reliable. I'm not the first to have this idea: Federico Ceratto made a setup based on custom runners and Josef Kufner one based on ssh runners. I'd like to skip the complication of ssh, and to expand Federico's version to persist not just filesystem changes but also any other side effect of CI commands. For example, one CI command may bring up a server and the next CI command may want to test interfacing with it.

Understanding gitlab-runner First step: figuring out gitlab-runner.

Test runs of gitlab-runner I found that I can run gitlab-runner manually without needing to go through a push to Gitlab. It needs a local git repository with a .gitlab-ci.yml file:
mkdir test
cd test
git init
cat > .gitlab-ci.yml << EOF
tests:
 script:
  - env | sort
  - pwd
  - ls -la
EOF
git add .gitlab-ci.yml
git commit -am "Created a test repo for gitlab-runner"
Then I can go in the repo and test gitlab-runner:
gitlab-runner exec shell tests
It doesn't seem to use /etc/gitlab-runner/config.toml and it needs all the arguments passed on its command line. I used the shell runner for a simple initial test. Later I'll try to brew a gitlab-runner exec custom invocation that uses nspawn.

Basics of custom runners A custom runner runs a few scripts to manage the run: run gets at least one argument, which is a path to the script to run. The other scripts get no arguments by default. The runner configuration controls the paths of the scripts to run, and optionally extra arguments to pass to them.

Next steps My next step will be to figure out possible ways of invoking nspawn for the prepare, run, and cleanup scripts.

12 July 2020

Enrico Zini: Police brutality links

I was a police officer for nearly ten years and I was a bastard. We all were.
As nationwide protests over the deaths of George Floyd and Breonna Taylor are met with police brutality, John Oliver discusses how the histories of policing ...
The death of Stefano Cucchi occurred in Rome on 22 October 2009 while the young man was being held in pre-trial detention. The causes of death and the responsibilities are the subject of judicial proceedings that have involved, on one side, the doctors of the Pertini hospital,[1][2][3][4] and on the other continue to involve, in various capacities, several members of the Arma dei Carabinieri.[5][6] The case attracted the attention of public opinion following the publication of the autopsy photos, later picked up by Italian press agencies, newspapers and television news.[7] The story has also inspired documentaries and feature films.[8][9][10]
The death of Giuseppe Uva occurred on 14 June 2008 after, during the night between 13 and 14 June, he had been stopped while drunk by two carabinieri who took him to the barracks, from which he was then transferred, for compulsory medical treatment, to the hospital in Varese, where he died the following morning of cardiac arrest. According to the prosecution, the death was caused by the physical restraint suffered during the arrest and by the subsequent violence and torture he endured in the barracks. The trial of the two carabinieri who carried out the arrest and of six other police officers acquitted the defendants of the charges of unintentional homicide and unlawful detention.[1][2][3][4] The documentary Viva la sposa by Ascanio Celestini is dedicated to the story.[1][5]
The Aldrovandi case is the judicial affair arising from the killing of Federico Aldrovandi, a student from Ferrara, on 25 September 2005 following a police stop.[1][2][3] In the judicial proceedings, on 6 July 2009, four police officers were sentenced to 3 years and 6 months of imprisonment for "culpable excess in the legitimate use of weapons";[1][4] on 21 June 2012 the Court of Cassation upheld the conviction.[1] The inquiry to establish the cause of death was followed by others into alleged cover-ups and into the complaints filed between the parties involved.[1] The case was the object of great media attention and inspired a documentary, È stato morto un ragazzo.[1][5]
Federico Aldrovandi (17 July 1987, Ferrara - 25 September 2005, Ferrara) was an Italian student, who was killed by four policemen.[1]
24 June 2020

2 November 2017

Antoine Beaupré: October 2017 report: LTS, feed2exec beta, pandoc filters, git mediawiki

Debian Long Term Support (LTS) This is my monthly Debian LTS report. This time I worked on the famous KRACK attack, git-annex, golang and the continuous stream of GraphicsMagick security issues.

WPA & KRACK update I spent most of my time this month on the Linux WPA code, to backport it to the old (~2012) wpa_supplicant release. I first published a patchset based on the patches shipped after the embargo for the oldstable/jessie release. After feedback from the list, I also built packages for i386 and ARM. I have also reviewed the WPA protocol to make sure I understood the implications of the changes required to backport the patches. For example, I removed the patches touching the WNM sleep mode code as that was introduced only in the 2.0 release. Chunks of code regarding state tracking were also not backported as they are part of the state tracking code introduced later, in 3ff3323. Finally, I still have concerns about the nonce setup in patch #5. In the last chunk, you'll notice peer->tk is reset, to_set to negotiate a new TK. The other approach I considered was to backport 1380fcbd9f ("TDLS: Do not modify RNonce for an TPK M1 frame with same INonce") but I figured I would play it safe and not introduce further variations. I should note that I share Matthew Green's observations regarding the opacity of the protocol. Normally, network protocols are freely available and security researchers like me can easily review them. In this case, I would have needed to read the opaque 802.11i-2004 pdf which is behind a TOS wall at the IEEE. I ended up reading up on the IEEE_802.11i-2004 Wikipedia article which gives a simpler view of the protocol. But it's a real problem to see such critical protocols developed behind closed doors like this. At Guido's suggestion, I sent the final patch upstream explaining the concerns I had with the patch. I have not, at the time of writing, received any response from upstream about this, unfortunately. I uploaded the fixed packages as DLA 1150-1 on October 31st.

Git-annex The next big chunk on my list was completing the work on git-annex (CVE-2017-12976) that I started in August. It turns out doing the backport was simpler than I expected, even with my rusty experience with Haskell. Type-checking really helps in doing the right thing, especially considering how Joey Hess implemented the fix: by introducing a new type. So I backported the patch from upstream and notified the security team that the jessie and stretch updates would be similarly easy. I shipped the backport to LTS as DLA-1144-1. I also shared the updated packages for jessie (which required a similar backport) and stretch (which didn't), and Sébastien Delafond published those as DSA 4010-1.

Graphicsmagick Up next was yet another security vulnerability in the Graphicsmagick stack. This involved the usual deep dive into intricate and sometimes just unreasonable C code to try and fit a round tree in a square sinkhole. I'm always unsure about those patches, but the test suite passes, smoke tests show the vulnerability as fixed, and that's pretty much as good as it gets. The announcement (DLA 1154-1) turned out to be a little special because I had previously noticed that the penultimate announcement (DLA 1130-1) was never sent out. So I made a merged announcement to cover both instead of re-sending the original 3 weeks late, which may have been confusing for our users.

Triage & misc We always do a bit of triage even when not on frontdesk duty, so I: I also did smaller bits of work on: The latter reminded me of the concerns I have about the long-term maintainability of the golang ecosystem: because everything is statically linked, an update to a core library (say the SMTP library as in CVE-2017-15042, thankfully not affecting LTS) requires a full rebuild of all packages including the library in all distributions. So what would be a simple update in a shared library system could mean an explosion of work on statically linked infrastructures. This is a lot of work which can definitely be error-prone: as I've seen in other updates, some packages (for example the Ruby interpreter) just bit-rot on their own and eventually fail to build from source. We would also have to investigate all packages to see which one include the library, something which we are not well equipped for at this point. Wheezy was the first release shipping golang packages but at least it's shipping only one... Stretch has shipped with two golang versions (1.7 and 1.8) which will make maintenance ever harder in the long term.
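One rough way to see which binary packages statically embed a given Go library is to query the Built-Using field in the archive indices, for example with grep-dctrl; this is only a sketch, and both the library name and the path to the Packages file are placeholders to adjust for your mirror and architecture:
grep-dctrl -F Built-Using -s Package,Built-Using golang-go.crypto \
    /var/lib/apt/lists/*_dists_unstable_main_binary-amd64_Packages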
We build our computers the way we build our cities--over time, without a plan, on top of ruins. - Ellen Ullman

Other free software work This month again, I was busy doing some serious yak shaving operations all over the internet, on top of publishing two of my largest LWN articles to date (2017-10-16-strategies-offline-pgp-key-storage and 2017-10-26-comparison-cryptographic-keycards).

feed2exec beta Since I announced this new project last month I have released it as a beta and it entered Debian. I have also written useful plugins like the wayback plugin that saves pages on the Wayback machine for eternal archival. The archive plugin can also similarly save pages to the local filesystem. I also added bash completion, expanded unit tests and documentation, fixed default file paths and a bunch of bugs, and refactored the code. Finally, I also started using two external Python libraries instead of rolling my own code: the pyxdg and requests-file libraries, the latter of which I packaged in Debian (and fixed a bug in their test suite). The program is working pretty well for me. The only thing I feel is really missing now is a retry/fail mechanism. Right now, it's a little brittle: any network hiccup will yield an error email, which is readable to me but could be confusing to a new user. Strangely enough, I am particularly having trouble with (local!) DNS resolution that I need to look into, but that is probably unrelated to the software itself. Thankfully, the user can disable those with --loglevel=ERROR to silence WARNINGs. Furthermore, some plugins still have some rough edges. For example, the Transmission integration would probably work better as a distinct plugin instead of a simple exec call, because when it adds new torrents, the output is totally cryptic. That plugin could also leverage more feed parameters to save different files in different locations depending on the feed titles, something that would be hard to do safely with the exec plugin now. I am keeping a steady flow of releases. I wish there was a way to see how effective I am at reaching out with this project, but unfortunately GitLab doesn't provide usage statistics... And I have received only a few comments on IRC about the project, so maybe I need to reach out more, like it says in the fine manual. Always feels strange to have to promote your project like it's some new bubbly soap... Next steps for the project are a final review of the API and a production-ready 1.0.0 release. I am also thinking of making a small screencast to show the basic capabilities of the software, maybe with asciinema's upcoming audio support?

Pandoc filters As I mentioned earlier, I dove again into Haskell programming when working on the git-annex security update. But I also have a small Haskell program of my own - a Pandoc filter that I use to convert the HTML articles I publish on LWN.net into an Ikiwiki-compatible markdown version. It turns out the script was still missing a bunch of stuff: image sizes, proper table formatting, etc. I also worked hard on automating more bits of the publishing workflow by extracting the time from the article, which allowed me to simply extract the full article into an almost final copy just by specifying the article ID. The only thing left is to add tags, and the article is complete. In the process, I learned about new weird Haskell constructs. Take this code, for example:
-- remove needless blockquote wrapper around some tables
--
-- haskell newbie tips:
--
-- @ is the "at-pattern", allows us to define both a name for the
-- construct and inspect the contents at once
--
-- {} is the "empty record pattern": it basically means "match the
-- arguments but ignore the args"
cleanBlock (BlockQuote t@[Table {}]) = t
Here the idea is to remove <blockquote> elements needlessly wrapping a <table>. I can't specify the Table type on its own, because then I couldn't address the table as a whole, only its parts. I could reconstruct the whole table bits by bits, but it wasn't as clean. The other pattern was how to, at last, address multiple string elements, which was difficult because Pandoc treats spaces specially:
cleanBlock (Plain (Strong (Str "Notifications":Space:Str "for":Space:Str "all":Space:Str "responses":_):_)) = []
The last bit that drove me crazy was the date parsing:
-- the "GAByline" div has a date, use it to generate the ikiwiki dates
--
-- this is distinct from cleanBlock because we do not want to have to
-- deal with time there: it is only here we need it, and we need to
-- pass it in here because we do not want to mess with IO (time is I/O
-- in haskell) all across the function hierarchy
cleanDates :: ZonedTime -> Block -> [Block]
-- this mouthful is just the way the data comes in from
-- LWN/Pandoc. there could be a cleaner way to represent this,
-- possibly with a record, but this is complicated and obscure enough.
cleanDates time (Div (_, [cls], _)
                 [Para [Str month, Space, Str day, Space, Str year], Para _])
    | cls == "GAByline" = ikiwikiRawInline (ikiwikiMetaField "date"
                                           (iso8601Format (parseTimeOrError True defaultTimeLocale "%Y-%B-%e,"
                                                           (year ++ "-" ++ month ++ "-" ++ day) :: ZonedTime)))
                        ++ ikiwikiRawInline (ikiwikiMetaField "updated"
                                             (iso8601Format time))
                        ++ [Para []]
-- other elements just pass through
cleanDates time x = [x]
Now that seems just dirty, but it was even worse before. One thing I find difficult in adapting to coding in Haskell is that you need to take the habit of writing smaller functions. The language is really not well adapted to long discourse: it's more about getting small things connected together. Other languages (e.g. Python) discourage this because there's some overhead in calling functions (10 nanoseconds in my tests, but still), whereas functions are a fundamental and important construct in Haskell that is much more heavily optimized. So I constantly need to remind myself to split things up early, otherwise I can't do anything in Haskell. Other languages are more lenient, which does mean my code can be more dirty, but I feel I get things done faster that way. The oddity of Haskell makes it frustrating to work with. It's like doing construction work but you're not allowed to get the floor dirty. When I build stuff, I don't mind things being dirty: I can clean up afterwards. This is especially critical when you don't actually know how to make things clean in the first place, as Haskell will simply not let you do that at all. And obviously, I fought with Monads, or, more specifically, "I/O" or IO in this case. Turns out that getting the current time is IO in Haskell: indeed, it's not a "pure" function that will always return the same thing. But this means that I would have had to change the signature of all the functions that touched time to include IO. I eventually moved the time initialization up into main so that I had only one IO function and moved that timestamp downwards as a simple argument. That way I could keep the rest of the code clean, which seems to be an acceptable pattern. I would of course be happy to get feedback from my Haskell readers (if any) to see how to improve that code. I am always eager to learn.

Git remote MediaWiki Few people know that there is a MediaWiki remote for Git which allows you to mirror a MediaWiki site as a Git repository. As a disaster recovery mechanism, I have been keeping such a historical backup of the Amateur radio wiki for a while now. This originally started as a homegrown Python script to also convert the contents to Markdown. My theory then was to see if we could switch from Mediawiki to Ikiwiki, but it took so long to implement that I never completed the work. When someone had the weird idea of renaming a page to some impossibly long name on the wiki, my script broke. I tried to look at fixing it and then remembered I also had a mirror running using the Git remote. It turns out it also broke on the same issue, and that got me looking into the remote again. I got lost in a zillion issues, including fixing that specific issue, but I especially looked at the possibility of fetching all namespaces because I realized that the remote fetches only a part of the wiki by default. And that drove me to submit namespace support as a patch to the git mailing list. Finally, the discussion came back to how to actually maintain that contrib: in git core or outside? It looks like I'll be doing some maintenance of that project outside of git core, as I was granted access to the GitHub organisation...
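For the curious, the mirroring itself is just a clone through the mediawiki:: remote helper; a minimal sketch, assuming the git-remote-mediawiki helper is installed and using a placeholder wiki URL (the URL should point at the wiki root, where api.php lives):
# clone the wiki as a git repository, then refresh the mirror later with a plain pull
git clone mediawiki::https://wiki.example.org/w wiki-mirror
cd wiki-mirror
git pull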

Galore Yak Shaving Then there's the usual hodgepodge of fixes and random things I did over the month.
There is no [web extension] only XUL! - Inside joke

25 August 2017

Reproducible builds folks: Reproducible Builds: Weekly report #121

Here's what happened in the Reproducible Builds effort between Sunday August 13 and Saturday August 19 2017: Reproducible Builds finally mandated by Debian Policy "Packages should build reproducibly" was merged into Debian policy! The added text is as follows and has been included into debian-policy 4.1.0.0:
Reproducibility
---------------
Packages should build reproducibly, which for the purposes of this
document [#]_ means that given
- a version of a source package unpacked at a given path;
- a set of versions of installed build dependencies;
- a set of environment variable values;
- a build architecture; and
- a host architecture,
repeatedly building the source package for the build architecture on
any machine of the host architecture with those versions of the build
dependencies installed and exactly those environment variable values
set will produce bit-for-bit identical binary packages.
It is recommended that packages produce bit-for-bit identical binaries
even if most environment variables and build paths are varied.  It is
intended for this stricter standard to replace the above when it is
easier for packages to meet it.
.. [#]
   This is Debian's precisification of the reproducible-builds.org
   definition.
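In practice, "bit-for-bit identical" is straightforward to check for a single package: rebuild it twice under the conditions above and compare the two results, for instance with diffoscope (a sketch; the two .deb paths are placeholders for your own build outputs):
diffoscope first-build/hello_2.10-1_amd64.deb second-build/hello_2.10-1_amd64.deb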
Reproducible work in other projects Bernhard M. Wiedemann's reproducibleopensuse scripts now work on Debian buster on the openSUSE Build Service with the latest versions of osc and obs-build. Toolchain development and fixes #872514 was opened on devscripts by Chris Lamb to add a reproducible-check program to report on the reproducibility status of installed packages. Packages reviewed and fixed, and bugs filed Upstream reports: Debian reports: Debian non-maintainer uploads: Reviews of unreproducible packages 47 package reviews have been added, 58 have been updated and 39 have been removed in this week, adding to our knowledge about identified issues. 4 issue types have been updated: Weekly QA work During our reproducibility testing, FTBFS bugs have been detected and reported by: diffoscope development Development continued in git, including the following contributions: disorderfs development Development continued in git, including the following contributions: reprotest development Development continued in git, including the following contributions: tests.reproducible-builds.org Mattia fixed the script which creates the HTML representation of our database scheme to not append .html twice to the filename. Misc. This week's edition was written by Ximin Luo, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

27 July 2016

Norbert Preining: TUG 2016 Day 2 Figures to Fonts

The second day of TUG 2016 was again full of interesting talks, spanning from user experiences to highly technical details about astrological chart drawing, and from graphical user interfaces for TikZ to the invited talk by Robert Bringhurst on the Palatino family of fonts. With all these interesting things there is only one thing to complain about: I cannot get out of the dark basement and enjoy the city. After an evening full of sake and a good night's sleep we were ready to dive into the second day of TUG.

Kaveh Bazargan A graphical user interface for TikZ The opening speaker of Day 2 was Kaveh. He first gave us a quick run-down on what he is doing for business and what challenges publishers are facing in these times. After that he introduced us to his new development of a command line graphical user interface for TikZ. I wrote command line on purpose, because the editing operations are short commands issued on a kind of command line, which gives immediate graphical feedback. The basis of the technique is a simplified TikZ-like meta language that is not only easy to write, but also easy to parse. While the amount of supported commands and features of TikZ is still quite small, I think the basic idea is a good one, and there is good potential in it.

Matthew Skala Astrological charts with horoscop and starfont Next up was Matthew, who introduced us to the involved task of typesetting astrological charts. He included comparisons with various commercial and open source solutions, where Matthew of course, but me too, felt that his charts came off quite well! As an extra bonus we got some charts of famous singers, as well as the TUG 2016 horoscope.

David Tulett Development of an e-textbook using LaTeX and PStricks David reported on his project to develop an e-textbook on decision modeling (lots of math!) using LaTeX and PStricks. His e-book is of course a PDF. There was a lot of very welcoming feedback: free (CC-BY-NC-ND) textbooks for the sciences are rare and we need more of them.

Christian Gagné An Emacs-based writing workflow inspired by TeX and WEB, targeting the Web Christian's talk turned around editing and publishing using org-mode of Emacs and the various levels of macros one can use in this setup. He finished with a largely incomprehensible vision of a future equational-logic-based notation mode. I have used equational logic in my day-in-day-out job, and I am not completely convinced that this is a good approach for typesetting and publishing, but who knows, I am looking forward to a more logic-based approach!

Barbara Beeton, Frank Mittelbach In memoriam: Sebastian Rahtz (1955-2016) Frank recalled Sebastian's many contributions to a huge variety of fields, and recalled our much missed colleague with many photos and anecdotes.

Jim Hefferon A LaTeX reference manual Jim reported about the current state of a LaTeX reference manual, which tries to provide documentation orthogonal to the many introductions and user guides available, by providing a straight, down-to-earth reference manual with all the technical bells and whistles necessary. As I had to write a reference manual for a computer language myself, it was very interesting to see how they dealt with many of the same problems I am facing.
Arthur Reutenauer, Mojca Miklavec Hyphenation past and future: hyph-utf8 and patgen Arthur reported about the current status of the hyphenation pattern project, and in particular the license and usage hell they recently got into, with large corporations simply grabbing the patterns without proper attribution. In a second part he gave a rough sketch of his shot at a reimplementation of patgen. Unfortunately he wrote in rather unreadable hand-writing on a flip-chart, which meant only the first row of the audience could actually see what he was writing.

Federico Garcia-De Castro TeXcel? As an artist organizing large festivals, Federico has to fight with financial planning and reports. He seemed not content with the abilities of the usual suspects, so he developed a way to do Excel-like book-keeping in TeX. Nice idea, I hope I can use that system for the next conference I have to organize!

Jennifer Claudio A brief reflection on TeX and end-user needs Last speaker in the morning session was Jennifer, who gave us a new end-user's view onto the TeX environment and the respective needs. These kinds of talks are a very much welcomed contrast to technical talks, and hopefully all of us developers take home some of her suggestions.

Sungmin Kim, Jaeyoung Choi, Geunho Jeong MFCONFIG: Metafont plug-in module for the Freetype rasterizer Jaeyoung reported about an impressive project to make Metafont fonts available to fontconfig and thus to windowing systems. He also explained their development of a new font format, Stemfont, which is a Metafont-like system that can work also for CJK fonts, and which they envisage being built into all kinds of mobile devices.

Michael Sharpe New font offerings Cochineal, Nimbus15 and LibertinusT1Math Michael reported about his latest font projects, the first two being extensions of the half-made, half-butchered re-released URW fonts, as well as his first (?) math font project. I talked to him over lunch one day and asked him how many man-days he needed for these fonts, and his answer said a lot: for the really messed-up new URW fonts, like Cochineal, he guessed about 5 man-months of work, while other fonts only needed a few days. I think we all can be deeply thankful for all the work he is investing into all these font projects.

Robert Bringhurst The evolution of the Palatino tribe The second invited talk was by Robert Bringhurst, famous for his wide contributions to typography, book culture in general, as well as poetry. He gave a quick historic overview of the development of the Palatino tribe of fonts, with lots of beautiful photos. I was really looking forward to Robert's talk, and my expectations were extremely high. And unfortunately I must say I was quite disappointed. Maybe it is his style of presentation, but the feeling he transferred to me (the audience?) was that he was going through a necessary medical check, not much enjoying the presentation. Also, the content itself was not really full of his own ideas or thoughts, but a rather superficial listing of historical facts. Of course, a person like Robert Bringhurst is so full of anecdotes and background knowledge that it still was a great pleasure to listen to him, with lots of things to learn; I only hoped for a bit more enthusiasm.

TUG Annual General Meeting The afternoon session finished with the TUG Annual General Meeting; reports will be sent out soon to all TUG members.

Herbert Schulz Optional workshop: TeXShop tips & tricks After the AGM, Herbert from MacTeX and TeXShop gave an on-the-spot workshop on TeXShop.
Since I am not a Mac user, I skipped on that.
Another late-afternoon program consisted of an excursion to Eliot's bookshop, where many of us stocked up on great books. This time again I skipped it and took a nap. In the evening we had a rather interesting informal dinner in the food court of some building, where only two shops were open and all of us lined up in front of the Japanese curry shop, and then gulped down food from plastic boxes. Hmm, not my style, I have to say, not even for an informal dinner. But at least I could meet up with a colleague from Debian and get some gpg key signing done. And of course, talk to all kinds of people around. The last stop for me was the pub opposite the hotel, with beer and whiskey/scotch selected by specialists in the field.

24 September 2014

Matthew Garrett: My free software will respect users or it will be bullshit

I had dinner with a friend this evening and ended up discussing the FSF's four freedoms. The fundamental premise of the discussion was that the freedoms guaranteed by free software are largely academic unless you fall into one of two categories - someone who is sufficiently skilled in the arts of software development to examine and modify software to meet their own needs, or someone who is sufficiently privileged[1] to be able to encourage developers to modify the software to meet their needs.

The problem is that most people don't fall into either of these categories, and so the benefits of free software are often largely theoretical to them. Concentrating on philosophical freedoms without considering whether these freedoms provide meaningful benefits to most users risks these freedoms being perceived as abstract ideals, divorced from the real world - nice to have, but fundamentally not important. How can we tie these freedoms to issues that affect users on a daily basis?

In the past the answer would probably have been along the lines of "Free software inherently respects users", but reality has pretty clearly disproven that. Unity is free software that is fundamentally designed to tie the user into services that provide financial benefit to Canonical, with user privacy as a secondary concern. Despite Android largely being free software, many users are left with phones that no longer receive security updates[2]. Textsecure is free software but the author requests that builds not be uploaded to third party app stores because there's no meaningful way for users to verify that the code has not been modified - and there's a direct incentive for hostile actors to modify the software in order to circumvent the security of messages sent via it.

We're left in an awkward situation. Free software is fundamental to providing user privacy. The ability for third parties to continue providing security updates is vital for ensuring user safety. But in the real world, we are failing to make this argument - the freedoms we provide are largely theoretical for most users. The nominal security and privacy benefits we provide frequently don't make it to the real world. If users do wish to take advantage of the four freedoms, they frequently do so at a potential cost of security and privacy. Our focus on the four freedoms may be coming at a cost to the pragmatic freedoms that our users desire - the freedom to be free of surveillance (be that government or corporate), the freedom to receive security updates without having to purchase new hardware on a regular basis, the freedom to choose to run free software without having to give up basic safety features.

That's why projects like the GNOME safety and privacy team are so important. This is an example of tying the four freedoms to real-world user benefits, demonstrating that free software can be written and managed in such a way that it actually makes life better for the average user. Designing code so that users are fundamentally in control of any privacy tradeoffs they make is critical to empowering users to make informed decisions. Committing to meaningful audits of all network transmissions to ensure they don't leak personal data is vital in demonstrating that developers fundamentally respect the rights of those users. Working on designing security measures that make it difficult for a user to be tricked into handing over access to private data is going to be a necessary precaution against hostile actors, and getting it wrong is going to ruin lives.

The four freedoms are only meaningful if they result in real-world benefits to the entire population, not a privileged minority. If your approach to releasing free software is merely to ensure that it has an approved license and throw it over the wall, you're doing it wrong. We need to design software from the ground up in such a way that those freedoms provide immediate and real benefits to our users. Anything else is a failure.

(title courtesy of My Feminism will be Intersectional or it will be Bullshit by Flavia Dzodan. While I'm less angry, I'm solidly convinced that free software that does nothing to respect or empower users is an absolute waste of time)

[1] Either in the sense of having enough money that you can simply pay, having enough background in the field that you can file meaningful bug reports or having enough followers on Twitter that simply complaining about something results in people fixing it for you

[2] The free software nature of Android often makes it possible for users to receive security updates from a third party, but this is not always the case. Free software makes this kind of support more likely, but it is in no way guaranteed.


14 May 2013

Ulrich Dangel: Debian Ireland Meetup Friday 17th of May

Thanks to Federico, the Debian Irish User Group celebrates the Wheezy release with some pints this Friday (17.05.2013) at 8 at Mac Turcaill's. For more information and a link to the pub, have a look at the mailing list posting from Federico. Oh and by the way: the Irish Debian Community officially launched last year, i.e. debian-dug-ie is #newinwheezy.

1 April 2012

Gregor Herrmann: RC bugs 2012/13

due to some new incoming RC bugs, this week was more devoted to fixing bugs in "our" (= the Debian Perl Group's) packages. here's the list:

7 December 2011

Daniel Stone: why you don't actually want dpi

Inspired by a discussion in #wayland today, here are snippets from three people explaining why X declares its DPI as 96, and why a single 'DPI' sledgehammer isn't actually what basically anyone* wants. Please read them. Thanks. Adam Jackson:
I am clearly going to have to explain this one more time, forever. Let's see if I can't write it authoritatively once and simply answer with a URL from here out.
Matthew Garrett:
But what about the single monitor case? Let's go back to your Vaio. It's got a high DPI screen, so let's adjust to that. Now you're happy. Right up until you plug in an external monitor and now when you run any applications on the external display your fonts are twice the size they should be. WOOHOO GO TEAM of course that won't make us look like amateurs at all. So you need another heuristic to handle that, and of course "heuristic" is an ancient african word meaning "maybe bonghits will make this problem more tractable".
Federico Mena-Quintero:
People who know a bit of typography may know a few factoids:
- Printed books generally use fonts which can be from about 9 to about 12 points in size.
- A point is roughly 1/72 of an inch. For people in civilized countries, this translates to "I have no idea what the fuck a quarter pounder is".
*: Yes, I know you need to have actual point equivalence, and you've had all your displays and printer colour-calibrated for the past ten years too. You're doing all this in the GIMP or some other kind of design tool, so please yell at them to use the display size information that XRandR gives you right now, already, today.
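To make the point-size arithmetic above concrete (1 pt = 1/72 in, so pixels = points / 72 * DPI), here is a throwaway, purely illustrative calculation:
# how tall is a nominal 10 pt glyph in pixels at various pixel densities?
awk 'BEGIN { for (dpi = 96; dpi <= 288; dpi += 96) printf "10 pt at %3d DPI = %4.1f px\n", dpi, 10 / 72 * dpi }'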

26 September 2011

Gunnar Wolf: e-voting: Something is brewing in Jalisco...

There's something brewing, moving in Jalisco (a state in Mexico's West, where our second largest city, Guadalajara, is located). And it seems we have an opportunity to participate, hopefully to be taken into account for the future. Ten days ago, I was contacted by phone by the staff of UDG Noticias for an interview on the Universidad de Guadalajara radio station. The topic? Electronic voting. If you are interested in what I said there, you can get the interview from my webpage. I kept up some e-mail contact with the interviewer, and during the past few days he sent me some links to notes in the La Jornada de Jalisco newspaper and asked for my opinion on them: on September 23, a fellow UNAM researcher, César Astudillo, claims the experience in three municipalities in Jalisco proves that e-voting is viable in the state, and today (September 26), that the third generation of the electronic booth is apparently invulnerable. Of course, I don't agree with the arguments presented (and I'll reproduce the mails I sent to UDG Noticias about it before my second interview just below; they were originally in Spanish, translated here). However, what I liked here is that it does feel like a dialogue. Their successive texts seem to answer my questioning. So, even though I cannot yet claim this is a real dialogue (it would be much better to be able to sit down face to face and have a fluid conversation), it feels very nice to actually be listened to by the other side! My answer to the first note:
The topic of electronic ballot boxes keeps giving people something to talk about here in Jalisco... at Medios UDG we have presented different voices, such as Dr. Gabriel Corona Armenta, who is in favour of electronic voting; Dr. Luis Antonio Sobrado, presiding magistrate of Costa Rica's supreme electoral tribunal, who told us about the 20 million dollars it would cost them to implement the system, which is why they have not managed to do so yet; we even spoke with Federico Heinz in Argentina about his categorical opposition to electronic voting; and of course there is the interview we did with you. However, today La Jornada Jalisco published the following note http://www.lajornadajalisco.com.mx/2011/09/23/index.php?section=politica... we would like to know your point of view on it. I await your reply
Hello, Well... I know the IFE did a very interesting and well-executed development a couple of years ago, designing from scratch the ballot boxes they proposed to use, but they were never deployed beyond pilots (for cost reasons, as far as I understand). I find it sad and dangerous that, with that precedent, the IEPC of Jalisco is proposing to buy prefabricated technology, trusting whatever a vendor offers them. What the headline proposes strikes me as frankly naive: elections in three municipalities prove the viability of electronic voting in the whole state. Let's put it in these terms: does the fact that a sheet-metal shack with a wooden frame doesn't fall down prove that we can build skyscrapers out of sheet metal and wood? Now, a couple of paragraphs from this La Jornada note that catch my attention:
the proposal the Instituto Electoral y de Participación Ciudadana (IEPC) wishes to carry out, to hold the election across the whole state with electronic ballot boxes, is viable, since the elections held in three municipalities are sufficient proof that the ballot box is reliable
and a few paragraphs further on,
How many more experiences are needed to know whether it is trustworthy, 20, 30, I don't know (...) But when you have a real, effective and serious diagnosis of when it is technically appropriate, the decision can be made
As I mention in my article... We cannot confuse absence of evidence with evidence of absence. That is, the fact that a small deployment showed no irregularities does not mean there cannot be any. The fact that some countries operate 100% with electronic ballot boxes does not mean it is the way to go. There are some, and not few, experiences of electronic ballot boxes failing in various ways, and that shows the implementations cannot be trusted. Even if the equipment were free (which it is not), resources have to be invested in guarding and maintaining it. Even if a voter-verified paper trail were produced (which has only been the case in a small fraction of the voting stations), nothing guarantees that the results reported by the equipment are always consistent with reality. The potential for misuse they offer is too great. Regards,
And to September 26th:
Sorry to bother you again, but today yet another note was published on the topic of electronic ballot boxes in Jalisco, claiming that the ballot box is invulnerable. http://www.lajornadajalisco.com.mx/2011/09/26/index.php?section=politica... Could you grant us a few minutes to talk with you by phone, as last time, about the specific case of Jalisco, with reference to these recently published notes? If possible, could I call you today at 2 pm? I await your reply, thanking you for your help; we greatly appreciate this collaboration you are doing with us
Hello, (...) Regarding this note: again, absence of evidence is not evidence of absence. A small segment of people is allowed to play with a machine. Does that mean it was a complete, exhaustive test? No, only that casual tinkering could not find obvious, serious flaws. A real confidence-building process would consist of inviting the community of computer security experts to run whatever tests they deem necessary, with a reasonable level of access to the equipment (as they did in Brazil, and the machines turned out to be vulnerable). Besides, security goes beyond modifying the stored results. A couple of examples that come to mind without thinking too hard:
  • What happens if I stick chewing gum into the magnetic card reader slot?
  • What happens if I hit one of the keys hard enough to make it a little less sensitive without destroying it completely? (or, while we're at it, if I destroy it)
Denial of service is another kind of attack we have to be familiar with. Not only is it possible to alter the outcome of the vote, it is also very easy to prevent the population from exercising its right. What would they do in that case? Well, they could fall back to voting on paper, on sheets from a notepad, probably signed by each of the polling officials, for example. But if an attacker blocked the reading of the magnetic card, which the polling station president needs in order to mark the station as closed, the voters have been stripped of their vote. Yes, there are the printed votes (and, frankly, I am very glad to see that this ballot box handles them this way). Counting is possible, although a bit more awkward than in a traditional vote (because you have to check which ballots are marked as invalidated; it is not very clear to me what the scenario looks like for the voter who voted for one option, a different one was printed, and the result was corrected and marked as such)... But it is possible. However, and to close this reply: if we run a test in controlled circumstances, we will obviously not notice the very many failures an electronic ballot box can introduce when the "bad guys" are its programmers. Can we be sure that this Atlas-Chivas-Cruz Azul scoreboard will have the same level of reliability as an election between real candidates, one of whom may have paid the development company to manipulate the election? And even if the process were perfect, they say here that they are _trying_ to put these ballot boxes out to tender (and again, if what this note says is true, they are among the best ballot boxes available and have addressed many of the criticisms; good!)... What for? What are these ballot boxes going to give us, what does society gain? More speed? Negligible: half an hour gained. In exchange for how much money? More reliability? It is clear to me that it is not, given that it is not just a handful of cranks like us who question the system; its own proponents point to widespread doubt. The sentence that closes the note seems worthy of hanging an epilogue on: "in that perhaps not so distant future corruption also happens, and it is always due to the human factor". And the human factor is still there. Electronic ballot boxes are programmed by people, by fallible people. Whichever side you are on, you will remember the controversy when it became public that the aggregation of votes in 2006 was supervised by the company Hildebrando, owned by the brother-in-law of the then presidential candidate Felipe Calderón. What prevents us from falling into a similar, but widely distributed, scenario? And here we have to refer to the ruling of Germany's Supreme Court: in that country, electronic voting was declared unconstitutional because only a group of specialists could audit it. A box full of papers with clear evidence of each participant's vote can be understood by any citizen; the code that controls electronic ballot boxes, only by a small percentage of the population.

26 July 2011

Joachim Breitner: SAT-solving the testing transition problem

It is half-time for me at DebCamp/DebConf11 in Banja Luka, Bosnia-Herzegovina, and that is a good time to write about my work so far. I started with some finger exercises, such as the HTML copying for gnome-terminal (as blogged previously) and cosmetic changes to the buildd.debian.org/status website. After that, I began my main project: a SAT-based solver for the testing transition problem. In Debian, new software first enters a repository called unstable, where it is tested. Once some requirements are fulfilled (the package has been in unstable for usually at least 10 days and has no new release critical bugs), it is entitled to enter the repository called testing, which will eventually form a new stable Debian release. But some packages need to migrate together, usually because one requires the new version of the other. Also, no package in testing ought to become uninstallable by some seemingly unrelated change. The software that decides these things is called britney. It is sufficiently good at making sure nothing bad happens, but not smart enough to figure out what to do in case more than two packages need to migrate simultaneously. My goal is to improve this. Now, the various requirements can all be expressed as propositional formulas, and a lot of research has gone into writing good solvers for such problems, called SAT solvers. Hence my plan was to only specify what we expect from a testing migration, but leave the search for a solution to such a general-purpose and highly optimized program. My initial progress was good and I had some results after two days of hacking, and the approach is promising. The data sets are pretty large (1.5GB of input data; the final SAT problem has 1.8 million clauses over 250,000 variables), so I learned quite a bit about profiling and optimizing Haskell programs, and that parsec is slower than working with BS.lines and BS.split to parse simple input. I also used the FFI to call dpkg code directly to compare Debian version numbers. At some point I noticed that I actually want to solve PMAX-SAT problems: given two sets of clauses, hard clauses and soft clauses, find a variable assignment that fulfills the hard clauses and as many of the soft clauses as possible. Unfortunately, there are no fast Free PMAX-SAT solvers around. The ones that I found and that were fast enough for me to use, msuncore by Joao Marques-Silva at University College Dublin and MiniMaxSat by Federico Heras at the Universitat Politècnica de Catalunya, are only available as statically linked binaries. It is a shame that this is acceptable to the academic community; imagine if mathematicians stopped including the proofs in their papers and only shared their theorems. So if you happen to have written a PMAX-SAT solver, can solve this instance (in weighted DIMACS format) in less than five minutes, and want to brag that the Debian project is using your code, then please release it under a Free license (e.g. GPL or BSD) and tell me about it! The code of my SAT-based solver is, of course, available, though slightly unpolished. Parts of it (notably the interface to picosat and the PMAX-SAT solvers) might be of general interest and I plan to put them on Hackage as separate packages. Update: I have contacted the authors of the SAT solvers mentioned above, and they have reconfirmed that they have no intention of releasing the source.
Now I put my hope in maxsatz2009 by Chumin LI, which is GPL but was not able to cope with my large instance directly; I think I need to change the memory management.
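For readers unfamiliar with the format: a partial MAX-SAT instance in weighted DIMACS ("wcnf") lists one clause per line, prefixed with a weight, where hard clauses carry the special "top" weight declared in the header. The following minimal Python sketch (my own illustration, not the encoding britney or this solver actually uses; the variables, clauses and weights are invented) writes out such an instance for a toy two-package migration:

    #!/usr/bin/env python3
    # Illustrative sketch only: emit a tiny partial MAX-SAT instance in
    # weighted DIMACS ("wcnf") format. Invented variables:
    #   1 = "package A migrates", 2 = "package B migrates",
    #   3 = "package C stays installable".
    TOP = 1000  # "top" weight marking hard clauses (larger than the sum of soft weights)
    clauses = [
        (TOP, [-1, 2]),   # hard: if A migrates, B must migrate too (A -> B)
        (TOP, [3]),       # hard: C must remain installable
        (1,   [1]),       # soft: we would like A to migrate
        (1,   [2]),       # soft: we would like B to migrate
    ]
    with open("migration.wcnf", "w") as f:
        f.write(f"p wcnf 3 {len(clauses)} {TOP}\n")
        for weight, literals in clauses:
            f.write(f"{weight} {' '.join(map(str, literals))} 0\n")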


30 May 2009

Stefano Zacchiroli: kick-starting turbogears 2 packaging

TurboGears 2 in debian ... soon ! After a long incubation, TurboGears 2 was released a few days ago. Historically, I've preferred TurboGears over Django for being closer to the open source philosophy of reusing existing components. Since the long wait for the 2.0 release had been widening the gap with Django, I was eager to test 2.0, and I was delighted to find it in Debian. Unfortunately, it doesn't seem to be in good shape yet. In particular it lacks several dependencies before it can even be used to quickstart a project, in spite of its presence in unstable. To give an idea of the needed work: after installing it manually (via easy_install), my virgin /usr/local/lib/python2.5/site-packages/ got polluted by 26 egg-thingies. I decided to spend some week-end time to start closing the gap, because we cannot lack such an important web framework in Debian (and ... erm, yes, also because I need it :-) ). Here is the current status of what has been ITP-ed / packaged already (by yours truly). Where to: considering, foolishly, the last package as being already done, the way to go is still long:
    zack@usha:~$ ls /usr/local/lib/python2.5/site-packages/ | grep -v repoze.what
    AddOns-0.6-py2.5.egg
    BytecodeAssembler-0.3-py2.5.egg
    Catwalk-2.0.2-py2.5.egg
    easy-install.pth
    Extremes-1.1-py2.5.egg
    PEAK_Rules-0.5a1.dev_r2582-py2.5.egg
    prioritized_methods-0.2.1-py2.5.egg
    sprox-0.5.5-py2.5.egg
    sqlalchemy_migrate-0.5.2-py2.5.egg
    SymbolType-1.0-py2.5.egg
    tg.devtools-2.0-py2.5.egg
    tgext.admin-0.2.4-py2.5.egg
    tgext.crud-0.2.4-py2.5.egg
    TurboGears2-2.0-py2.5.egg
    tw.forms-0.9.3-py2.5.egg
    WebFlash-0.1a9-py2.5.egg
    zope.sqlalchemy-0.4-py2.5.egg

Possibly, some of them are already in Debian, hidden somewhere, but surely there is still work to be done. If you want to help, you are more than welcome. The rules are simple: all packages will be maintained under the umbrella of the Python Modules Team, but you should be willing to take responsibility as the primary maintainer. Please get in touch with me if you are interested, as I'm in turn already in touch with some other very kind volunteers (thanks Enrico and Federico2!) to coordinate who-is-doing-what. Preview packages available: what has already been packaged, including a temporary workaround for python-transaction, will be available in experimental after NEW processing. The idea is that nothing will go to unstable until TurboGears 2 is (proven to be) fully functional. In the meantime, packages are available from my personal APT repository (signed by my key); here are the friendly /etc/apt/sources.list entries:
    deb http://people.debian.org/~zack/debian zack-unstable/
    deb-src http://people.debian.org/~zack/debian zack-unstable/

Versions are tilde-friendly and shouldn't get in your way when the official packages hit unstable. Packaging multiple-egg / multiple-upstream packages: in all this, I've faced an interesting problem with the python-repoze.who / python-repoze.what plugin packages. They correspond to a handful of plugins, each of which is about 20/30 Kb. I didn't consider it appropriate to prepare 5 different packages, due to the potential for archive bloat. Hence, you get the usual problems of multiple-upstream packages. To counter some of them, as well as some egg-specific packaging annoyances with multiple upstreams, I wrote a couple of very simple helpers. I'm no Python-packaging guru so, lazyweb, if you spot in this choice something I utterly overlooked, or if you have improvements to suggest, please let me know. Repacking eggs: an interesting problem will be faced in trying to integrate the above approach with Python modules that are shipped only as .egg files (i.e., no tarballs ... yes, there are such horrifying things out there). To preserve uniformity, we would need uscan to support --repack ing of .egg files as if they were simple .zip archives. Since eggs are .zip archives ... why not?
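Since eggs are indeed just zip archives, a repack step is mostly a matter of unzipping and re-tarring. Here is a minimal Python sketch of what such a helper could look like (my own illustration, not one of the helpers mentioned above; the package and file names are hypothetical):

    #!/usr/bin/env python3
    # Sketch only: repack an .egg (a plain zip archive) into a Debian-style
    # .orig.tar.gz with a single top-level source directory.
    import os
    import tarfile
    import tempfile
    import zipfile

    def repack_egg(egg_path, source, version):
        topdir = "%s-%s" % (source, version)
        tarball = "%s_%s.orig.tar.gz" % (source, version)
        with tempfile.TemporaryDirectory() as tmp:
            dest = os.path.join(tmp, topdir)
            with zipfile.ZipFile(egg_path) as egg:   # an egg is just a zip file
                egg.extractall(dest)
            with tarfile.open(tarball, "w:gz") as tar:
                tar.add(dest, arcname=topdir)        # keep one top-level directory
        return tarball

    # Hypothetical example; substitute the real upstream file name:
    # repack_egg("WebFlash-0.1a9-py2.5.egg", "webflash", "0.1a9")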

8 April 2009

Obey Arthur Liu: Google Summer of Code at Debian: Update, need mentors!

A quick update before the big one about the 2009 Google Summer of Code. I believe we had a great recruitment drive this year and we have a very good set of proposals to work with. We'd like to thank everyone involved for their help. We're now ranking our student applications. I promised elsewhere that I'll send out our shortlist of projects once Google sends us our preliminary slot allocation today, but I misread the thread on the -mentors list and that count will only happen on Thursday, so we'll have to wait a bit more. That shortlist would only include projects, but not individual students. The idea is to give a heads up to everyone before committing to a group of projects and students. It is very important to inform the community as it increases visibility of the students' work, giving them more help and support (and also avoids duplicating existing not yet publicized work!). As far as mentors go, we should have all of our approximately 14 planned projects covered, except for 2. I'm posting them here in case you could mentor or help find mentors for those projects. (The wiki pages are not really up to date, so please come on IRC and ask for clarifications; see below.) Finish Petr Rockai's Adept 3.0 and bring a Qt4 Package Manager to Debian, with a different interface paradigm than Aptitude-gtk.
Petr said he would provide help with the existing codebase but can't mentor. Sune Vuorela from Debian KDE is ready to help with Qt4-related issues. Build Debian tools to create Debian images for Amazon EC2 and the free Eucalyptus implementation. Packaging of the Eucalyptus hosting framework is also possible.
For this project, we already have on board to help: Charles Plessy from Debian Med; Eric Hammond, developer of the existing vmbuilder Ubuntu tool for EC2; and Chris Grzegorczyk and Rich Wolski, from the Eucalyptus team. Plenty of people to get help from. Mentoring is a great experience! See this for what it entails. If we still can't find a mentor by the end of the week, I'll blast an announcement over at debian-devel@l.d.o along with the project shortlist. In the meantime, don't forget to idle on #debian-soc on irc.debian.org.

20 August 2007

Russell Coker: Suggestions and Thanks

One problem with the blog space is that there is a lot of negativity. Many people seem to think that if they don’t like a blog post then the thing to do is to write a post complaining about it - or even worse a complaint that lacks specific details to such an extent that the subject of the complaint would be unable to change their writing in response. The absolute worst thing to do is to post a complaint in a forum that the blog author is unlikely to read - which would be a pointless whinge that benefits no-one. Of course an alternate way for the recipient to take such complaints, as suggested by Paul Graham, is “you’re on the right track when people complain that you’re unqualified, or that you’ve done something inappropriate” and “if they’re driven to such empty forms of complaint, that means you’ve probably done something good” (Paul was talking about writing essays not blogs, but I’m pretty sure that he intended it to apply to blogs too). If you want to actually get a blog author (or probably any author) to make a change in their material in response to your comments then trying to avoid empty complaints is a good idea. Another useful point Paul makes in the same essay is ““Inappropriate” is the null criticism. It’s merely the adjective form of “I don’t like it.”” - something that’s worth considering given the common criticism of particular blog content as being “inappropriate” for an aggregation feed that is syndicating it. Before criticising blog posts you should consider that badly written criticism may result in more of whatever it is that you object to. If you find some specific objective problem in the content or presentation of a blog the first thing to do is to determine the correct way of notifying the author. I believe that it’s a good idea for the author to have an about page which either has a mailto URL or a web form for sending feedback, I have a mailto on my about page - (here’s the link). Another possible method of contact is a comment on a blog post, if it’s an issue for multiple posts on the blog then writing a comment on the most recent post will do (unless of course it’s a comment about the comment system being broken). For those who are new to blogging, the blog author has full control over what happens to comments. If they decide that your comment about the blog color scheme doesn’t belong on a post about C programming then they can respond to the comment in the way that they think best (making a change or not and maybe sending you an email about it) and then delete the comment if they wish. If there is an issue that occurs on multiple blogs then a good option is to write a post about the general concept as I did in the case of column width in blogs where I wrote about one blog as an example of a problem that affects many blogs. I also described how I fixed my own blog in this regard (in sufficient detail to allow others to do the same). Note that most blogs have some degree of support for Linkback so any time you link to someone else’s blog post they will usually get notified in some way. On my blog I have a page for future posts where I invite comments from readers as to what I plan to write about next. Someone who prefers that I not write about topic A could write a comment requesting that I write about topic B instead. Wordpress supports pages as a separate type of item to posts. A post is a dated entry while pages are not sorted in date order and in most themes are displayed prominently on the front page (mine are displayed at the top).
I suggest that other bloggers consider doing something comparable. One thing I considered is running a wiki page for the future posts. One of the problems with a wiki page is that I would need to maintain my own private list which is separate, while a page with comments allows only me to edit the page in response to comments and then use the page as my own to-do list. I may experiment with such a wiki page at some future time. One possibility that might be worth considering is a wiki for post requests for any blog that is syndicated by a Planet. For example a wiki related to Planet Debian might request a post about running Debian on the latest SPARC systems, the first blogger to write a post on this topic could then remove the entry from the wish-list (maybe adding the URL to a list of satisfied requests). If the person who made the original request wanted a more detailed post covering some specific area they could then add such a request to the wish-list page. If I get positive feedback on this idea I’ll create the wiki pages and add a few requests for articles that would interest me to start it up. Finally to encourage the production of content that you enjoy reading I suggest publicly thanking people who write posts that you consider to be particularly good. One way of thanking people is to cite their posts in articles on your own blog (taking care to include a link to at least one page to increase their Technorati rank) or web site. Another is to include a periodic (I suggest monthly at most) links post that contains URLs of blog posts you like along with brief descriptions of the content. If you really like a post then thank the author by not only giving a link with a description (to encourage other people to read it) but also describing why you think it’s a great post. Also if recommending a blog make sure you give a feed URL so that anyone who wants to subscribe can do it as easily as possible (particularly for the blogs with a bad HTML layout). Here are some recent blog posts that I particularly liked: Here are some blogs that I read regularly:
Finally I don’t read it myself, but CuteOverload.com is a good site to refer people to when they claim that the Internet is too nasty for children - the Internet has lots of pictures of cute animals!

16 April 2007

Rob Bradford: Profiling made pretty

On Ross’s blog he talks about “a project [that] is even more interesting for the geeks out there”. I’m pleased to disclose that said project is OProfileUI, a graphical user interface for the OProfile system profiler. Hopefully this should help the amazing performance wizards do their thing. I also hope that this will lower the barrier to entry; profiling and performance improvements are often seen as a bit of a black art, but it can be a good way for new contributors to explore the stack. This was my first project at OpenedHand and so I’m really pleased to see it released and available for other people to try (and improve). Oh, and it has a cool icon (thanks Andreas).
Go on, grab it!

9 February 2007

Ross Burton: GUADEC 2007 Pre-Call for Papers

I plan on announcing the GUADEC 2007 Call for Papers in the next day or so, so I want everyone to put their thinking caps on and consider giving a talk this summer. There are several topics I'd like to see a good set of talks on:

3 December 2006

Zak B. Elep: Who moved my sundae?

WTF IS THIS SHIT? Ok, that above would be what a newly-baptized “FOSS advocate-slash-zealot” would say upon seeing Federico Pascual Jr.’s PostScript regarding the FOSS bill. I suppose I could have said that myself a couple of lifetimes ago. But, there is more than meets the eye. While I would like to think of myself as a “FOSS veteran”–believe me, I have still so much to learn about it–I would like to step into the shoes of such a person when approaching an article as sensational as Mr. Pascual’s. Despite what seems to be a most interesting article on the mechanics of government software usage, it fails to address the one particular bit that is just as important as the proposed bill itself: the real Free and Open Source Software. Let me nitpick this article bit by bit:
FREE RIDE: A bill is being pushed in Congress forbidding all government agencies and state-controlled firms from buying and using any of the computer software sold in the market!
Alas, when I first heard of the FOSS bill sometime before September, I also had a bad impression of it. Perhaps I was just too politically allergic at that time (yeah, right), but I tried to adopt a ‘wait-and-see’ approach first and let the dice roll. Perhaps the fact that this bill was to be introduced by a very visible congressman with alleged leftist ties made me feel uncomfortable, but then, so did (and still does) the current administration. Or perhaps I just felt it wasn’t damn right to legislate FOSS as an end-all solution, preferring instead to present it as a process for reforming the local software industry. Looks like first impressions definitely make a difference.
The proposed law to be called “Free/Open Source Software (FOSS) Act of 2006” commands government offices to use only information and communications software that are given away for free and have no restrictions as to their use. The objectives appear to be to save money for the government and to encourage the making of free (non-commercial) software.
So? What’s wrong with these objectives? FWIW, the early FOSS bill draft did seem to have such a Draconian section, forcing the government and allied offices to use, and use only, FOSS. IMHO that by itself ran against the very fundamental ideal of FOSS: the freedom of choice. Not Hobson’s choice, but real choice. The current bill IIRC now allows this true choice; unless there is an extreme case where FOSS cannot be applied without becoming non-self-sustaining (not to mention self-liquidating), agencies may implement their infrastructure using FOSS as their primary instrument, with the application of open standards (that is, open document formats, open communications protocols, etc.) unifying the disparate components. Perhaps right now this situation may seem kind of far-fetched, but it’s not really that far off, considering what other nations and cities have done (or not done) with FOSS. Munich, Extremadura, Beijing… the list is not yet that long, but it’s bound to go a long way ;-) Now, while the government may adopt open standards and open-everything, that does not mean the ‘openness’ forgoes security, either; the government may opt to use well-known encryption protocols and even base its own security infrastructure on them. In fact, they are free to even look inside the source of these well-known standards, to study them, and to branch off new implementations that may even change the way these standards work. In fact, even without this bill in place, the government can participate in the production and development of FOSS!!! Wait a minute, wasn’t I supposed to defend this bill? Eh, well. I suppose this bill is good and all, but like I said earlier, there are those first impressions made by the parties involved in the making of this bill which unsettle me. IMHO I would rather see applications of this first in key cities (the happy works in Munich and in Extremadura are no accident, believe me), which in turn would allow both local and national government to see just exactly how this FOSS magic works. Then, when the observations have been made, the papers are in, and the workers get hired, then, maybe, we’ll have a nation where FOSS can be mandated as a very strong preference, but never a forced one. It’s all about growing up, really. I’m reminded of Tom DeMarco’s The Deadline, where a newly-retrenched guy from a telco literally lands the fantastic job of his life, managing an entire nation of software engineers and architects to develop several killer apps within a year (or so; raid your nearest Book Sale and be lucky ;) ). It’s a great experiment: when it succeeds, you’ll be the first to be present, but when it fails, you’ll be the first to be nowhere.
But I can also hear in the background a call to an unholy war against multinationals whose popular software run virtually all maybe 95 percent? of the computers of the world.
Now this is a low blow. Maybe the prevailing images of the advocates (geeks and leftists, my, what a c-c-c-combo!) are partly what drives Mr. Pascual to make this point. But really, multinationals are not the main concern. FWIW, FOSS is multinational in nature: it is even multi-denominational and multidisciplinary. FOSS is one of the things that make anonymous people like you and me Internet superheroes. FOSS connects the Internet’s tubes and keeps away the trucks. FOSS makes the Internet serious business, yet also drives them crazy. The Linux distros that I have worked on, am working on, and continue to work on are all multinational: Debian and Ubuntu. People from all over the world, from all 7 continents, participate and work on what would arguably be the biggest software distribution the human world may ever produce, freely, regardless of motivation or goal. I suppose all the folks involved have only one common goal, and that is to see a work created for the benefit of the community by the community. I suppose the background howl comes from somewhere else altogether, but it’s hardly just from FOSS. FOSS is the epitome of what is truly multinational, and one thing that is not truly controlled by any one single person, corporation or not.
