Search Results: "bruno"

13 April 2024

Simon Josefsson: Reproducible and minimal source-only tarballs

With the release of Libntlm version 1.8 the release tarball can be reproduced on several distributions. We also publish a signed minimal source-only tarball, produced by git-archive, which is the same format used by Savannah, Codeberg, GitLab, GitHub and others. Reproducibility of both tarballs is tested continuously for regressions on GitLab through a CI/CD pipeline. If that wasn't enough to excite you, the Debian packages of Libntlm are now built from the reproducible minimal source-only tarball. The resulting binaries are reproducible on several architectures.

What does that even mean? Why should you care? How can you do the same for your project? What are the open issues? Read on, dear reader.

This article describes my practical experiments with reproducible release artifacts, following up on my earlier thoughts that led to discussion on Fosstodon and a patch by Janneke Nieuwenhuizen to make Guix tarballs reproducible, which inspired me to do some practical work.

Let's look at how a maintainer releases some software, and how a user can reproduce the released artifacts from the source code. Libntlm provides a shared library written in C and uses GNU Make, GNU Autoconf, GNU Automake, GNU Libtool and gnulib for build management, but these ideas should apply to most projects and build systems. The following illustrates the steps a maintainer would take to prepare a release:
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
./bootstrap
./configure
make distcheck
gpg -b libntlm-1.8.tar.gz
The generated files libntlm-1.8.tar.gz and libntlm-1.8.tar.gz.sig are published, and users download and use them. This is how the GNU project has been doing releases since the late 1980s. That is a testament to how successful this pattern has been! These tarballs contain source code and some generated files: typically shell scripts generated by autoconf, makefile templates generated by automake, and documentation in formats like Info, HTML, or PDF. Rarely do they contain binary object code, but historically that happened.

The XZ Utils incident illustrates that tarballs containing files that are not included in the git archive offer an opportunity to disguise malicious backdoors. I blogged earlier about how to mitigate this risk by using signed minimal source-only tarballs.

The risk of hiding malware is not the only motivation to publish signed minimal source-only tarballs. With pre-generated content in tarballs, there is a risk that GNU/Linux distributions such as Trisquel, Guix, Debian/Ubuntu or Fedora ship generated files coming from the tarball into the binary *.deb or *.rpm package file. Typically the person packaging the upstream project never realized that some installed artifacts were not rebuilt through a typical autoreconf -fi && ./configure && make install sequence, and never wrote the code to rebuild everything. This can also happen if the build rules are written but are buggy, shipping the old artifact. When a security problem is found, this can lead to time-consuming situations, as patching the relevant source code and rebuilding the package may not be sufficient: the vulnerable generated object from the tarball would be shipped into the binary package instead of a rebuilt artifact. For architecture-specific binaries this rarely happens, since object code is usually not included in tarballs, although for 10+ years I shipped the binary Java JAR file in the GNU Libidn release tarball, until I stopped shipping it. For interpreted languages, and especially for generated content such as HTML, PDF and shell scripts, this happens more often than you would like.

Publishing minimal source-only tarballs enables easier auditing of a project's code, avoiding the need to read through all generated files looking for malicious content. I have taken care to generate the source-only minimal tarball using git-archive. This is the same format that GitLab, GitHub etc. offer for the automated download links on git tags. The minimal source-only tarballs can thus serve as a way to audit GitLab and GitHub download material! Consider if/when hosting sites like GitLab or GitHub have a security incident that causes generated tarballs to include a backdoor that is not present in the git repository. If people rely on the tag download artifact without verifying the maintainer PGP signature using GnuPG, this can lead to backdoor scenarios similar to what we had for XZ Utils, but originating with the hosting provider instead of the release manager. This is even more concerning, since such an attack can be mounted for selected IP addresses that you want to target rather than for everyone, thereby making it harder to discover.

With all that discussion and rationale out of the way, let's return to the release process. I have added another step here:
make srcdist
gpg -b libntlm-1.8-src.tar.gz
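Users can verify the detached signatures with GnuPG before using the downloads; a minimal sketch, assuming the maintainer's public key is already in your keyring:

gpg --verify libntlm-1.8.tar.gz.sig libntlm-1.8.tar.gz
gpg --verify libntlm-1.8-src.tar.gz.sig libntlm-1.8-src.tar.gz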
Now the release is ready. I publish these four files in Libntlm's Savannah Download area, but they can be uploaded to a GitLab/GitHub release area as well. These are the SHA256 checksums I got after building the tarballs on my Trisquel 11 aramo laptop:
91de864224913b9493c7a6cec2890e6eded3610d34c3d983132823de348ec2ca  libntlm-1.8-src.tar.gz
ce6569a47a21173ba69c990965f73eb82d9a093eb871f935ab64ee13df47fda1  libntlm-1.8.tar.gz
So how can you reproduce my artifacts? Here is how to reproduce them in an Ubuntu 22.04 container:
podman run -it --rm ubuntu:22.04
apt-get update
apt-get install -y --no-install-recommends autoconf automake libtool make git ca-certificates
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
./bootstrap
./configure
make dist srcdist
sha256sum libntlm-*.tar.gz
You should see the exact same SHA256 checksum values. Hooray! This works because Trisquel 11 and Ubuntu 22.04 use the same versions of git, autoconf, automake, and libtool. These tools do not guarantee the same output content for all versions, similar to how GNU GCC does not generate the same binary output for all versions. So there is still some delicate version pairing needed. Ideally, the artifacts should be possible to reproduce from the release artifacts themselves, and not only directly from git. It is possible to reproduce the full tarball in an AlmaLinux 8 container (replace almalinux:8 with rockylinux:8 if you prefer Rocky Linux):
podman run -it --rm almalinux:8
dnf update -y
dnf install -y make wget gcc
wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8.tar.gz
tar xfa libntlm-1.8.tar.gz
cd libntlm-1.8
./configure
make dist
sha256sum libntlm-1.8.tar.gz
The source-only minimal tarball can be regenerated on Debian 11:
podman run -it --rm debian:11
apt-get update
apt-get install -y --no-install-recommends make git ca-certificates
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
make -f cfg.mk srcdist
sha256sum libntlm-1.8-src.tar.gz 
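Since the minimal tarball is created with git-archive, it can also be compared directly against the hosting provider's tag download; a sketch, assuming GitLab's usual archive URL layout for this project (whether the checksums match depends on the git version the site uses, more on that below):

wget https://gitlab.com/gsasl/libntlm/-/archive/v1.8/libntlm-v1.8.tar.gz
diffoscope libntlm-1.8-src.tar.gz libntlm-v1.8.tar.gz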
As the magnum opus or chef-d'œuvre, let's recreate the full tarball directly from the minimal source-only tarball on Trisquel 11 (replace docker.io/kpengboy/trisquel:11.0 with ubuntu:22.04 if you prefer):
podman run -it --rm docker.io/kpengboy/trisquel:11.0
apt-get update
apt-get install -y --no-install-recommends autoconf automake libtool make wget git ca-certificates
wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8-src.tar.gz
tar xfa libntlm-1.8-src.tar.gz
cd libntlm-v1.8
./bootstrap
./configure
make dist
sha256sum libntlm-1.8.tar.gz
Yay! You should now have great confidence that the release artifacts correspond to what's in version control and also to what the maintainer intended to release. Your remaining job is to audit the source code for vulnerabilities, including the source code of the dependencies used in the build. You no longer have to worry about auditing the release artifacts.

I find it somewhat amusing that the build infrastructure for Libntlm is now in a significantly better place than the code itself. Libntlm is written in old C style with plenty of string manipulation and uses broken cryptographic algorithms such as MD4 and single-DES. Remember folks: solving supply chain security issues has no bearing on what kind of code you eventually run. A clean gun can still shoot you in the foot.

Side note on naming: GitLab exports tarballs with pathnames libntlm-v1.8/ (i.e., PROJECT-TAG/) and I've adopted the same pathnames, which means my libntlm-1.8-src.tar.gz tarballs are bit-by-bit identical to GitLab's exports, and you can verify this with tools like diffoscope. GitLab names the tarball libntlm-v1.8.tar.gz (i.e., PROJECT-TAG.ARCHIVE), which I find too similar to the libntlm-1.8.tar.gz that we also publish. GitHub uses the same git archive style, but unfortunately they have logic that removes the v in the pathname, so you will get a tarball with pathname libntlm-1.8/ instead of the libntlm-v1.8/ that GitLab and I use. The content of the tarball is bit-by-bit identical, but the pathname and archive name differ. Codeberg (running Forgejo) uses another approach: the tarball is called libntlm-v1.8.tar.gz (after the tag) just like GitLab's, but the pathname inside the archive is libntlm/; otherwise the produced archive is bit-by-bit identical, including timestamps. Savannah's CGIT interface uses the archive name libntlm-1.8.tar.gz with pathname libntlm-1.8/, but otherwise the file content is identical. Savannah's GitWeb interface provides snapshot links that are named after the git commit (e.g., libntlm-a812c2ca.tar.gz with libntlm-a812c2ca/) and I cannot find any tag-based download links at all. Overall, we are so close to getting the SHA256 checksums to match, but fail on the pathname within the archive. I've chosen to be compatible with GitLab regarding the content of tarballs, but not regarding archive naming. From a simplicity point of view, it would be nice if everyone used PROJECT-TAG.ARCHIVE for the archive filename and PROJECT-TAG/ for the pathname within the archive. This aspect will probably need more discussion.

Side note on git archive output: It seems different versions of git archive produce different results for the same repository. The versions of git in Debian 11, Trisquel 11 and Ubuntu 22.04 behave the same. The versions of git in Debian 12, AlmaLinux/RockyLinux 8/9, Alpine, Arch Linux, macOS Homebrew, and the upcoming Ubuntu 24.04 behave in another way. Hopefully this will not change that often, but a change would invalidate reproducibility of these tarballs in the future, forcing you to use an old git release to reproduce the source-only tarball. Alas, GitLab and most other sites appear to be using modern git, so the download tarballs from them would not match my tarballs, even though the content would.

Side note on ChangeLog: ChangeLog files were traditionally manually curated files with the version history of a package. In recent years, several projects moved to generating them dynamically from the git history (using tools like git2cl or gitlog-to-changelog).
This has consequences for the reproducibility of tarballs: you need to have the entire git history available! The gitlog-to-changelog tool also produces different output depending on the time zone of the person using it, which arguably is a simple bug that can be fixed. However, this entire approach is incompatible with rebuilding the full tarball from the minimal source-only tarball. It seems Libntlm's ChangeLog file died on the surgery table here.

So how would a distribution build these minimal source-only tarballs? I happen to help with the libntlm package in Debian. It has historically used the generated tarballs as the source code to build from. This means that code coming from gnulib is vendored in the tarball. When a security problem is discovered in gnulib code, the security team needs to patch all packages that include that vendored code and rebuild them, instead of merely patching the gnulib package and rebuilding all packages that rely on that particular code. To change this, the Debian libntlm package needs to Build-Depends on Debian's gnulib package. But there was one problem: like most projects that use gnulib, Libntlm depends on a particular git commit of gnulib, and Debian only ships one commit. There is no coordination about which commit to use. I have adopted gnulib in Debian, and added a git bundle to the *_all.deb binary package so that projects that rely on gnulib can pick whatever commit they need. This allows a no-network GNULIB_URL and GNULIB_REVISION approach when running Libntlm's ./bootstrap with the Debian gnulib package installed. Otherwise libntlm would pick up whatever latest version of gnulib Debian happened to have in the gnulib package, which is not what the Libntlm maintainer intended to be used, and can lead to all sorts of version mismatches (and consequently security problems) over time. Libntlm in Debian is developed and tested on Salsa, and there is continuous integration testing of it as well, thanks to the Salsa CI team.

Side note on git bundles: unfortunately there appears to be no reproducible way to export a git repository into one or more files. So one unfortunate consequence of all this work is that the gnulib *.orig.tar.gz tarball in Debian is not reproducible any more. I have tried to get git bundles to be reproducible but never got it to work; see my notes in gnulib's debian/README.source on this aspect. Of course, source tarball reproducibility has nothing to do with binary reproducibility of gnulib in Debian itself, fortunately.

One open question is how to deal with the increased build dependencies that are triggered by this approach. Some people are surprised by this, but I don't see how to get around it: if you depend on source code for tools in another package to build your package, it is a bad idea to hide that dependency. We've done it for a long time through vendored code in non-minimal tarballs. Libntlm isn't the most critical project from a bootstrapping perspective, so adding git and gnulib as Build-Depends to it will probably be fine. However, consider if this pattern were used for other packages that use gnulib, such as coreutils, gzip, tar, bison etc. (all are using gnulib): then they would all Build-Depends on git and gnulib. Cross-building those packages for a new architecture will therefore require git on that architecture first, which gets circular quickly. The dependency on gnulib is real, so I don't see that going away, and gnulib is an Architecture: all package.
However, the dependency on git is merely a consequence of how the Debian gnulib package chose to make all gnulib git commits available to projects: through a git bundle. There are other ways to do this that don't require the git tool to extract the necessary files, but none that I found practical; ideas welcome!

Finally some brief notes on how this was implemented. Enabling bootstrappable source-only minimal tarballs via gnulib's ./bootstrap is achieved by using the GNULIB_REVISION mechanism, locking down the gnulib commit used. I have always disliked git submodules because they add extra steps and have complicated interactions with CI/CD. The reason why I gave up git submodules now is that the particular commit to use is not recorded in the git archive output when git submodules are used. So the particular gnulib commit has to be mentioned explicitly in some source code that goes into the git archive tarball. Colin Watson added the GNULIB_REVISION approach to ./bootstrap back in 2018, and now it no longer made sense to continue to use a gnulib git submodule. One alternative is to use ./bootstrap with --gnulib-srcdir or --gnulib-refdir if there is some practical problem with the GNULIB_URL towards a git bundle or the GNULIB_REVISION in bootstrap.conf. The srcdist make rule is simple:
git archive --prefix=libntlm-v1.8/ -o libntlm-1.8-src.tar.gz HEAD
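The GNULIB_REVISION pinning mentioned above lives in bootstrap.conf; a minimal sketch of the relevant lines (the bundle path and commit hash are illustrative placeholders, not Libntlm's actual values):

# bootstrap.conf: pin the exact gnulib commit; the URL can point at a
# local git bundle for no-network builds (this path is hypothetical).
GNULIB_URL=file:///usr/share/gnulib/gnulib.bundle
GNULIB_REVISION=0123456789abcdef0123456789abcdef01234567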
Making the make dist generated tarball reproducible can be more complicated; however, for Libntlm it was sufficient to make sure the modification times of all files were set deterministically to the timestamp of the last commit in the git repository. Interestingly, there seem to be a couple of different ways to accomplish this. Guix doesn't support minimal source-only tarballs but relies on a .tarball-timestamp file inside the tarball. Paul Eggert explained what TZDB is using some time ago. The approach I'm using now is fairly similar to the one I suggested over a year ago. If there are problems because all files in the tarball now use the same modification time, there is a solution by Bruno Haible that could be implemented.

Side note on git tags: Some people may wonder why not verify a signed git tag instead of verifying a signed tarball of the git archive. Currently most git repositories use SHA-1 for git commit identities, but SHA-1 is not a secure hash function. While current SHA-1 attacks can be detected and mitigated, there are fundamental doubts that a git SHA-1 commit identity uniquely refers to the same content that was intended. Verifying a git tag will never offer the same assurance, since a git tag can be moved or re-signed at any time. Verifying a git commit is better, but then we need to trust SHA-1. Migrating git to SHA-256 would resolve this aspect, but most hosting sites such as GitLab and GitHub do not support this yet. There are other advantages to using signed tarballs instead of signed git commits or git tags as well; e.g., tar.gz can be a deterministically reproducible, persistent, stable offline storage format, but .git sub-directory trees or git bundles do not offer this property.

Doing continuous testing of all this is critical to make sure things don't regress. Libntlm's pipeline definition now produces the generated libntlm-*.tar.gz tarballs and a checksum as a build artifact. Then I added the 000-reproducability job, which compares the checksums and fails on mismatches. You can read its delicate output in the job for the v1.8 release. Right now we insist that builds on Trisquel 11 match Ubuntu 22.04, that PureOS 10 builds match Debian 11 builds, that AlmaLinux 8 builds match RockyLinux 8 builds, and that AlmaLinux 9 builds match RockyLinux 9 builds. As you can see in the pipeline job output, not all platforms lead to the same tarballs, but hopefully this state can be improved over time. There is also partial reproducibility, where the full tarball is reproducible across two distributions but not the minimal tarball, or vice versa.

If this way of working plays out well, I hope to implement it in other projects too. What do you think? Happy Hacking!

8 February 2024

Reproducible Builds: Reproducible Builds at FOSDEM 2024

Core Reproducible Builds developer Holger Levsen presented at the main track at FOSDEM on Saturday 3rd February this year in Brussels, Belgium. His talk was titled Reproducible Builds: The First Ten Years and was described as follows:
In this talk Holger h01ger Levsen will give an overview about Reproducible Builds: How it started with a small BoF at DebConf13 (and before), then grew from being a Debian effort to something many projects work on together, until in 2021 it was mentioned in an Executive Order of the President of the United States. And of course, the talk will not end there, but rather outline where we are today and where we still need to be going, until Debian stable (and other distros!) will be 100% reproducible, verified by many. h01ger has been involved in reproducible builds since 2014 and so far has set up automated reproducibility testing for Debian, Fedora, Arch Linux, FreeBSD, NetBSD and coreboot.
More information can be found on FOSDEM s own page for the talk, including a video recording and slides.
Separate from Holger's talk, however, there were a number of other talks about reproducible builds at FOSDEM this year, and there was even an entire track on Software Bill of Materials.

24 August 2023

Debian Brasil: Debian Day 30 years in Belo Horizonte - Brazil

For the first time, the city of Belo Horizonte held a Debian Day to celebrate the anniversary of the Debian Project. The Debian Minas Gerais and Free Software Belo Horizonte and Region communities felt motivated to celebrate this special date due to the Debian Project turning 30 years old in 2023, and they organized a meeting on August 12th at the UFMG Knowledge Space. The Debian Day organization in Belo Horizonte received important support from the UFMG Computer Science Department to book the room used by the event. Three activities were scheduled. In total, 11 people were present and we took a photo with those who stayed until the end. Attendees at Debian Day 2023 in BH

Debian Brasil: Debian Day 30 years in Belo Horizonte

For the first time, the city of Belo Horizonte held a Debian Day to celebrate the anniversary of the Debian Project. The Debian Minas Gerais and Free Software BH and Region communities felt motivated to celebrate this special date due to the 30 years of the Debian Project in 2023, and organized a meeting on August 12th inside the UFMG Knowledge Space. The Debian Day organization in Belo Horizonte received important support from the UFMG Computer Science Department to reserve the room used for the event. The program included three activities. In total, 11 people were present and we took a photo with those who stayed until the end. Attendees at Debian Day 2023 in BH

22 December 2016

Urvika Gola: Outreachy- Week 1,2 Progress

For the past few weeks I have been researching and working on creating a white label version of Lumicall with my mentors Daniel, Juliana and Bruno. Lumicall is a free and convenient app for making encrypted phone calls from Android. It uses the SIP protocol to interoperate with other apps and corporate telephone systems. Think of any app that you use to call others using a SIP ID. What does it mean to make a white label version of Lumicall?
White labelling is the idea of business users taking the whole or a piece of Lumicall's code and tweaking it according to their requirements and the functionality they want. The white label version would have the client's name, logo, icons, themes etc. which reflect their brand, although the underlying workings of both applications would be the same. Think of it like getting a cake from the local bakery, putting it into a home baking dish and passing it off as something you made in front of your friends, earning all the praise. Except cake is more appealing than apps.
What whitelabelling does is take away the pain of collecting cake ingredients, mixing them in the right proportion and baking at the right temperature, and use the experience of one cake base to create more and more fancy, pretty, grand cakes. This lets the cool bakers (or the business users of Lumicall) focus on other aspects of the party like decoration, games and drinks (or monetization, publicity and new features).
Now you would wonder who these cool bakers or business users of Lumicall would be. While researching this, I learned that despite the commonalities, there needs to be a unique identifier for any application in the app store, and I found that there is a difference between the package name and the applicationId. When I create a new project in Android Studio, the applicationId exactly matches the Java-style package name I chose during setup. However, the application ID and package name are independent of each other beyond this point. The thing to keep in mind is that app stores treat a changed application ID as a different app altogether.

So if I were to make N separate copies of the application code for N clients, it would be a maintenance nightmare. If the build system is Gradle, using product flavors is a trick that will make this maintenance easier. Instead of N separate copies, I would simply have N product flavors. Each flavor corresponds to a customized version of my application. Pro, free, whitelabel, debug (for development purposes) and release (for production purposes) are the basic flavors I have identified.

Each flavor would use the same source code and files of the application, but resources, icons, manifests etc. that are specific to each flavor can be defined again in the src/flavor_name directory under res folders etc. according to the requirement.
Here is a snippet from my build.gradle file:
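A sketch along those lines, with illustrative flavor names and applicationIdSuffix values:

android {
    productFlavors {
        // Each flavor builds from the same source, but gets its own
        // application ID, so app stores treat it as a separate app.
        pro {
            applicationIdSuffix ".pro"
        }
        whitelabel {
            applicationIdSuffix ".whitelabel"
        }
        // Empty flavor: no applicationIdSuffix, so it simply uses the
        // default config section.
        free {
        }
    }
}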

Note: You have to define at least two flavors to be able to build multiple variants, because once you have flavors, you can only build flavored applications; you can't just build the default configuration anymore. Define an empty flavor, with no applicationIdSuffix at all, and it will use all of the default config section.

This week I'd be moving forward with the implementation of whitelabelling using productFlavors in Lumicall.
I would love to hear from someone who has done this before!
Comment here or email me and I promise you an excellent home-made-store-brought cake!

Wishing you all HAPPY HOLIDAYS!!
-U

17 November 2016

Urvika Gola: Reaching out to Outreachy

The past few weeks have been a whirlwind of work, with my application process for Outreachy taking my full attention. When I got to know about Outreachy, I was intrigued as well as dubious. I had many unanswered questions in my mind. Honestly, I had that fear of failure which prevented me from submitting my partially filled application form. I kept contemplating if the application was good enough, if my answers were perfect and had the right balance of du-uh and oh-ah!
In moments of doubt, it is important to surround yourself with people who believe in you more than you believe in yourself. I've been fortunate enough to have two amazing people: my sister Anjali, who is an engineer at Intel, and my friend Pranav Jain, who completed his GSoC 16 with Debian.
They believed in me when I sat staring at my application and encouraged me to click that final button. When I initially applied for Outreachy, I was given a task of building Lumicall, and a subsequent task was to examine a BASH script which solves the DNS-01 challenge.
I implemented the DNS-01 challenge in Java and tested my solution against a server.
Within a limited time frame, I figured things out, wrote my solution in Java and then eagerly waited for the results to come out. I was elated with joy when I got to know I'd been selected for Outreachy to work with Debian. I was excited about open source and found the idea of working on the project fun because of the numerous possibilities of contributing towards voice, video and chat communication software.
My project mentor, Daniel Pocock, played a pivotal role in the time after I had submitted my application. Like a true mentor, he replied to my queries promptly and guided me towards finding the solutions to problems on my own. He exemplified how to feel comfortable with developing on open source. I felt inspired and encouraged to move along in my work. Beyond him, the MiniDebConf was when I was finally introduced to the Debian community. It was an overwhelming experience and I felt proud to have come so far. It was pretty cool to see JitsiMeet being used for this video call. I was also introduced to two of my mentors, Juliana Louback & Bruno Magalhães. I am very excited to learn from them.

I am glad I applied for Outreachy, which helped me identify my strengths, and I am totally excited to be working with Debian on the project and to learn as much as I can throughout the period. I am not a blog person; this is my first blog ever! I would love to share my experience with you all in the hope of inspiring someone else who is afraid of clicking that final button!

22 August 2016

Zlatan Todorić: When you wake up with a feeling

I woke up at 5am. Somehow made myself go back to sleep again. Woke up at 6am. Such is the life of jet-lag. Or I am just getting old for it. But the truth wouldn't be complete with only those assertions. I woke up inspired and tired at the same time. Tired because I am doing very time-consuming things, which are at the same time very emotional things, AND at the exact same time things that inspire me.

On paper, I am technical leader of Purism. In reality, I have insanely good relations with my CEO for such a short time. So good that for months I was not only leading the technical shift, but also took over operations (getting orders and delivering them while working with our assembly line to automate most of the tasks in this field). I also acted as the first line of technical support (forums, IRC and email); actually, I was pretty much the only line of support for a few months. I was doing some website changes: changing some wording, updating a bunch of plugins and making sure it all works, resolved (hopefully) Tor and Cloudflare issues for it, the annoying caching system for forums, stopped forum spam and so on. I worked on better messaging for Purism public relations. I taught my team to use keys for signing and encryption. I interviewed (and read all mails from) people that were interested in working for or helping Purism. In the process of doing all that, I maybe wasn't the most speedy person for all our users' needs, but I hope they understand and forgive me.

I was doing all that while I was researching and developing tablets (which ended up not being the most successful campaign, but we now do have them as a product). I was doing all that while seeing (and resolving) our kernel builds failing. While working on pushing touchpad patches upstream (not so good, but we are still working on it, and they ended up being upstreamed). While seeing repos being down because of our host. Repos being down because of broken sync with Debian. Repos being down because of our key mis-management. Metadata not working well. PureBrowser getting broken all the time. Tor Browser out of date. No real ISO updates. Wrong sources.list entries and so on. And the hardest part was that I was doing all this with very limited scope and even more limited resources.

So what kept me going, what is pushing me forward and what am I doing? One philosophy: Free Software. Let me not explain it as a technical debt. Let me explain it as a social movement. In an age where people are "bombed" by media, by all-time lying politicians (who use fear of non-existent threats/terror as a model to control the population), in an age where proprietary corporations are selling your freedom so you can gain temporary convenience, the term Free Software is like Giordano Bruno in the age of the Inquisition. Free Software does not only preserve your freedom regarding software source usage; it preserves your freedom to think, and to think out of the box, without being punished for that. It preserves the freedom to live: to choose what and when to do, without having a negative impact on your life or other people's lives. The freedom to be transparent and to share. Because not only do ideas grow with sharing; we, as human beings, grow as we share. The freedom to say NO.

NO. I somehow learnt, and personally think, that the freedom to say NO is the most important freedom in our lives. No, I will not obey some artificially created master that thinks they can plan and choose my life decisions.
No, I will not negotiate my freedom for your convenience (such freedom is not real anyway, and it is a matter of time before you will be blown away by such an illusion). No, I will not accept your credit, because it has STRINGS attached to it which you either don't present or you blur in a mountain of superficial wording. No, I will not implant a chip inside me for the sake of your research or my convenience. No, I will not have a social media account where the majority of people are. No, I will not have a pacemaker which is a black box with proprietary (buggy) software, harvesting my data without me being able to look at it.

Yin-Yang. Yes, I want to collaborate on making the world a better place for us all. I don't agree with most people, but that doesn't make them my enemies (although media would like us to feel and think like that). I will try to preserve everyone's freedom as much as I can. Yes, I will share with my community and friends. Yes, I want to learn from those better than I am. Yes, I want to have awesome mentors. Yes, I will try to be an awesome mentor. Yes, I choose to care and not to ignore facts and actions done by me and other people. Yes, I have the right to be imperfect and make mistakes as long as I acknowledge them and work on them. Bugfixing ourselves as humans is the most important task in our lives. As in software, it is very time-consuming, but also as in software, it is an improvement and an incredible satisfaction to see a better version of yourself, getting more and more features (even if that sometimes means actually getting rid of other/bad features).

This all blends with my work at Purism. I spend a lot of time thinking about projects, development and the future. I must do that in order not to make grave mistakes. Failing hardware and software is not a grave mistake. Serious, but not grave. Grave is if we betray ourselves and our community in the pursuit of freedom. We are trying to unify many things: we want to give you security, privacy and FREEDOM with convenience. So I am pushing myself out of comfort zones and also out of conventional, and sometimes even my standard, ways of thinking. I have seen that the non-existing infrastructure for PureOS is hurting us a lot, but I needed to cope with it until the time when I would be able to say: not anymore, we are starting to build our own infrastructure. I was coping with Cloudflare being assholes to Tor users, but now we are also shifting away from them. I came to a team where people didn't properly understand what we are building and why; a very small and not that efficient team.

Now, we have employed a dedicated and hard-working person on operations (Goran) whom I trust. We have a dedicated support person (Mladen) who tries hard to work with people. A very creative visual mastermind (Francois). We have a capable Debian Developer (Matthias Klumpp) working on the new PureOS infra. We have capable and dedicated sysadmins (Theo and Stelio), which we didn't even have in the past. We are trying to LEVEL UP Free Software and unify it in a convenient solution, an effort led by Joey Hess. We have a hard-working PureOS developer (Hema) who is coping with the current non-existent PureOS infra. We have a GNOME Board of Directors member (Jeff) who is trying to light up our image in the world (working with James to try to bring some light into our shadows caused by infinite supply chain delays). We have created an Advisory Board for Freedom, Privacy and Security whose members I don't want to name now, as we are preparing to announce that soon (and trust me, we have good people in here).
But the most important thing here is not that they are all capable or cool people. It is the core value in all of them: they care about freedom, and I trust them on their paths. Trust is always important, but in Purism it is essential for our work. I built the workflow without time management (everyone spends their time every single day as they see fit, as long as the work gets done). And we don't create insanely short deadlines just because everyone else thinks it is important (rarely is something more important than our time freedom). So the trust is built out of knowledge, and the knowledge I have about them and their work exists because we freely share, with no strings attached. Because of them, and other good people from our community, I have the energy to sacrifice my entire time for Purism.

It is not black and white: the CEO and I don't always agree, some members of my team don't always agree with me or I with them, and some people in the community are very rude, impolite and don't respect our work. But even with disagreement, everyone in Purism finds agreement in the end (we use facts in our judgments), and all the people who just try to disturb my team's work and mine aren't as efficient as all the lovely words from people who believe in us, who send us words of support and who share ideas and their thoughts with us. There is no greater satisfaction for me than reading a personal mail giving us kudos for the work, with an understanding of the underlying amount of work and issues.

While we are limited in resources, we have had the occasional outcry from the community to help us. Now I want to help them to help me (you see the freedom of sharing here?). PureOS now has a wiki. It will be a community wiki which is endorsed by Purism as a company. Yes, you read that right: Purism considers its community part of the company (you don't need to get a paycheck to be a Purism member). That is why I call upon contributors (technical but mostly non-technical too) to help us make the PureOS wiki the best resource on the net for our needs. Write tutorials for others, gather and put info on the wiki, create an ideas page and vote on entries so we can see what the community wants to see, chat with us so we all understand what, why and how we are working on things. Make it as transparent as possible. Everyone interested, please get in touch with our teams by either poking us online (IRC, social accounts) or via email (our personal addresses or [hr, pr, feedback]@puri.sm).

To finish this writing (as it is 8am here and I still want to rest a bit, because I will have meetings for 6 hours straight today), I wanted to share some personal insight into a few things from my point of view. Despite all the troubles and the people who tried to make our time even harder (and it is already hard from all the limitations which come naturally today with our kind of work), we still create products, we still ship them, we still improve step by step, we still hire and we are still building. Keeping all that together and making progress is for me a milestone greater than just creating a technical product. I just hope we will continue and improve our pace so we can start progressing towards my personal great goal: integrating and cooperating with most of the FLOSS ecosystem.

P.S. Yes, I also (finally!) became an official Debian Developer; I still didn't have time to sit and properly think and cry (as every good man does) about it.

11 December 2015

Lunar: Reproducible builds: week 32 in Stretch cycle

The first reproducible world summit was held in Athens, Greece, from December 1st to 3rd with the support of the Linux Foundation, the Open Tech Fund, and Google. Faidon Liambotis has been an amazing help in sorting out all the local details, and the people at ImpactHub Athens have been perfect hosts.

North of Athens from the Acropolis, with ImpactHub in the center

Nearly 40 participants from 14 different free software projects had very busy days sharing knowledge, building understanding, and producing actual patches. Anyone interested in cross-project discussions should join the rb-general mailing list. What follows focuses mostly on what happened for Debian this previous week; a more detailed report about the summit will follow soon. You can also read the reports from Joachim Breitner for Debian, Clemens Lang for MacPorts, Georg Koppen for Tor, and Dhiru Kholia for Fedora, while Ludovic Courtès wrote one for Guix and for the GNU project.

Infrastructure

Several discussions at the meeting helped refine a shared understanding of what kind of information should be recorded about a build, and how it could be used. Daniel Kahn Gillmor sent a detailed update on how .buildinfo files should become part of the Debian archive, with some key changes compared to what we had in mind at DebConf15. Hopefully, ftpmasters will be able to comment on the updated proposal soon.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: fades, triplane, caml-crush, globus-authz. Several other packages became reproducible after getting fixed, and some uploads fixed some reproducibility issues, but not all of them. Among the patches submitted which have not made their way to the archive yet: akira sent proposals on how to make bash reproducible, and Alexander Couzens submitted a patch upstream to add support for SOURCE_DATE_EPOCH in the grub image generator (#787795).

reproducible.debian.net

An issue with some armhf build nodes was tracked down to a bad interaction between the uname26 personality and new glibc (Vagrant Cascadian). A Debian package was created for koji, the RPM building and tracking system used by Fedora amongst others; it is currently waiting for review in the NEW queue. (Ximin Luo, Marek Marczykowski-Górecki)

diffoscope development

diffoscope now has a dedicated mailing list to better accommodate its growing user and developer base. Going through diffoscope's guts together enabled several new contributors: Baptiste Daroussin, Ed Maste, Clemens Lang, Mike McQuaid, and Joachim Breitner all contributed their first patches to improve portability or add new features. Regular contributors Chris Lamb, Reiner Herrmann, and Levente Polyak also submitted improvements.

diffoscope hacking session in Athens

The next release should support more operating systems, filesystem image comparison via libguestfs, HTML reports with on-demand loading, and parallel processing as the most noticeable improvements.

Package reviews

27 reviews have been removed, 17 added and 14 updated in the previous week. Chris Lamb and Val Lorentz filed 4 new FTBFS reports.

Misc.

Baptiste Daroussin has started to implement support for SOURCE_DATE_EPOCH in FreeBSD, in libpkg and the ports tree. Thanks to Joachim Breitner and h01ger for the pictures.

28 September 2014

Ean Schuessler: RoboJuggy at JavaOne

A few months ago I was showing my friend Bruno Souza the work I had been doing with my childhood friend and robotics genius, David Hanson. I had been watching what David was going through in his process of creating life-like robots with the limited industrial software available for motor control. I had suggested to David that binding motors to Blender control structures was a genuinely viable possibility. David talked with his forward-looking CEO, Jong Lee, and they were gracious enough to invite me to Hong Kong to make this exciting idea a reality. Working closely with the HRI team (Vytas, Gabrielos, Fabien and Davide) and David's friends and collaborators at OpenCog (Ben Goertzel, Mandeep, David, Jamie, Alex and Samuel), a month-long creative hack-fest yielded pretty amazing results. Bruno is an avid puppeteer, a global organizer of Java user groups and creator of Juggy the Java Finch, mascot of Java users and user groups everywhere. We started talking about how cool it would be to have a robot version of Juggy. When I was in China I had spent a little time playing with Mark Tilden's RSMedia and various versions of David's hobby-servo-based emotive heads. Bruno and I did a little research into the ROS Java bindings for the Robot Operating System and decided that if we could make that part of the picture, we had a great and fun idea for a JavaOne talk.

Hunting and gathering

I tracked down a fairly priced RSMedia in Alaska, Bruno put a pair of rubber Juggy puppet heads in the mail, and we were on our way.
We had decided that we wanted RoboJuggy to be able to run about untethered, and the new Raspberry Pi B+ seemed like the perfect low-power brain to make that happen. I like the Debian-based Raspbian distributions but had lately started using the netinst Pi images. These get your Pi up and running in about 15 minutes with a nicely minimalistic install instead of a pile of dependencies you probably don't need. I'd recommend that anyone interested in duplicating our work start their journey there: Raspbian UA Net Installer. Robots seem like an embedded application, but ROS only ships packages for Ubuntu. I was pleasantly surprised that there are very good instructions for building ROS from source on the Pi. I ended up following these instructions: Setting up ROS Hydro on the Raspberry Pi. Building from source means that all of your install ends up being isolated (in ROS speak) and your file locations and build instructions end up being subtly different. As explained in the linked article, this process is also very time consuming. One thing I would recommend once you get past this step is to use the UNIX dd command to back up your entire SD card to a desktop. This way, if you make a mess of things in later steps, you can restore your install to a pristine Raspbian+ROS install. If your SD drive was on /dev/sdb you might use something like this to do the job:
sudo dd bs=4M if=/dev/sdb | gzip > /home/your_username/image`date +%d%m%y`.gz
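Restoring later is the reverse operation; a sketch, assuming the card comes back up at the same device node (double-check the device name before writing to it):

# Substitute the filename of the backup you actually made.
gunzip -c /home/your_username/imageDDMMYY.gz | sudo dd of=/dev/sdb bs=4M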
Getting Java in the mix

Once you have your Pi all set up with minimal Raspbian and ROS, you are going to want a Java VM. The Pi uses an ARM CPU, so you need the corresponding version of Java. I tried getting things going initially with OpenJDK and I had some issues with that. I will work on resolving that in the future because I would like to have a 100% Free Software kit for this, but since this was for JavaOne I also wanted JDK8, which isn't available in Debian yet. So, I downloaded the Oracle JDK8 package for ARM: Java 8 JDK for ARM. At this point you are ready to start installing the ROS Java packages. I'm pretty sure the way I did this initially is wrong, but I was trying to reconcile the two install procedures for ROS Java and ROS Hydro for the Raspberry Pi. I started by following these directions for ROS Java, but with a few exceptions (you have to click the install-from-source link on the page to see the right stuff): Installing ROS Java on Hydro. Now these instructions are good, but this is a Pi running Debian and not an Ubuntu install. You won't run the apt-get package commands because those tools were already installed in your earlier steps. Also, this creates its own workspace, and we really want these packages all in one workspace. You can apparently chain workspaces in ROS, but I didn't understand this well enough to get it working, so what I did was this:
> mkdir -p ~/rosjava 
> wstool init -j4 ~/rosjava/src https://raw.github.com/rosjava/rosjava/hydro/rosjava.rosinstall
> source ~/ros_catkin_ws/install_isolated/setup.bash
> cd ~/rosjava
> # Make sure we've got all rosdeps and msg packages.
> rosdep update 
> rosdep install --from-paths src -i -y
and then copied the sources installed into ~/rosjava/src into my main ~/ros_catkin_ws/src. Once those were copied over I was able to run a standard build.
> catkin_make_isolated --install
Like the main ROS install, this process will take a little while. The Java gradle builds take an especially long time. One thing I would recommend to speed up your workflow is to have an x86 Debian install (native desktop, QEMU instance, docker, whatever) and do these same build-from-source installs there. This will let you try your steps out on a much faster system before you try them out on the Pi. That can be a big time saver.

Putting together the pieces

Around this time my RSMedia had finally showed up from Alaska. At first I thought I had a broken unit, because it would power up, complain about not passing system tests and then shut back down. It turns out that if you just put the D batteries in and miss the four AAs, it will kind of pretend to be working, so watch for that mistake. Here is a picture of the RSMedia when it first came out of the box:

Other parts were starting to roll in as well. The rubber puppet heads had made their way through Brazilian customs, and my Pololu Mini Maestro 24 had also shown up, as well as my servo motors and pan-and-tilt camera rig. I had previously bought a set of 10 motors for goofing around, so I bought the pan-and-tilt rig by itself for about $5(!), but you can buy a complete set for around $25 from a number of eBay stores.

Complete pan and tilt rig with motors for $25

A bit more about the Pololu. This astonishing little motor controller costs about $25 and gives you control of 24 motors with an easy-to-use, high-level serial API. It is probably also possible to control these servos directly from the Pi and eliminate this board, but that will be genuinely difficult because of the real-time timing issues. For $25 this thing is a real gem and you won't regret buying it.

Now it was time to start dissecting the RSMedia and getting control of its brain. Unfortunately a lot of great information about the RSMedia has floated away since it was in its heyday 5 years ago, but there is still some solid information out there that we need to round up and preserve. A great resource is the SourceForge-based website at http://rsmediadevkit.sourceforge.net. That site has links to a number of useful sites. You will definitely want to check out their wiki. To disassemble the RSMedia I followed their instructions. I will say, it would be smart to take more pictures as you go, because they don't take as many as they should. I took pictures of each board and its associated connections as I dismantled the unit, and that helped me get things back together later. Another important note: if all you want to do is solder onto the control board and not replace the head, then it's feasible to solder the board in place without completely disassembling the unit. Here are some photos of the disassembly:

Now I also had to start adjusting the puppet head, building an armature for the motors to control it and hooking it into the robot. I need to take some more photos of the actual armature. I like to use cardboard for this kind of stuff because it is so fast to work with and relatively strong. One trick I have learned about cardboard is that if you get something going with it and you need it to be a little more production strength, you can paint it down with fiberglass resin from your local auto store. Once it dries it becomes incredibly tough, because it soaks through the fibers of the cardboard and hardens around them.
You will want to do this in a well-ventilated area, but it's a great way to build super tough prototypes. Another prototyping trick I can suggest is using a combination of Velcro and zip ties to hook things together. The result is surprisingly strong and still easy to take apart if things aren't working out. Velcro self-adhesive pads stick to rubber like magic, and that is actually how I hooked the jaw servo onto the mask. You can see me torturing its initial connection here:

Since the puppet head had come all the way from Brazil, I decided to cook some chicken hearts in the churrascaria style while I worked on them in the garage. This may sound gross but I'm telling you, you need to try it! I soaked mine in soy sauce, Sriracha and Chinese cooking wine. Delicious, but I digress.

As I was eating my chicken hearts I was also connecting the pan-and-tilt armature onto the puppet's jaw and eye assembly. It took me most of the evening to get all this going, but by about one in the morning things were starting to look good! I only had a few days left to hack things together before JavaOne and things were starting to get tight. I had so much to do and had also started to run into some nasty surprises with the ROS Java control software. It turns out that ROS Java is less than friendly with ROS message structures that are not built in. I had tried to follow the provided instructions but was not (and still have not) been able to get that working: Using unofficial messages with ROS Java.

I still needed to get control of the RSMedia. Doing that required the delicate operation of soldering to its control board. On the board there is a set of pins that provide a serial interface to the ARM-based embedded Linux computer that controls the robot. To do that I followed these excellent instructions: Connecting to the RSMedia Linux Console Port. After some sweaty time bent over a magnifying glass, I had success.

I had previously purchased the USB-TTL232 accessory described in the article from the awesome Tanner Electronics store in Dallas. If you are a geek, I would recommend that you go there and say hi to its proprietor (and walking encyclopedia of electronics knowledge) Jim Tanner. It was very gratifying when I started a copy of minicom, set it to 115200, N, 8, 1, plugged the serial widget into the RSMedia and booted it up. I was greeted with a clearly recognizable Linux startup and console prompt. At first I thought I had done something wrong because I couldn't get it to respond to commands, but I quickly realized I had flow control turned on. Once it was turned off, I was able to navigate around the file system, execute commands and have some fun. A little research and I found this useful resource which let me get all kinds of body movements going: A collection of useful commands for the RSMedia.

At this point, I had a usable set of controls for the body as well as the neck armature. I had a controller running the industry's latest and greatest robotics framework that could run on the RSMedia without being tethered to power, and I had most of a connection to Java going. Now I just had to get all those pieces working together. The only problem was that time was running out: I only had a couple of days until my talk and still had to pack and square things away at work. The last day was spent doing things that I wouldn't be able to do on the road. My brother Erik (a fantastic artist) came over to help paint up the Juggy head and fix the eyeball armature.
He used a mix of oil paint and rubber cement, which stuck to the mask beautifully. I bought battery packs for the USB Pi power and the 6V motor control, and integrated them into a box that could sit below the neck armature. I fixed up a cloth neck sleeve that could cover everything. Luckily, during all this my beautiful and ever so supportive girlfriend Becca had helped me get packed, or I probably wouldn't have made it out the door. Welcome to San Francisco. THIS ARTICLE IS STILL BEING WRITTEN

25 May 2013

Joachim Breitner: My first CTAN package: Typesetting Continued Equalities

I recently had a TeX itch to scratch: I am working on a paper that has several multi-line continued equalities where, depending on the size of the expressions and the explanations of each step, I chose among a few layouts. But implementing the layout together with the actual code was inefficient, as switching the layout involved changing every line. So I came up with the package conteq, which allows you to typeset continued equations in a simple declarative manner, e.g.
\begin{conteq}
  e^{\pi\cdot i} \\
= -1               & Euler's formula \\
< 0                & this is an inequality \\
< \sqrt 3 \\
= \int e^{-x^2} dx & this is due to Gauss.
\end{conteq}
and allows you to select the layout via a parameter to the environment, or globally, or either. Also, the styling of the explanations (italics? wrapped in ...?) can be configured simply by redefining a macro. For more details and an overview of the various styles, check out the package documentation.
If this sounds useful to you, fetch the conteq package from CTAN. But beware: it uses quite recent features of the expl3 package, so you need at least the version from 2012/07/02 (TeXLive 2013 is good). You can file bug reports at the GitHub mirror of my git repository.
I'd like to thank Bruno Le Floch and Joseph Wright, who made me aware of expl3 via various TeX Exchange questions. I hadn't heard of this term before, but supposedly it is the right translation for the German word "Gleichungskette".

2 May 2013

Francesca Ciceri: May 1st

Anarchists in Carrara, May 1st

Sun.
Anarchy flags.
Red roses for Gaetano Bresci, Sacco and Vanzetti, Giordano Bruno (among others).
Fava beans and Pecorino.
Old anarchist songs, at the top of your lungs.
The beautiful streets of Carrara. May 1st is my Christmas.

7 April 2013

Olivier Berger: Migrating picture tags from KPhotoAlbum to digiKam (or others) through IPTC

I've occasionally used KPhotoAlbum for a few years and eventually added many tags to the pictures. But I've decided I wanted to try other tools, and digiKam seems to be the best option from the many reviews I've read. Still, there's apparently no automatic feature to import into digiKam the tags set in KPhotoAlbum. Fortunately, some smart people have implemented Perl tools that allow one to overcome this issue. The process involves modifying the pictures to save the tags inside the files, using the IPTC standard. Then, digiKam will be able to load the tags from the modified files. Here's a copy of the (translated) kphotoalbum2iptc.pl script (the original is in French), which I copied from this blog post (in French too). I've been able to generate .deb packages for the two required Perl library dependencies using the method described in the referenced post, with: dh-make-perl --build --cpan Image::Kimdaba and dh-make-perl --build --cpan Image::IPTCInfo
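For the curious, the core of such a migration boils down to writing each picture's tags as IPTC keywords; a minimal sketch using Image::IPTCInfo (the filename and tag are illustrative, and the real script reads the tags from KPhotoAlbum's index via Image::Kimdaba):

use Image::IPTCInfo;

# Load existing IPTC data from the picture, or start fresh if there is none.
my $info = Image::IPTCInfo->new('photo.jpg') || Image::IPTCInfo->create('photo.jpg');

# Add the tag as an IPTC keyword, then write it back into the file.
$info->AddKeyword('holidays');
$info->Save();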

Thanks to Pierre Doucet and Bruno Adele for sharing this. Hope this helps.

15 September 2012

Luca Bruno: Software Freedom Day celebrations and Debian

Right now, in many parts of the world, people are celebrating Software Freedom Day 2012. The Debian project is participating in some of these events as well, with talks, demos and partying. In particular, you can find our project members actively involved in different locations and activities, among which: Novo Hamburgo, RS, in Brazil; Quiliano, SV, in Italy; and Rugby in the UK, with a series of hands-on live demonstrations. Personally, right now I'm celebrating in Italy, attending a talk by the famous kernel hacker Alessandro Rubini (a really great speech about our freedom and how software impacts it). Cheers from the Italian Riviera!

26 August 2011

Luca Bruno: OpenOCD and the Bus Pirate

As an enthusiastic Open Hardware supporter, I regularly read the always brilliant Dangerous Prototypes blog.
Last week it featured a short but complete tutorial about unbricking a Seagate Dockstar with OpenOCD and the Bus Pirate. The Bus Pirate is an open-source hacker multi-tool that talks to electronic stuff and can be used as a JTAG adaptor (and much more!).
OpenOCD, the widespread free JTAG debugger, recently gained support for it. The good news is that, after almost a year and a half of development, OpenOCD 0.5.0 has finally been released and is currently available in both Debian testing and unstable. Get it from the repository while it's hot; no need to fiddle with autotools and build tools anymore :) As a side note for interested parties, SWD (Serial Wire Debug) support is currently under development, along with its companion library (libswd). Hack and enjoy!
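For the curious, here is a minimal sketch of pointing OpenOCD at a Bus Pirate; the serial device and the board config are placeholders for your actual hardware, and the file and command names should be double-checked against the OpenOCD release you have installed:
# Sketch only: /dev/ttyUSB0 and the board config are placeholders.
# interface/buspirate.cfg ships in the OpenOCD config tree.
openocd -f interface/buspirate.cfg \
        -c "buspirate_port /dev/ttyUSB0" \
        -f board/your_board.cfg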

5 March 2011

John Goerzen: Visiting Purdue

Terah went to college at Purdue University, and always enjoyed basketball games there. I've not been much of a sports fan, but have enjoyed watching Purdue games with her on TV. Terah had been wanting to see a game in person for a while, so a couple of weeks ago, we went. It was a really fun weekend! When we planned the trip, we had no idea that the Purdue-Ohio State game was going to be such a big one. We walked over from the Union Club Hotel, where we were staying on the Purdue campus. People were streaming towards Mackey Arena from all directions. Once inside, it was already loud and buzzing: people cheering, the band playing. I've never experienced anything quite so loud. The game started out badly for Purdue; they were down a few points up front. The entire game was a close one, and the crowd sometimes got so loud that nothing else could be heard, not even the band or the announcer. When it became clear at the end that Purdue had won the game, the people behind me, and apparently 12,000 others, began screaming at the top of their lungs. My hearing did eventually return to normal. So did my throat, which had gotten rather sore from cheering myself.

This was our first road trip since I got my amateur radio license. I had a lot of fun visiting with people as we drove. I talked to a retired railroad engineer who used to take an amateur radio with him in his locomotive. As he went through a certain town where he had friends he liked to talk with on the radio, he'd get their attention by blowing CQ in Morse code with the train's whistle. Some people in Kansas City had us laughing as we passed through. In Missouri, I talked with some farmers and a World War II vet. In Champaign, IL, I visited with a retired Unix systems administrator who had spent decades working with Unix operating systems.

Our hotel was connected to the Purdue Memorial Union, a large and historic building. Besides having some ice cream at the Sweet Shop one evening, we also spent a bit of time exploring it. I noticed that the Purdue Amateur Radio Club was having a testing session in there. We walked past once it was underway, and one of the students was not very busy. I introduced myself and asked if we could see their shack. It was neat to see all the equipment, some of it quite old, in the room that they must have been using for decades.

Terah, of course, had ideas for visiting a number of her favorite places while we were there. We visited Arni's and Bruno's, both pizza places. Bruno's happens to be a Swiss pizza place, so much to my surprise, I had Wienerschnitzel there, which was excellent. We ate a (week late) Valentine's dinner at Bistro 501. On our way back, we also chatted with various people on the radio, though not quite as much. We got helpful suggestions for which route to take, and stopped at the excellent Bobby D's Merchant St. BBQ in Emporia, KS for supper. The boys had also enjoyed their weekend with grandparents, but were glad to see us back. They were particularly interested in watching a lot of train videos from YouTube with me the next day.

29 January 2011

Adrian von Bidder: Sci-Fi classics (and other stuff)

I again find myself spending time watching movies ... catching up on all those friends who chastise me for not having seen whatever movie we're talking about. For example, the first three Terminator movies. I'm always fond of old-style science fiction, so the first of these is quite cute. And then there's the liquid metal effect introduced in Judgment Day, which also is cool, and at the time certainly was at the bleeding edge of what was possible in CG. But by the third movie the concept itself is sure showing its age; although it's still well-made action, I didn't enjoy it as much. I haven't got hold of Salvation, nor of Bruno Mattei's unofficial Terminator 2.

Speaking about old-fashioned sci-fi: Blade Runner has a very nice retro look in the buildings and furnishings. And, although this is more because it also was made in the early 80s, very retro computer consoles :-) Fun to watch, but I'm not quite happy with how the plot turns out in the end. But then, it's a Hollywood production, so should I be surprised?

But there's not only sci-fi ... The Shawshank Redemption is a gripping story about an innocent man serving a 20-year sentence for murder. A banker entering the rough world of prison, bets are taken on how long he'll last. But obviously it turns out that he's really quite tough ...

I'm getting good at this ... I thought "Jim Jarmusch" after about 20 seconds of the opening credits of Down By Law. While it is (again) about innocent people in prison, the focus here is solely on the interaction of three people sharing a cell, excluding almost everything else. Done beautifully in black and white, and while it's not fast-paced in any way, the plot has a steady flow to it.

10 November 2010

Luca Bruno: Online sprints, or how to revive a L10N team

The Italian L10N team has not been very active nor growing in recent years. In particular, we have pretty much failed at attracting new members to our team, with the result that untranslated files are piling up and manpower is scarce. Following a suggestion from our uber-active Francesca, we decided to try a new move to invert the trend: organizing brief weekly online sprints, open to everybody, where graybeard translators help newcomers get to grips with the Debian L10N infrastructure while collaboratively working on yet-untranslated targets. Last week, we tried our first and very introductory sprint, with a preliminary meeting on IRC to give instructions and set up ad-hoc pads. As a result, we ended up with the linux-2.6 po-debconf templates and a web page completely translated and proofread by almost fifteen people in just a couple of hours. The key point, however, is that the majority of participants were fresh L10N newbies, whom we hope will join us permanently very soon after this first contact. Encouraged by this initial positive result, we have already announced our next sprint for Thursday the 11th, which will be focused on translating package descriptions (preceded by a crash course on DDTSS, the related web interface). We hope that even more users will join us this time, and encourage other stalled translation teams to experiment with a similar approach to revive activity and encourage participation.

22 September 2010

Luca Bruno: Report from the Debian/Ubuntu Community Conference, ITA 2010

From the 17th to the 19th of September, the 5th edition of the Italian Debian Community Conference took place in Perugia, Italy, attended by many contributors and users. For the first time, the event was organized in collaboration with the Italian Ubuntu community, so as to create a new joint conference fostering shared contributions and emphasizing the large common ground between our projects. This new experimental kind of mini-conference was thus labeled DUCC-IT, to reflect both the local profile and the mutual collaboration. After the initial social night, spent discussing several ongoing free software efforts and having dinner all together, the conference officially opened on Saturday, hosted at the University of Perugia, with a series of talks and hands-on sessions aimed at recruiting new contributors to work on development, translations, documentation writing and marketing. It was a good opportunity to celebrate the Software Freedom Day too, in collaboration with FSUG Italia and with the participation of some local schools.

DUCC-IT '10 Group Photo
The event was also attended by some members of the Debian Women and Ubuntu Women teams (whose goal is to promote women's participation in both projects), who organized a round-table debate taking the Italian panorama as a case study. The discussion embraced different topics, ranging from the wide difference in numbers to the deep causes of this phenomenon and how to improve the situation. With the help of the hacklab staff (hosting the debate), an audio/video stream was made available in real time, and many remote participants joined us with comments and questions. This new kind of collaboration between our communities was found to be really positive, and more events have already been drafted for the next year, including a translation sprint and a contributors meeting. We encourage local communities worldwide to try and engage in a similar experiment: organizing and joining a DUCC event will be pure fun. A detailed report of the conference will be available soon, complete with photos, participants' comments, video recordings and slides for the talks.
We'd really like to thank the Math Department of the University of Perugia, the Projectz On Island hacklab, FSUG Italia, the Ubuntu-it community and everyone who contributed to this event.

29 August 2010

Luca Bruno: Too many gurus spoil the plug

Being a rather patient and peaceful guy, I acknowledge that perfection is a difficult goal, and I rarely rant publicly about troubles I've stumbled upon.
Today, however, I feel I have to wholeheartedly agree with Bernd about the Guruplug: it has been a waste of money. I received mine in May, with the order placed and paid for in February. The first thing I noticed was the issue with the power supply: I really think they forgot QA testing on these machines, as my PSU (like many others; just skim through the official forum) blew up just an hour after power-up. I wasn't lucky enough to admire the over-heating and internal (mis-)cooling, as the unit went immediately back to the GlobalScale sales department for an RMA under warranty. And then I waited for GlobalScale, for an actually working unit. And still I am waiting; it's almost September now. Patiently waiting (hoping, I'd say) for some answers. I'm not sure who to blame here, Marvell, GlobalScale or both, for these issues with regard to QA, design and sales. But I'm quite sure the final result has already been perfectly described: a major fail.

21 April 2010

Josselin Mouette: Anything can happen

After skipping 3 entire releases, and 18 months later, here we are, finally: GDM 2.30 is entering unstable.

How can you be so late? For those who haven't followed and just wondered why Debian is so late ("this is lame", "this sucks", "Ubuntu is better because they have the latest version", "and Fedora is even better because they even have versions that don't work at all"), here is the short story: the GDM rewrite wasn't really usable until 2.28 (which is the version with which Ubuntu started to ship it, incidentally). Add to that the time to make a transition plan and to integrate it properly, and that makes actually only 6 months. Big thanks go to Luca Bruno (Lethalman), who did most of the job. A quick look at the changelog will give you an idea of the amount of work involved to bring it to our quality standards.

GDM 2.20 and 2.30
Since the rewrite has absolutely zero compatibility with previous versions, it will not be upgraded in place. Therefore, while newly installed systems will get GDM 2.30 by default for squeeze, those upgrading from lenny will keep GDM 2.20. The 2.20 version will be dropped after the squeeze release. If you want to upgrade your GDM, simply run apt-get install gdm3. It should work for simple setups, and there's a hack that makes upgrades work even when logged on X. Everyone who needs advanced features (such as LTSP people) should make sure GDM 2.30 suits their needs during the squeeze cycle, since the old version will not be here anymore afterwards.

GDM packages need your help
Finally, here is a call for translations. Anyone can help: just grab the gdm3 sources, get the .pot files and translate them into your language. Beware, there is one file in debian/po for the desktop files and one in debian/po-up for the patches. (I will try to merge them in a later version.) Then submit your translations as bug reports.
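As a rough sketch of how one might start such a translation, assuming the standard gettext tooling and the file layout described above (the locale code xx and the templates.pot file name are placeholders, not verified against the package):
apt-get source gdm3
cd gdm3-*/
# Start a translation of the desktop-file strings; the .pot file
# name is an assumption -- check debian/po for the actual file:
msginit -i debian/po/templates.pot -o debian/po/xx.po -l xx
# Translations of the patches live in debian/po-up, as noted above.
# Edit the .po file, then submit it as a bug report against gdm3.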
