Search Results: "dhs"

12 July 2023

Reproducible Builds: Reproducible Builds in June 2023

Welcome to the June 2023 report from the Reproducible Builds project. In our reports, we outline the most important things that we have been up to over the past month. As always, if you are interested in contributing to the project, please visit our Contribute page on our website.


We are very happy to announce the upcoming Reproducible Builds Summit, which is set to take place from October 31st to November 2nd 2023 in the vibrant city of Hamburg, Germany. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving. We are thrilled to host the seventh edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin and Athens. If you're interested in joining us this year, please make sure to read the event page, which has more details about the event and location. (You may also be interested in attending PackagingCon 2023, held a few days before in Berlin.)
This month, Vagrant Cascadian will present at FOSSY 2023 on the topic of Breaking the Chains of Trusting Trust:
Corrupted build environments can deliver compromised cryptographically signed binaries. Several exploits in critical supply chains have been demonstrated in recent years, proving that this is not just theoretical. The most well-secured build environments are still single points of failure when they fail. […] This talk will focus on the state of the art from several angles in related Free and Open Source Software projects, what works, current challenges and future plans for building trustworthy toolchains you do not need to trust.
Hosted by the Software Freedom Conservancy and taking place in Portland, Oregon, FOSSY aims to be a community-focused event: Whether you are a long time contributing member of a free software project, a recent graduate of a coding bootcamp or university, or just have an interest in the possibilities that free and open source software bring, FOSSY will have something for you . More information on the event is available on the FOSSY 2023 website, including the full programme schedule.
Marcel Fourné, Dominik Wermke, William Enck, Sascha Fahl and Yasemin Acar recently published an academic paper in the 44th IEEE Symposium on Security and Privacy titled "It's like flossing your teeth: On the Importance and Challenges of Reproducible Builds for Software Supply Chain Security". The abstract reads as follows:
The 2020 Solarwinds attack was a tipping point that caused a heightened awareness about the security of the software supply chain and in particular the large amount of trust placed in build systems. Reproducible Builds (R-Bs) provide a strong foundation to build defenses for arbitrary attacks against build systems by ensuring that given the same source code, build environment, and build instructions, bitwise-identical artifacts are created.
However, in contrast to other papers that touch on some theoretical aspect of reproducible builds, the authors' paper takes a different approach. Starting with the observation that much of the software industry believes R-Bs are too far out of reach for most projects, and conjoining that with the goal "to help identify a path for R-Bs to become a commonplace property", the paper follows a different methodology:
We conducted a series of 24 semi-structured expert interviews with participants from the Reproducible-Builds.org project, and iterated on our questions with the reproducible builds community. We identified a range of motivations that can encourage open source developers to strive for R-Bs, including indicators of quality, security benefits, and more efficient caching of artifacts. We identify experiences that help and hinder adoption, which heavily include communication with upstream projects. We conclude with recommendations on how to better integrate R-Bs with the efforts of the open source and free software community.
A PDF of the paper is now available, as is an entry on the CISPA Helmholtz Center for Information Security website and an entry under the TeamUSEC Human-Centered Security research group.
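The bitwise-identical property described in the abstract is easy to test in practice: hash the artifacts of two independent builds and compare. Here is a minimal sketch (my own illustration, not taken from the paper):

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_reproducible(artifact_a, artifact_b):
    """Two independent builds are reproducible iff the resulting
    artifacts are bitwise-identical, i.e. their digests match."""
    return sha256_of(artifact_a) == sha256_of(artifact_b)
```

In real deployments this comparison is done between builds from different machines, paths and dates, which is exactly where unintended variation creeps in.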
On our mailing list this month:
The antagonist is David Schwartz, who correctly says "There are dozens of complex reasons why what seems to be the same sequence of operations might produce different end results", but goes on to say "I totally disagree with your general viewpoint that compilers must provide for reproducability [sic]". Dwight Tovey and I (Larry Doolittle) argue for reproducible builds. I assert that any program, especially a mission-critical program like a compiler, that cannot reproduce a result at will is broken. Also, it's commonplace to take a binary from the net and check whether it was trojaned by attempting to recreate it from source.

Lastly, there were a few changes to our website this month too, including Bernhard M. Wiedemann adding a simplified Rust example to our documentation about the SOURCE_DATE_EPOCH environment variable [ ], Chris Lamb making it easier to parse our summit announcement at a glance [ ], Mattia Rizzolo adding the summit announcement itself [ ][ ][ ] and Rahul Bajaj adding a taxonomy of variations in build environments [ ].
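For illustration, the SOURCE_DATE_EPOCH convention that the documentation above covers boils down to one rule: if the variable is set, a build tool should use it instead of the current time for any timestamp it embeds. A minimal sketch (function name is mine):

```python
import os
import time

def build_timestamp(now=None):
    """Return the timestamp a build should embed, honouring the
    SOURCE_DATE_EPOCH convention: if the environment variable is set,
    use it so that repeated builds embed identical times."""
    sde = os.environ.get("SOURCE_DATE_EPOCH")
    if sde is not None:
        return int(sde)
    return int(now if now is not None else time.time())
```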

Distribution work

27 reviews of Debian packages were added, 40 were updated and 8 were removed this month, adding to our knowledge about identified issues. A new randomness_in_documentation_generated_by_mkdocs toolchain issue was added by Chris Lamb [ ], and the deterministic flag was dropped from the paths_vary_due_to_usrmerge issue as we are not currently testing usrmerge issues [ ].
Roland Clobus posted his 18th update of the status of reproducible Debian ISO images on our mailing list. Roland reported that "all major desktops build reproducibly with bullseye, bookworm, trixie and sid", but he also mentioned, amongst many changes, that not only are the non-free images being built (and are reproducible) but that the live images are generated officially by Debian itself. [ ]
Jan-Benedict Glaw noticed a problem when building NetBSD for the VAX architecture. Noting that "Reproducible builds [are] probably not as reproducible as we thought", Jan-Benedict goes on to describe how two builds from different source directories won't produce the same result, and adds various notes about sub-optimal handling of the CFLAGS environment variable. [ ]
F-Droid added 21 new reproducible apps in June, resulting in a new record of 145 reproducible apps in total. [ ] (This page currently shows missing data for March to May 2023.) F-Droid contributors also reported an issue with broken resources in APKs making some builds unreproducible. [ ]
Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.

Upstream patches

Testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In June, a number of changes were made by Holger Levsen, including:
  • Additions to a (relatively) new Documented Jenkins Maintenance (djm) script to automatically shrink a cache & save a backup of old data [ ], automatically split out previous months' data from logfiles into specially-named files [ ], prevent concurrent remote logfile fetches by using a lock file [ ] and to add/remove various debugging statements [ ].
  • Updates to the automated system health checks to, for example, correctly detect new kernel warnings due to a wording change [ ] and to explicitly observe which old/unused kernels should be removed [ ]. This was related to an improvement so that various kernel issues on Ubuntu-based nodes are automatically fixed. [ ]
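One of the djm tasks listed above, splitting previous months' data out of a live logfile into specially-named files, could be sketched as follows (the function and its assumptions about ISO-dated log lines are mine, not the actual script):

```python
from collections import defaultdict

def split_by_month(lines, current_month):
    """Group ISO-dated log lines ('YYYY-MM-DD ...') by month, returning
    a dict of month -> lines for every month except the current one,
    which would stay in the live logfile."""
    buckets = defaultdict(list)
    for line in lines:
        month = line[:7]  # 'YYYY-MM' prefix of an ISO date
        if month != current_month:
            buckets[month].append(line)
    return dict(buckets)
```

Each bucket would then be written to a file named after its month (e.g. `logfile.2023-05`), keeping the live file small.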
Holger and Vagrant Cascadian updated all thirty-five hosts running Debian on the amd64, armhf, and i386 architectures to Debian bookworm, with the exception of the Jenkins host itself, which will be upgraded after the release of Debian 12.1. In addition, Mattia Rizzolo updated the email configuration for the @reproducible-builds.org domain to correctly accept incoming mails from jenkins.debian.net [ ] as well as to set up DomainKeys Identified Mail (DKIM) signing [ ]. And working together with Holger, Mattia also updated the Jenkins configuration to start testing Debian trixie, which resulted in Debian buster no longer being tested. And, finally, Jan-Benedict Glaw contributed patches for improved NetBSD testing.

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via:

23 March 2022

Matthew Garrett: AMD's Pluton implementation seems to be controllable

I've been digging through the firmware for an AMD laptop with a Ryzen 6000 that incorporates Pluton for the past couple of weeks, and I've got some rough conclusions. Note that these are extremely preliminary and may not be accurate, but I'm going to try to encourage others to look into this in more detail. For those of you at home, I'm using an image from here, specifically version 309. The installer is happy to run under Wine, and if you tell it to "Extract" rather than "Install" it'll leave a file sitting in C:\DRIVERS\ASUS_GA402RK_309_BIOS_Update_20220322235241 which seems to have an additional 2K of header on it. Strip that and you should have something approximating a flash image.
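Stripping the extra header could be done like this (a sketch assuming "2K" means exactly 2048 bytes, which is my reading of the post):

```python
def strip_header(src, dst, header_len=2048):
    """Copy a firmware update file minus its leading vendor header,
    leaving something that approximates a raw flash image."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        fin.seek(header_len)  # skip the assumed 2K header
        fout.write(fin.read())
```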

Looking for UTF16 strings in this reveals something interesting:

Pluton (HSP) X86 Firmware Support
Enable/Disable X86 firmware HSP related code path, including AGESA HSP module, SBIOS HSP related drivers.
Auto - Depends on PcdAmdHspCoreEnable build value
NOTE: PSP directory entry 0xB BIT36 have the highest priority.
NOTE: This option will NOT put HSP hardware in disable state, to disable HSP hardware, you need setup PSP directory entry 0xB, BIT36 to 1.
// EntryValue[36] = 0: Enable, HSP core is enabled.
// EntryValue[36] = 1: Disable, HSP core is disabled then PSP will gate the HSP clock, no further PSP to HSP commands. System will boot without HSP.
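Scanning a flash image for UTF-16 strings, as done above, can be sketched in Python; this is a rough approximation of what `strings -el` does, looking for runs of printable ASCII encoded as UTF-16LE (each character followed by a NUL byte):

```python
import re

def utf16le_strings(data, min_chars=6):
    """Find printable-ASCII runs encoded as UTF-16LE, the pattern
    typical of human-readable strings embedded in UEFI firmware."""
    pattern = re.compile(rb"(?:[\x20-\x7e]\x00){%d,}" % min_chars)
    return [m.group().decode("utf-16-le") for m in pattern.finditer(data)]
```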

"HSP" here means "Hardware Security Processor" - a generic term that refers to Pluton in this case. This is a configuration setting that determines whether Pluton is "enabled" or not - my interpretation of this is that it doesn't directly influence Pluton, but disables all mechanisms that would allow the OS to communicate with it. In this scenario, Pluton has its firmware loaded and could conceivably be functional if the OS knew how to speak to it directly, but the firmware will never speak to it itself. I took a quick look at the Windows drivers for Pluton and it looks like they won't do anything unless the firmware wants to expose Pluton, so this should mean that Windows will do nothing.

So what about the reference to "PSP directory entry 0xB BIT36 have the highest priority"? The PSP is the AMD Platform Security Processor - it's an ARM core on the CPU package that boots before the x86. The PSP firmware lives in the same flash image as the x86 firmware, so the PSP looks for a header that points it towards the firmware it should execute. This gives a pointer to a "directory" - a list of different object types and where they're located in flash (there's a description of this for slightly older AMDs here). Type 0xb is treated slightly specially. Where most types contain the address of where the actual object is, type 0xb contains a 64-bit value that's interpreted as enabling or disabling various features - something AMD calls "soft fusing" (Intel have something similar that involves setting bits in the Firmware Interface Table). The PSP looks at the bits that are set here and alters its behaviour. If bit 36 is set, the PSP tells Pluton to turn itself off and will no longer send any commands to it.
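The bit test itself is trivial; a minimal sketch, where the constant name is mine and the semantics come from the firmware comments quoted earlier:

```python
HSP_DISABLE_BIT = 36  # per the quoted firmware strings

def hsp_disabled(entry_value):
    """Interpret a PSP directory entry 0xB ('soft fuse') value: bit 36
    set means the PSP gates the HSP (Pluton) clock and sends it no
    further commands."""
    return (entry_value >> HSP_DISABLE_BIT) & 1 == 1
```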

So, we have two mechanisms to disable Pluton - the PSP can tell it to turn itself off, or the x86 firmware can simply never speak to it or admit that it exists. Both of these imply that Pluton has started executing before it's shut down, so it's reasonable to wonder whether it can still do stuff. In the image I'm looking at, there's a blob starting at 0x0069b610 that appears to be firmware for Pluton - it contains chunks that appear to be the reference TPM2 implementation, and it broadly decompiles as valid ARM code. It should be viable to figure out whether it can do anything in the face of being "disabled" via either of the above mechanisms.

Unfortunately for me, the system I'm looking at does set bit 36 in the 0xb entry - as a result, Pluton is disabled before x86 code starts running and I can't investigate further in any straightforward way. The implication that the user-controllable mechanism for disabling Pluton merely disables x86 communication with it rather than turning it off entirely is a little concerning, although (assuming Pluton is behaving as a TPM rather than having an enhanced set of capabilities) skipping any firmware communication means the OS has no way to know what happened before it started running even if it has a mechanism to communicate with Pluton without firmware assistance. In that scenario it'd be viable to write a bootloader shim that just faked up the firmware measurements before handing control to the OS.

The bit 36 disabling mechanism seems more solid? Again, it should be possible to analyse the Pluton firmware to determine whether it actually pays attention to a disable command being sent. But even if it chooses to ignore that, if the PSP is in a position to just cut the clock to Pluton, it's not going to be able to do a lot. At that point we're trusting AMD rather than trusting Microsoft, but given that you're also trusting AMD to execute the code you're giving them to execute, it's hard to avoid placing trust in them.

Overall: I'm reasonably confident that systems that ship with Pluton disabled via setting bit 36 in the soft fuses are going to disable it sufficiently hard that the OS can't do anything about it. Systems that give the user an option to enable or disable it are a little less clear in that respect, and it's possible (but not yet demonstrated) that an OS could communicate with Pluton anyway. However, if that's true, and if the firmware never communicates with Pluton itself, the user could install a stub loader in UEFI that mimics the firmware behaviour and leaves the OS thinking everything was good when it absolutely is not.

So, assuming that Pluton in its current form on AMD has no capabilities outside those we know about, the disabling mechanisms are probably good enough. It's tough to make a firm statement on this before I have access to a system that doesn't just disable it immediately, so stay tuned for updates.


20 October 2021

Arturo Borrero González: Iterating on how we do NFS at Wikimedia Cloud Services

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero González. NFS is a central piece of infrastructure that is essential to services like Toolforge. Recently, the Cloud Services team at Wikimedia had been reviewing how we do NFS.

The current situation

NFS is a central piece of technology for some of the services that the Wikimedia Cloud Services team offers to the community. We have several shares that power different use cases: Toolforge user home directories live on NFS, and Cloud VPS users can also access dumps using this protocol. The current setup involves several physical hardware servers, with about 20TB of storage, offering shares over 10G links to the cloud. For the system to be more fault-tolerant, we duplicate each share for redundancy using DRBD. Running NFS on dedicated hardware servers has traditionally offered us advantages, mostly in the performance and capacity fields. As time has passed, we have been enumerating more and more reasons to review how we do NFS. For one, the current setup is in violation of some of our internal rules regarding realm separation. Additionally, we had been longing for additional flexibility in managing our servers: we wanted to use virtual machines managed by OpenStack Nova. The DRBD-based high-availability system mostly required a hand-crafted procedure for failover/failback. There are also some scalability concerns, as NFS is easy to grow up but not to grow horizontally, and of course we have to be able to keep the tenancy setup while doing so, something that NFS does by using LDAP/Unix users and which may get complicated when growing. In general, the servers have become "too big to fail", clearly technical debt, and it has taken us years to decide on taking on the task to rethink the architecture.
It's worth mentioning that in an ideal world, we wouldn't depend on NFS, but the truth is that it will still be a central piece of infrastructure for years to come in services like Toolforge. Over a series of brainstorming meetings, the WMCS team evaluated the situation and sorted out the many moving parts. The team managed to boil down the potential service future to two competing options. Then we decided to research both options in parallel. For a number of reasons, the evaluation was timeboxed to three weeks. Both ideas had a couple of points in common: the NFS data would be stored on our Ceph farm via Cinder volumes, and we would rely on Ceph reliability to avoid using DRBD. Another open topic was how to back up data from Ceph, to store our important bits in more than one basket. We will get to the backup topic later.

The manila experiment

The Wikimedia Foundation was an early adopter of some OpenStack components (Nova, Glance, Designate, Horizon), but Manila was never evaluated for usage until now. Our approach for this experiment was to closely follow the upstream guidelines. We read the documentation and tried to understand the different setups you can build with Manila. As we often feel with other OpenStack components, the documentation doesn't perfectly describe how to introduce a given component in your particular local setup. Here we use an admin-controlled flat-topology Neutron network. This network is shared by all tenants (or projects) in our OpenStack deployment. Also, Manila can use many different driver backends, for things like NetApps or CephFS, which we don't use, yet. After some research, the generic driver was the one that seemed to better fit our use case. The generic driver leverages Nova virtual machine instances plus Cinder volumes to create and manage the shares. In general, Manila supports two operational modes depending on whether it should create/destroy the share servers (i.e., the virtual machine instances) or not.
This option is called driver_handles_share_server (or DHSS) and takes a boolean value. We were interested in trying DHSS=true, to really benefit from the potential of the setup. (Manila diagram: NFS idea 6, original image on Wikitech.) So, after sorting all these variables, we moved on with our initial testing. We built a PoC setup as depicted in the diagram above, with the manila-share component running in a virtual machine inside the cloud. The PoC led to us reporting several bugs upstream, and in some cases we tried to address these bugs ourselves. It's worth mentioning that the upstream community was extra-welcoming to us, and we're thankful for that. However, at the end of our three-week period, our Manila setup still wasn't working as expected. Your experience may differ with other drivers, perhaps the ZFSonLinux or the CephFS ones. In general, we were having trouble making the setup work as expected, so we decided to abandon this approach in favor of the other option we were considering at the beginning.

Simple virtual machine serving NFS

The alternative was to create a Nova virtual machine instance by hand and to configure it using puppet. We have been investing in an automation framework lately, so the idea is to not actually create the server by hand. Anyway, the data would be decoupled from the instance into Cinder volumes, which led us to the question we left for later: how should we back up those terabytes of important information? Just to be clear, the backup problem was independent of the above options; with Manila we would still have had to solve the same challenge. We would like to see our data backed up somewhere other than in Ceph. And that's exactly where we are right now. We've been exploring different backup strategies and will finally use the Cinder backup API.

Conclusion

The iteration will end with the dedicated NFS hardware servers being stopped, and the shares being served from within the cloud.
The migration will take some time to happen because we will check and double-check that everything works as expected (including from the performance point of view) before making definitive changes. We already have some plans to make sure our users experience as little service impact as possible. The most troublesome shares will be those related to Toolforge. At some point we will need to disallow writes to the NFS share, rsync the data out of the hardware servers into the Cinder volumes, point the NFS clients to the new virtual machines, and then enable writes again. The main Toolforge share has about 8TB of data, so this will take a while. We will have more updates in the future. Who knows, perhaps our next-next iteration, in a couple of years, will see us adopting OpenStack Manila for good. Featured image credit: File:(from break water) Manila Skyline panoramio.jpg, ewol, CC BY-SA 3.0. This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero González.

11 March 2021

Vincent Fourmond: All tips and tricks about QSoas

I've decided to post regular summaries of all the articles written here about QSoas; this is the first post of this kind. All the articles related to QSoas can be found here also. The articles written here can be separated into several categories.

Tutorials to analyze real data: these are posts about how to reproduce the data analysis of published articles, including links to the original data so you can fully reproduce our results. These posts all have the label tutorial.

All about fits: QSoas has a particularly powerful interface for non-linear least squares minimisations (fits).

Meta-data: meta-data describe the conditions in which experiments were performed.

Quiz and their solutions: quizzes are small problems that take some skill to solve; they can teach you a lot about how to work with QSoas.

Other tips and tricks.

Release announcements: these generally have a lot of general information about the possibilities in QSoas.
About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050-5052. The current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.

23 September 2020

Vincent Fourmond: Tutorial: analyze Km data of CODHs

This is the first post of a series in which we will provide the readers with simple tutorial approaches to reproduce the data analysis of some of our published papers. All our data analysis is performed using QSoas. Today, we will show you how to analyze the experiments we used to characterize the behaviour of an enzyme, the Nickel-Iron CO dehydrogenase IV from Carboxydothermus hydrogenoformans. The experiments we analyze here are described in much more detail in the original publication, Domnik et al, Angewandte Chemie, 2017. The only things you need to know for now are the following: This means that we expect a response of the type: $$i(t) = \frac{i_m}{1 + \frac{K_m}{[\mathrm{CO}](t)}}$$ in which $$[\mathrm{CO}](t) = \begin{cases} 0, & \text{for } t < t_0 \\ C_0 \exp\left(\frac{t_0 - t}{\tau}\right), & \text{for } t \geq t_0 \end{cases}$$ To begin this tutorial, first download the files from the github repository (direct links: data, parameter file and ruby script). Start QSoas, go to the directory where you saved the files, load the data file, and remove spikes in the data using the following commands:
QSoas> cd
QSoas> l Km-CODH-IV.dat
QSoas> R
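As a sanity check, the model defined above can also be evaluated numerically outside QSoas. This Python sketch is my own illustration (the parameter values in the test are arbitrary, not the fitted ones):

```python
import math

def co_concentration(t, t0, c0, tau):
    """[CO](t): zero before t0, then exponential decay from C_0 with
    time constant tau, as in the piecewise definition above."""
    if t < t0:
        return 0.0
    return c0 * math.exp((t0 - t) / tau)

def current(t, im, km, t0, c0, tau):
    """Michaelis-Menten response i(t) = i_m / (1 + K_m/[CO](t)),
    taken as zero when no CO is present."""
    co = co_concentration(t, t0, c0, tau)
    if co == 0.0:
        return 0.0
    return im / (1.0 + km / co)
```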
First fit

Then, to fit the above equation to the data, the simplest is to take advantage of the time-dependent parameters features of QSoas. Run simply:
QSoas> fit-arb im/(1+km/s) /with=s:1,exp
This simply launches the fit interface to fit the exact equations above. The im/(1+km/s) is simply the translation of the Michaelis-Menten equation above, and the /with=s:1,exp specifies that s is the sum of one exponential, as in the definition of $[\mathrm{CO}](t)$ above. Then, load the Km-CODH-IV.params parameter file (using the "Parameters.../Load from file" action at the bottom, or the Ctrl+L keyboard shortcut). Your window should now look like this:
To fit the data, just hit the "Fit" button (or Ctrl+F).

Including an offset

The fit is not bad, but not perfect. In particular, it is easy to see why: the current predicted by the fit goes to 0 at large times, but the actual current is below 0. We therefore need to include an offset to take this into consideration. Close the fit window, and re-run the fit, but now with this command:
QSoas> fit-arb im/(1+km/s)+io /with=s:1,exp
Notice the +io bit that corresponds to the addition of an offset current. Load the base parameters again, run the fit again... Your fit window should now look like this:
See how the offset current is now much better taken into account. Let's talk a bit more about the parameters:

Taking into account mass-transport limitations

However, the fit is still unsatisfactory: the predicted curve fails to reproduce the curvature at the beginning and at the end of the decrease. This is due to issues linked to mass-transport limitations, which are discussed in detail in Merrouch et al, Electrochimica Acta, 2017. In short, what you need to do is to close the fit window again, load the transport.rb Ruby file that contains the definition of the itrprt function, and re-launch the fit window using:
QSoas> ruby-run transport.rb
QSoas> fit-arb itrprt(s,km,nFAm,nFAmu)+io /with=s:1,exp
Load again the parameter file... but this time you'll have to play a bit more with the starting parameters for QSoas to find the right values when you fit. Here are some tips: A successful fit should look like this:
Here you are! I hope you enjoyed analyzing our data, and that it will help you analyze yours! Feel free to comment and ask for clarifications.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050-5052. The current version is 2.2. You can download its source code or buy precompiled versions for MacOS and Windows there.

18 June 2017

Simon Josefsson: OpenPGP smartcard under GNOME on Debian 9.0 Stretch

I installed Debian 9.0 "Stretch" on my Lenovo X201 laptop today. Installation went smooth, as usual. GnuPG/SSH with an OpenPGP smartcard (I use a YubiKey NEO) does not work out of the box with GNOME, though. I wrote about how to fix OpenPGP smartcards under GNOME with Debian 8.0 "Jessie" earlier, and I thought I'd do a similar blog post for Debian 9.0 "Stretch". The situation is slightly different than before (e.g., GnuPG works better but SSH doesn't), so there is some progress. May I hope that Debian 10.0 "Buster" gets this right? Pointers to which package in Debian should have a bug report tracking this issue are welcome (or a pointer to an existing bug report). After first login, I attempt to use gpg --card-status to check if GnuPG can talk to the smartcard.
jas@latte:~$ gpg --card-status
gpg: error getting version from 'scdaemon': No SmartCard daemon
gpg: OpenPGP card not available: No SmartCard daemon
jas@latte:~$ 
This fails because scdaemon is not installed. Isn't a smartcard common enough that this should be installed by default on a GNOME desktop Debian installation? Anyway, install it as follows.
root@latte:~# apt-get install scdaemon
Then try again.
jas@latte:~$ gpg --card-status
gpg: selecting openpgp failed: No such device
gpg: OpenPGP card not available: No such device
jas@latte:~$ 
I believe scdaemon here attempts to use its internal CCID implementation, and I do not know why it does not work. At this point I often recall that I want pcscd installed since I work with smartcards in general.
root@latte:~# apt-get install pcscd
Now gpg --card-status works!
jas@latte:~$ gpg --card-status
Reader ...........: Yubico Yubikey NEO CCID 00 00
Application ID ...: D2760001240102000006017403230000
Version ..........: 2.0
Manufacturer .....: Yubico
Serial number ....: 01740323
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Sex ..............: male
URL of public key : https://josefsson.org/54265e8c.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 8358
Signature key ....: 9941 5CE1 905D 0E55 A9F8  8026 860B 7FBB 32F8 119D
      created ....: 2014-06-22 19:19:04
Encryption key....: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B
      created ....: 2014-06-22 19:19:20
Authentication key: 2E08 856F 4B22 2148 A40A  3E45 AF66 08D7 36BA 8F9B
      created ....: 2014-06-22 19:19:41
General key info..: sub  rsa2048/860B7FBB32F8119D 2014-06-22 Simon Josefsson 
sec#  rsa3744/0664A76954265E8C  created: 2014-06-22  expires: 2017-09-04
ssb>  rsa2048/860B7FBB32F8119D  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/9535162A78ECD86B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/AF6608D736BA8F9B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
jas@latte:~$ 
Using the key will not work though.
jas@latte:~$ echo foo | gpg -a --sign
gpg: no default secret key: No secret key
gpg: signing failed: No secret key
jas@latte:~$ 
This is because the public key and the secret key stub are not available.
jas@latte:~$ gpg --list-keys
jas@latte:~$ gpg --list-secret-keys
jas@latte:~$ 
You need to import the key for this to work. I have some vague memory that gpg --card-status was supposed to do this, but I may be wrong.
jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or directory
gpg: connecting dirmngr at '/run/user/1000/gnupg/S.dirmngr' failed: No such file or directory
gpg: keyserver receive failed: No dirmngr
jas@latte:~$ 
Surprisingly, dirmngr is also not shipped by default so it has to be installed manually.
root@latte:~# apt-get install dirmngr
Below I proceed to trust the clouds to find my key.
jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: key 0664A76954265E8C: public key "Simon Josefsson " imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1
jas@latte:~$ 
Now the public key and the secret key stub are available locally.
jas@latte:~$ gpg --list-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
pub   rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
sub   rsa2048 2014-06-22 [S] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [E] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [A] [expires: 2017-09-04]
jas@latte:~$ gpg --list-secret-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
sec#  rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
ssb>  rsa2048 2014-06-22 [S] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [E] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [A] [expires: 2017-09-04]
jas@latte:~$ 
I am now able to sign data with the smartcard, yay!
jas@latte:~$ echo foo | gpg -a --sign
-----BEGIN PGP MESSAGE-----
owGbwMvMwMHYxl2/2+iH4FzG01xJDJFu3+XT8vO5OhmNWRgYORhkxRRZZjrGPJwQ
yxe68keDGkwxKxNIJQMXpwBMRJGd/a98NMPJQt6jaoyO9yUVlmS7s7qm+Kjwr53G
uq9wQ+z+/kOdk9w4Q39+SMvc+mEV72kuH9WaW9bVqj80jN77hUbfTn5mffu2/aVL
h/IneTfaOQaukHij/P8A0//Phg/maWbONUjjySrl+a3tP8ll6/oeCd8g/aeTlH79
i0naanjW4bjv9wnvGuN+LPHLmhUc2zvZdyK3xttN/roHvsdX3f53yTAxeInvXZmd
x7W0/hVPX33Y4nT877T/ak4L057IBSavaPVcf4yhglVI8XuGgaTP666Wuslbliy4
5W5eLasbd33Xd/W0hTINznuz0kJ4r1bLHZW9fvjLduMPq5rS2co9tvW8nX9rhZ/D
zycu/QA=
=I8rt
-----END PGP MESSAGE-----
jas@latte:~$ 
Encrypting to myself will not work smoothly though.
jas@latte:~$ echo foo | gpg -a --encrypt -r simon@josefsson.org
gpg: 9535162A78ECD86B: There is no assurance this key belongs to the named user
sub  rsa2048/9535162A78ECD86B 2014-06-22 Simon Josefsson 
 Primary key fingerprint: 9AA9 BDB1 1BB1 B99A 2128  5A33 0664 A769 5426 5E8C
      Subkey fingerprint: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B
It is NOT certain that the key belongs to the person named
in the user ID.  If you *really* know what you are doing,
you may answer the next question with yes.
Use this key anyway? (y/N) 
gpg: signal Interrupt caught ... exiting
jas@latte:~$ 
The reason is that the newly imported key has unknown trust settings. I update the trust settings on my key to fix this, and encrypting now works without a prompt.
jas@latte:~$ gpg --edit-key 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Secret key is available.
pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 
gpg> trust
pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 
Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)
  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu
Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y
pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: ultimate      validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 
Please note that the shown key validity is not necessarily correct
unless you restart the program.
gpg> quit
jas@latte:~$ echo foo | gpg -a --encrypt -r simon@josefsson.org
-----BEGIN PGP MESSAGE-----
hQEMA5U1Fip47NhrAQgArTvAykj/YRhWVuXb6nzeEigtlvKFSmGHmbNkJgF5+r1/
/hWENR72wsb1L0ROaLIjM3iIwNmyBURMiG+xV8ZE03VNbJdORW+S0fO6Ck4FaIj8
iL2/CXyp1obq1xCeYjdPf2nrz/P2Evu69s1K2/0i9y2KOK+0+u9fEGdAge8Gup6y
PWFDFkNj2YiVa383BqJ+kV51tfquw+T4y5MfVWBoHlhm46GgwjIxXiI+uBa655IM
EgwrONcZTbAWSV4/ShhR9ug9AzGIJgpu9x8k2i+yKcBsgAh/+d8v7joUaPRZlGIr
kim217hpA3/VLIFxTTkkm/BO1KWBlblxvVaL3RZDDNI5AVp0SASswqBqT3W5ew+K
nKdQ6UTMhEFe8xddsLjkI9+AzHfiuDCDxnxNgI1haI6obp9eeouGXUKG
=s6kt
-----END PGP MESSAGE-----
jas@latte:~$ 
So everything is fine, isn't it? Alas, not quite.
jas@latte:~$ ssh-add -L
The agent has no identities.
jas@latte:~$ 
Tracking this down, I now realize that GNOME's keyring is used for SSH but GnuPG's gpg-agent is used for GnuPG. GnuPG uses the environment variable GPG_AGENT_INFO to connect to an agent, and SSH uses the SSH_AUTH_SOCK environment variable to find its agent. The filenames used below leak the knowledge that gpg-agent is used for GnuPG but GNOME keyring is used for SSH.
jas@latte:~$ echo $GPG_AGENT_INFO 
/run/user/1000/gnupg/S.gpg-agent:0:1
jas@latte:~$ echo $SSH_AUTH_SOCK 
/run/user/1000/keyring/ssh
jas@latte:~$ 
Here the same recipe as in my previous blog post works. This time GNOME keyring only has to be disabled for SSH. Disabling GNOME keyring is not sufficient; you also need gpg-agent to start with enable-ssh-support. The simplest way to achieve that is to add a line to ~/.gnupg/gpg-agent.conf as follows. When you log in, the script /etc/X11/Xsession.d/90gpg-agent will set the environment variables GPG_AGENT_INFO and SSH_AUTH_SOCK. The latter variable is only set if enable-ssh-support is mentioned in the gpg-agent configuration.
jas@latte:~$ mkdir ~/.config/autostart
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-ssh.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-ssh.desktop 
jas@latte:~$ echo enable-ssh-support >> ~/.gnupg/gpg-agent.conf 
jas@latte:~$ 
Log out from GNOME and log in again. Now you should see ssh-add -L working.
jas@latte:~$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFP+UOTZJ+OXydpmbKmdGOVoJJz8se7lMs139T+TNLryk3EEWF+GqbB4VgzxzrGjwAMSjeQkAMb7Sbn+VpbJf1JDPFBHoYJQmg6CX4kFRaGZT6DHbYjgia59WkdkEYTtB7KPkbFWleo/RZT2u3f8eTedrP7dhSX0azN0lDuu/wBrwedzSV+AiPr10rQaCTp1V8sKbhz5ryOXHQW0Gcps6JraRzMW+ooKFX3lPq0pZa7qL9F6sE4sDFvtOdbRJoZS1b88aZrENGx8KSrcMzARq9UBn1plsEG4/3BRv/BgHHaF+d97by52R0VVyIXpLlkdp1Uk4D9cQptgaH4UAyI1vr cardno:000601740323
jas@latte:~$ 
Topics for further discussion or research include 1) whether scdaemon, dirmngr and/or pcscd should be pre-installed on Debian desktop systems; 2) whether gpg --card-status should attempt to import the public key and secret key stub automatically; 3) why GNOME keyring is used by default for SSH rather than gpg-agent; 4) whether GNOME keyring should support smartcards, or if it is better to always use gpg-agent for GnuPG/SSH; 5) whether something could or should be done to automatically infer the trust setting for a secret key. Enjoy!
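On that last point, GnuPG does already have a scriptable alternative to the interactive gpg> trust menu: the ownertrust export/import format. As a sketch (assuming GnuPG 2.x is installed; the fingerprint is the one from the transcript above, and a throwaway GNUPGHOME is used so the real keyring is untouched):

```shell
# Sketch: set ultimate ownertrust non-interactively via --import-ownertrust.
# A throwaway GNUPGHOME keeps the real keyring and trustdb unmodified.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Ownertrust lines have the form "<fingerprint>:<value>:", where 6 = ultimate.
echo "9AA9BDB11BB1B99A21285A330664A76954265E8C:6:" | gpg --import-ownertrust

# Verify that the trust database now records the setting.
gpg --export-ownertrust
```

Here 6 corresponds to the "ultimate" choice in the interactive menu, so a provisioning script could apply the trust setting right after importing the public key.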

2 January 2015

Simon Josefsson: OpenPGP Smartcards and GNOME

The combination of GnuPG and an OpenPGP smartcard (such as the YubiKey NEO) has been implemented and working well for around a decade. I recall starting to use one when I received an FSFE Fellowship card a long time ago. Sadly, there have been some regressions when using them under GNOME recently. I reinstalled my laptop with Debian Jessie (beta2) recently, and now took the time to work through the issue and write down a workaround. To work with GnuPG and smartcards you install the GnuPG agent, scdaemon, pcscd and pcsc-tools. On Debian you can do it like this:
apt-get install gnupg-agent scdaemon pcscd pcsc-tools
Use the pcsc_scan command line tool to make sure pcscd recognizes the smartcard before continuing; if it does not recognize the smartcard, nothing beyond this point will work. The next step is to make sure you have the following line in ~/.gnupg/gpg.conf:
use-agent
Logging out and into GNOME should start gpg-agent for you, through the /etc/X11/Xsession.d/90gpg-agent script. In theory, this should be all that is required. However, when you start a terminal and attempt to use the smartcard through GnuPG you would get an error like this:
jas@latte:~$ gpg --card-status
gpg: selecting openpgp failed: unknown command
gpg: OpenPGP card not available: general error
jas@latte:~$
The reason is that the GNOME Keyring hijacks the GnuPG agent's environment variables and effectively replaces gpg-agent with gnome-keyring-daemon, which does not support smartcard commands (Debian bug #773304). GnuPG uses the environment variable GPG_AGENT_INFO to find the location of the agent socket, and when the GNOME Keyring is active it will typically look like this:
jas@latte:~$ echo $GPG_AGENT_INFO 
/run/user/1000/keyring/gpg:0:1
jas@latte:~$ 
If you use GnuPG with a smartcard, I recommend disabling GNOME Keyring's GnuPG and SSH agent emulation code. This used to be easy to achieve in older GNOME releases (e.g., the one included in Debian Wheezy) through the gnome-session-properties GUI. Sadly, there is no longer any GUI for disabling this functionality (Debian bug #760102). The GNOME Keyring GnuPG/SSH agent replacement functionality is invoked through the XDG autostart mechanism, and the documented way to disable system-wide services for a normal user account is to invoke the following commands.
jas@latte:~$ mkdir ~/.config/autostart
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-gpg.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-gpg.desktop 
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-ssh.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-ssh.desktop 
jas@latte:~$ 
You now need to log out and log in again. When you start a terminal, you can look at the GPG_AGENT_INFO environment variable again and everything should be working.
jas@latte:~$ echo $GPG_AGENT_INFO 
/tmp/gpg-dqR4L7/S.gpg-agent:1890:1
jas@latte:~$ echo $SSH_AUTH_SOCK 
/tmp/gpg-54VfLs/S.gpg-agent.ssh
jas@latte:~$ gpg --card-status
Application ID ...: D2760001240102000060000000420000
...
jas@latte:~$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFP+UOTZJ+OXydpmbKmdGOVoJJz8se7lMs139T+TNLryk3EEWF+GqbB4VgzxzrGjwAMSjeQkAMb7Sbn+VpbJf1JDPFBHoYJQmg6CX4kFRaGZT6DHbYjgia59WkdkEYTtB7KPkbFWleo/RZT2u3f8eTedrP7dhSX0azN0lDuu/wBrwedzSV+AiPr10rQaCTp1V8sKbhz5ryOXHQW0Gcps6JraRzMW+ooKFX3lPq0pZa7qL9F6sE4sDFvtOdbRJoZS1b88aZrENGx8KSrcMzARq9UBn1plsEG4/3BRv/BgHHaF+d97by52R0VVyIXpLlkdp1Uk4D9cQptgaH4UAyI1vr cardno:006000000042
jas@latte:~$ 
That's it. Resolving this properly involves 1) adding smartcard code to the GNOME Keyring, 2) disabling the GnuPG/SSH replacement code in GNOME Keyring completely, 3) reordering the startup so that gpg-agent supersedes gnome-keyring-daemon instead of vice versa, so that people who installed gpg-agent really get it instead of the GNOME default, or 4) something else. I don't have a strong opinion on how to solve this, but 3) sounds like a simple way forward.

26 October 2014

Hideki Yamane: Open Source Conference 2014 Tokyo/Fall


On 18th and 19th October, "Open Source Conference 2014 Tokyo/Fall" was held at Meisei University, Tokyo. About 1,500 people participated. The "Tokyo area Debian Study Meeting" booth was there, providing some flyers, DVDs and chat.




In our Debian community session, Nobuhiro Iwamatsu talked about the status of Debian 8 "Jessie". Thanks, Nobuhiro :)


It seemed to be not so much a "conference" as a festival for FOSS and other IT community members, so everyone enjoyed it a lot.





... and we also enjoyed the beer after-party (of course :)




See you at the next event!

22 June 2014

Simon Josefsson: OpenPGP Key Transition Statement

I have created a new OpenPGP key 54265e8c and will be transitioning away from my old key. If you have signed my old key, I would appreciate signatures on my new key as well. I have created a transition statement that can be downloaded from https://josefsson.org/key-transition-2014-06-22.txt. Below is the signed statement.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
OpenPGP Key Transition Statement for Simon Josefsson
I have created a new OpenPGP key and will be transitioning away from
my old key.  The old key has not been compromised and will continue to
be valid for some time, but I prefer all future correspondence to be
encrypted to the new key, and will be making signatures with the new
key going forward.
I would like this new key to be re-integrated into the web of trust.
This message is signed by both keys to certify the transition.  My new
and old keys are signed by each other.  If you have signed my old key,
I would appreciate signatures on my new key as well, provided that
your signing policy permits that without re-authenticating me.
The old key, which I am transitioning away from, is:
pub   1280R/B565716F 2002-05-05
      Key fingerprint = 0424 D4EE 81A0 E3D1 19C6  F835 EDA2 1E94 B565 716F
The new key, to which I am transitioning, is:
pub   3744R/54265E8C 2014-06-22
      Key fingerprint = 9AA9 BDB1 1BB1 B99A 2128  5A33 0664 A769 5426 5E8C
The entire key may be downloaded from: https://josefsson.org/54265e8c.txt
To fetch the full new key from a public key server using GnuPG, run:
  gpg --keyserver keys.gnupg.net --recv-key 54265e8c
If you already know my old key, you can now verify that the new key is
signed by the old one:
  gpg --check-sigs 54265e8c
If you are satisfied that you've got the right key, and the User IDs
match what you expect, I would appreciate it if you would sign my key:
  gpg --sign-key 54265e8c
You can upload your signatures to a public keyserver directly:
  gpg --keyserver keys.gnupg.net --send-key 54265e8c
Or email simon@josefsson.org (possibly encrypted) the output from:
  gpg --armor --export 54265e8c
If you'd like any further verification or have any questions about the
transition please contact me directly.
To verify the integrity of this statement:
  wget -q -O- https://josefsson.org/key-transition-2014-06-22.txt | gpg --verify
/Simon
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
iLwEAQEKAAYFAlOnV+AACgkQ7aIelLVlcW89XgUAljJgYfReyR9/bU+Om6UHUttt
CAOgSRqdcQSQ2hT69vzuhb/bc8CslIQcBtGqTgxDFsxEFhbm5zKn+tSzy5MHNHqt
MsqHcZjlYuYVhMXDhka+cfyhtd9zIxjVE5vk8v+GqEGoh8DGYq0vPy3VfvcSz5Z3
MSUpSj8gN00jlU1z4nad3maEq0ApvsLr8EsLZmtxF5TNFvzJ8mmwY+gHBGHjVYkB
8AQBAQoABgUCU6dX4AAKCRAGZKdpVCZejD1eDp46XGL2puMp0le2OF75WIUW8xqf
TMiZeB99ruk3P/jvuLnGPP2J5o7SIKE50FkMEss0yvxi6jBlHk+cJeKWGXVjBpxU
0QHq063NU+kjbMYwDfi5ZxXqaKeYODJm8Xmfh3d7lRaWF5rUOosR8nC/OROSrhg4
TjlAbvbxpQsls/JPbbporK2gbAtMlzJPD8zC8z/dT+t0qjlce8fADugblVW3bACC
Kl53X4XpojzNd/U19tSXkIBdNY/GVJqci+iruiJ1WGARF9ocnIXVuNXsfyt7UGq4
UiM/AeDVzI76v1QnE8WpsmSXzi2zXe3VahUPhOU2nPDoL53ggiVsTY3TwilvQLfX
Av/74PIaEtCi1g23YeojQlpdYzcWfnE+tUyTSNwPIBzyzHvFAHNg1Pg0KKUALsD9
P7EjrMuz63z2276EBKX8++9GnQQNCNfdHSuX4WGrBx2YgmOOqRdllMKz6pVMZdJO
V+gXbCMx0D5G7v50oB58Mb5NOgIoOnh3IQhJ7LkLwmcdG39yCdpU+92XbAW73elV
kmM8i0wsj5kDUU2ys32Gj2HnsVpbnh3Fvm9fjFJRbbQL/FxNAjzNcHe4cF3g8hTb
YVJJlzhmHGvd7HvXysJJaa0=
=ZaqY
-----END PGP SIGNATURE-----

30 January 2014

Russell Coker: The Movie Experience

Phandroid has one of many articles about a man being detained for wearing Google Glass in a cinema [1]. The article states as a fact that it's probably not smart to bring a recording device into a movie theater, which is totally bogus. I've visited a government office where recording devices were prohibited; they provided a locker for me to store everything that could be used for electronic storage outside their main security zone. That's what you do when you ban recording devices. Any place that doesn't have such facilities really isn't banning recording. The Gadgeteer has the original story with more detail, with an update showing that the Department of Homeland Security was responsible for detaining the victim [2]. There are lots of issues here, with the DHS continuing to do nothing good and more bad things than most people suspect, and with the music and film industry organisations attacking innocent people. But one thing that seems to be ignored is that movies are a recreational activity, so it's an experience that they are selling, not just a movie. Any organisation that wants to make money out of movies really should be trying to make movies fun. The movie experience has always involved queuing, paying a lot of money for tickets ($20 per seat seems common), buying expensive drinks/snacks, and having to waste time on anti-piracy adverts. Now they are adding the risk of assault, false arrest, and harassment under color of law to the down-sides of watching a movie. Downloading a movie via Bittorrent takes between 20 minutes and a few hours (depending on size and internet connectivity). Sometimes it can be quicker to download a movie than to drive to a cinema, and if you are organising a group to watch a movie it will definitely be easier to download it. When you watch a movie at home you can pause it for a toilet break and consume alcoholic drinks while watching (I miss the Dutch cinemas where an intermission and a bar were standard features).
It's just a better experience to download a movie via Bittorrent. I've previously written about the way that downloading movies is better than buying a DVD [3]; now they are making the cinema a worse experience too. I sometimes wonder if groups like the MPAA are actually trying to make money from movies or whether they just want to oppress their audiences for fun or psychological research. I could imagine someone like the young Philip Zimbardo working for the MPAA and doing experiments to determine how badly movie industry employees can treat their customers before the customers revolt. Anyone who watches a Jack Ryan movie (or any movie with a Marty-Stu/Gary-Stu character) obviously doesn't even want to experience the stress of an unhappy ending to a movie. It seems obvious that such people won't want the stress of potentially being assaulted in the cinema. In terms of economics it seems a bad idea to do anything about recording in the cinema. When I was 11 I was offered the opportunity to watch a movie that had been recorded by a video camera in the US before it was released in Australia; I wasn't interested because watching a low quality recording wouldn't be fun. It seems to me that if The Pirate Bay (the main site for Bittorrent downloads of movies) [4] was filled with awful camera recordings of movies then it would discourage people from using it. A quick search shows some camera recordings on The Pirate Bay, so it seems that if you want to download a movie of reasonable quality then you have to read the Wikipedia page about Pirated Movie Release Types [5] to make sure that you get a good quality download. But if you buy a DVD in a store or visit a cinema then you are assured of image and sound quality. If the movie industry were smarter they would start uploading camera recordings of movies described as Blu-Ray rips to mess with Bittorrent users and put newbies off downloading movies.

3 January 2013

Richard Hartmann: GnuPG key transition statement

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1,SHA512
I am transitioning GPG keys from an old 1024-bit DSA key to a new
4096-bit RSA key.  The old key will continue to be valid for some
time, but I prefer all new correspondance to be encrypted to the new
key, and will be making all signatures going forward with the new key.
This transition document is signed with both keys to validate the
transition.
If you have signed my old key, I would appreciate signatures on my new
key as well, provided that your signing policy permits that without
re-authenticating me.
The old key, which I am transitioning away from, is:
  pub   1024D/DFCA34A3 2007-10-07 [expires: 2013-10-24]
    Key fingerprint = FE23 BE62 DF18 72FB C58D  D637 FFAE 0427 DFCA 34A3
The new key, to which I am transitioning, is:
  pub   4096R/95206DD3 2013-01-02 [expires: 2016-01-02]
    Key fingerprint = DF0B FDFF 4A4D DA01 7944  1B8F 6906 4B01 9520 6DD3
To fetch the full new key from a public key server using GnuPG, run:
  gpg --keyserver keys.gnupg.net --recv-key 95206DD3
If you have already validated my old key, you can then validate that
the new key is signed by my old key:
  gpg --check-sigs 95206DD3
If you then want to sign my new key, a simple and safe way to do that
is by using caff (shipped in Debian as part of the "signing-party"
package) as follows:
  caff 95206DD3
Please contact me via e-mail at <richih.mailinglist@gmail.com>
if you have any questions about this document or this transition.
    Richard Hartmann
    richih.mailinglist@gmail.com
    2013-01-03
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
iEYEARECAAYFAlDk27gACgkQ/64EJ9/KNKMGDgCeLyQy6yhEu3OB8UarJ6LYOfVY
ihgAnRSbYMrVxK+28souMHiLx4y0rSQ+iQIcBAEBCgAGBQJQ5Nu4AAoJEGkGSwGV
IG3TcbYP/je8mu+MCIiNNY8w/gN+/9S8gEGWd2p9LeN71xpCuSpBVSevAxxStV9R
2XRTfI8tf/nk27y+4073IscUTPaBkb2IhvIfCnFmOtZjP9slS2JMHOylqH3KLulQ
Cm/IbpKCAwTmQtRMzFxKoRjNPJDCVzZYpoD9r7TmIrs+iKtDQ/w/zQmtwEFWncYK
2JJqVEd+qN9REpWlKDDifcUDPGiugsICelAaUF8VMvp2ouVxQTGbz53IdAgvrX9G
vM+BAfel3W73nfF3L+08vgad0BGGMnEWcfcZ6r+DKu2WXgC8AXT0Y32sD8TiGJSd
JrWMqwec1iOd6lfOpFDdhSCFyMl5r1NiOLw5iNsETHYjysW2N1wZCicmF/8ukm/y
EpIXf3z8vtLrofGLN1jTC34Jugts8C5NNRil195Hkim36t2F97OJswNLhxt7nQ41
vhyXaBK3kwOZ1nVp6xM/yMnQzi5eVO1W/BE8Wr27O6pe3KqEx0Lv8zBzQjj4ly64
shyNRY3gzxs/KZ3p/3nrSn1AYOXIuCM3nILh89qQ/ynotrqYmpFkUorUSSOvraoQ
LN5IPH0wh8F2eDUCui2fGDEmnQ5S5mlHnIZmL3gkPVGLhNaoFQh0R8UwUO6HIxMQ
cTxv5TGL1YWtIvl0bvlEXK6MA4g0ZBXk15JZkWUD84zycr5cb79d
=DFzt
-----END PGP SIGNATURE-----

13 October 2012

Vasudev Kamath: Weekly Log - 06/10-13/10/2012

Update for 06/10/2012: Well, I had not written the weekly work log for the last week. That is because the week was short (oh yeah, thanks to all these bandhs our week got fluctuated and, to be frank, the week was only 4 days long), and second was my laziness. Here goes the update:
  * After a long discussion with Aravinda on "why productivity is reducing these days", we concluded that social networks are eating up most of our time. So a decision was made and I closed all pinned tabs for Twitter, Gmail, Identi.ca and Friendica in my browser.
  * After the above resolution I finished almost 4 chapters of Modern Perl within 2 hours! Indeed, social networks kill productivity.
Update for 13/10/2012: I've really fallen in love with computerised bots, thanks to the wonderful KGB bot :-). So the majority of my work this week was on bots. Jabber Dictionary Redesign: After thinking for a while I decided to re-write the dictionary bot, which when released got an overwhelming response, as seen in the comments of the above link. A few reasons for the re-write:
  1. Generalise the bot framework so single bot can handle multiple languages
  2. Improve the data collection on bot side
  3. The current code base was not very well organised, and with attempts to add more features it had become messy.
  4. Provide XEP support to help in data collections
A few changes which are already implemented include:
  1. The new code now uses the pyxmpp2 library instead of the GPLed xmpppy used by the current code.
  2. Implemented the XEP-0071 extension to properly format the meanings displayed by the bot.
  3. The current implementation was displaying all words in one set without distinguishing between adjectives, verbs, proverbs etc., even though wiktionary displays meanings based on this. The new implementation gives out meanings in the same format as they are displayed on wiktionary.
Things remaining to do:
  1. Separating the wiktionary parsing logic from the bot code by providing some sort of intermediate interface between the bot and the wiktionary parsers.
  2. Adding more language wiktionary parsers and teaching the bot to become multilingual :-)
  3. Integrating XEP-0004 (Data Forms) for taking the meaning input from users. The current code requires the user to enter data in a particular format.
suckless-tools fix for Wheezy: Jakub Wilk suggested that I prepare a minimal version of suckless-tools for Wheezy which includes a patch for bug #685611, and a few minor changes which involve taking over the package and fixes in the copyright file other than the above mentioned bug. Hopefully the release team will be okay with these changes. I'm waiting for the upload to file an unblock request. I did face a problem: I was halfway through with suckless-tools_39-1 when Jakub asked me about this change, and the current repository was a fresh one prepared for 39-1 and didn't have history for the 38 version. First I thought of preparing a separate repository for the 38 version, which was not a correct option, but I couldn't play with the current repository either. So finally I renamed the current version of the repository to suckless-tools-39.git on collab-maint and prepared a fresh repository suckless-tools.git, basing it on the 38-1 version. From version 39, suckless-tools will be following the 3.0 (quilt) source format and will not work with git-buildpackage, as the tool can't handle multi-tarball packages. Yes, every tool involved in the package will have a separate tarball from 39. More work on Bots: Well, today I again worked on a Jabber bot, but not the dictionary bot. This time it's an SMS gateway bot for Jonas Smedegaard, and the coding was done in Perl. Thanks to Jonas I finally could apply what I learnt in Perl. The code may not be very elegant but it works :-). And after hacking one full day in Perl I'm not feeling very interested to go back and hack on my own Python based dictionary bot :-). But I will anyway. Misc: So that's it, folks. Quite a longish post; hope you are not bored reading it :-). Well, time for a movie. C'ya all with next week's log.

26 January 2012

Russell Coker: Links January 2012

Cops in Tennessee routinely steal cash from citizens [1]. They are ordered to do so and in some cases their salary is paid from the cash that they take. So they have a good reason to imagine that any large sum of money is drug money and take it. David Frum wrote an insightful article for NY Mag about the problems with the US Republican Party [2]. TreeHugger.com has an interesting article about eco-friendly features on some modern cruise ships [3]. Dan Walsh describes how to get the RSA SecurID PAM module working on a SE Linux system [4]. It's interesting that RSA was telling everyone to turn off SE Linux and shipping a program that was falsely marked as needing an executable stack and which uses netstat instead of /dev/urandom for entropy. Really the only way RSA could do worse would be to fall victim to an Advanced Persistent Attack :-# The Long Now has an interesting summary of a presentation about archive.org [5]. I never realised the range of things that archive.org stores; I will have to explore that if I find some spare time! Jonah Lehrer wrote a detailed and informative article about the way that American high school students receive head injuries playing football [6]. He suggests that it might eventually be the end of the game as we know it. François Marier wrote an informative article about optimising PNG files [7]; optipng is apparently the best option at the moment but it doesn't do everything you might want. Helen Keeble wrote an interesting review of Twilight [8]. The most noteworthy thing about it IMHO is that she tries to understand teenage girls who like the books and movies. Trying to understand young people is quite rare. Jon Masters wrote a critique of the concept of citizen journalism and described how he has two subscriptions to the NYT as a way of donating to support quality journalism [9]. The only comment on his post indicates a desire for biased news (such as Fox), which shows the reason why most US media is failing at journalism.
Luis von Ahn gave an interesting TED talk about crowd-sourced translation [10]. He starts by describing CAPTCHAs and the way that his company ReCAPTCHA provides the CAPTCHA service while also using people's time to digitise books. Then he describes his online translation service and language education system DuoLingo, which allows people to learn a second language for free while translating text between languages [11]. One of the benefits of this is that people don't have to pay to learn a new language and thus poor people can learn other languages, which is great for people in developing countries that want to learn first-world languages! DuoLingo is in a beta phase at the moment but they are taking some volunteers. Cory Doctorow wrote an insightful article for Publishers Weekly titled Copyrights vs Human Rights [12] which is primarily about SOPA. Naomi Wolf wrote an insightful article for The Guardian about the Occupy movement; among other things, the highest levels of the US government are using the DHS as part of the crackdown [13]. Naomi's claim is that the right-wing and government attacks on the Occupy movement are due to the fact that they want to reform the political process and prevent corruption. John Bohannon gave an interesting and entertaining TED talk about using dance as part of a presentation [14]. He gave an example of using dancers to illustrate some concepts related to physics and then spoke about the waste of PowerPoint. Joe Sabia gave an amusing and inspiring TED talk about the technology of storytelling [15]. He gave the presentation with live actions on his iPad to match his words, a difficult task to perform successfully. Thomas Koch wrote an informative post about some of the issues related to binary distribution of software [16]. I think the problem is even worse than Thomas describes. Related posts:
  1. Links January 2011 Halla Tomasdottir gave an interesting TED talk about her financial...
  2. Links January 2010 Magnus Larsson gave an interesting TED talk about using bacteria...
  3. Links January 2009 Jennifer 8 Lee gave an interesting TED talk about the...

19 October 2011

Steve Langasek: Debian: not stale, just hardened

Raphaël Hertzog recently announced a new dpkg-buildflags interface in dpkg that at long last gives the distribution, the package maintainers, and users the control they want over the build flags used when building packages. The announcement mail gives all the gory details about how to invoke dpkg-buildflags in your build to be compliant; but the nice thing is, if you're using dh(1) with debian/compat=9, debhelper does it for you automatically so long as you're using a build system that it knows how to pass compiler flags to. So for the first time, /usr/share/doc/debhelper/examples/rules.tiny can now be used as-is to provide a policy-compliant package by default (setting -g -O2 or -g -O0 for your build regardless of how debian/rules is invoked). Of course, none of my packages actually work that way; among other things I have a habit of liberally sprinkling DEB_CFLAGS_MAINT_APPEND := -Wall in my rules, and sometimes DEB_LDFLAGS_MAINT_APPEND := -Wl,-z,defs and DEB_CFLAGS_MAINT_APPEND := $(shell getconf LFS_CFLAGS) as well. And my upstreams' build systems rarely work 100% out of the box with dh_auto_* without one override or another somewhere. So in practice, the shortest debian/rules file in any of my packages seems to be 13 lines currently. But that's 13 lines of almost 100% signal, unlike the bad old days of cut'n'pasted dh_* command lists. The biggest benefit, though, isn't in making it shorter to write a rules file with the old, standard build options. The biggest benefit is that dpkg-buildflags now also outputs build-hardening compiler and linker flags by default on Debian. Specifically, using the new interface lets you pick up all of these hardening flags for free:
-fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security -Wl,-z,relro
It also lets you get -fPIE and -Wl,-z,now by adding this one line to your debian/rules (assuming you're using dh(1) and compat 9):
export DEB_BUILD_MAINT_OPTIONS := hardening=+pie,+bindnow
Converting all my packages to use dh(1) has always been a long-term goal, but some packages are easier to convert than others. This was the tipping point for me, though. Even though debhelper compat level 9 isn't yet frozen, meaning there might still be other behavior changes to it that will make more work for me between now and release, over the past couple of weekends I've been systematically converting all my packages to use it with dh. In particular, pam and samba have been rebuilt to use the default hardening flags, and openldap uses these flags plus PIE support. (Samba already builds with PIE by default courtesy of upstream.) You can't really make out samba and openldap on the graph, but they're there (with their rules files reduced by 50% or more).

I cannot overstate the significance of proactive hardening. There have been a number of vulnerabilities over the past few years that have been thwarted on Ubuntu because Ubuntu is using -fstack-protector by default. Debian has a great security team that responds quickly to these issues as soon as they're revealed, but we don't always get to find out about them before they're already being exploited in the wild. In this respect, Debian has lagged behind other distros. With dpkg-buildflags, we now have the tools to correct this. It's just a matter of getting packages to use the new interfaces.

If you're a maintainer of a security sensitive package (such as a network-facing daemon or a setuid application), please enable dpkg-buildflags in your package for wheezy! (Preferably with PIE as well.) And if you don't maintain security sensitive packages, you can still help out with the hardening release goal.
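Putting the pieces above together, a minimal hardened rules file in the rules.tiny style might look like the sketch below (the -Wall addition is my illustration of a maintainer append, not something every package needs):

```make
#!/usr/bin/make -f

# Opt in to the extra hardening features on top of the compat-9 defaults.
export DEB_BUILD_MAINT_OPTIONS := hardening=+pie,+bindnow
# Maintainer flags are appended after the distribution's defaults.
export DEB_CFLAGS_MAINT_APPEND := -Wall

%:
	dh $@
```

With compat level 9, dh passes the dpkg-buildflags output to any build system it knows how to drive, so no explicit flag plumbing is needed in the rules file itself.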

16 April 2011

Timo Jyrinki: MeeGo Summit FI Days 1 & 2

MeeGo Summit FI is now nearing completion, with several keynotes and other presentations given, the 24-hour Meegathon contest just coming to an end, and a lot of interesting discussions had. See the full program for details. Yesterday was a hugely energetic day, but today the lack of sleep is starting to kick in a bit, at least for me.

Some highlights via photos:

- Keynote venue was a movie theater
- MeeGo status update by Valtteri Halla / Intel, talking among other things about tablets, IVI, and the 20-person team at Nokia doing MeeGo(.com) for the N900 phone
- Mikko Terho / Nokia: "Internet for the next billion => Qt good candidate", "code wins politics and standards"
- Carsten Munk / Nomovok: "Hacking your existence: the importance of open-ended devices in the MeeGo world"
- In addition to the MeeGo tablet demonstrations, a Wayland compositor was demoed by a Nomovok employee
- One of the many Qt / QML related talks was held by Tapani Mikola / Nokia
- Evening party

Day 2 started with a few more presentations, and the Finhack event launching in the Protomo room as well.

Still remaining for the day are the Meegathon demonstrations (actually I'm already following those while finishing this - cool demos!), the Meegathon awards, a panel discussion on "MeeGo, Nokia, Finns - finished? Can MeeGo be important in Finland without being inside Nokia's core?", BoF sessions, and finally the Intel AppUp Application Lab, including some MeeGo tablet give-outs.

Thanks to the organizers, many of whom were volunteers. The event has run completely smoothly, which comes as no big surprise after last summer's hugely successful Akademy 2010, also held in Tampere.

18 December 2010

Felipe Sateler: CDBS -- An introduction

It seems the current trend is to use short-form dh. Some people have even thought that dh has superseded CDBS. Since I prefer CDBS for my own packaging, I will say that no, CDBS is not being deprecated and is in fact actively maintained. Packaging with CDBS is very simple. I'll try to explain how it works, and how to package with it. This may turn into a blog post series, but I won't promise anything.

CDBS is a set of makefiles that handle several tasks that are common when packaging software, so that you don't have to repeat them over and over in each package. The makefiles fall into two groups: classes and rules. Classes implement the rules required for building and installing software. They are classes because they can and do inherit from others: for example, there is the makefile class; the autotools, qmake and cmake classes inherit from it, and the gnome and kde (which is for KDE 3) classes inherit from the autotools class. Rules implement various other general-purpose rules that don't depend on the toolchain used to build the package. For example, there is the debhelper rules file, which takes care of creating the Debian package using the usual dh_* commands. There are several rules files which do all sorts of useful stuff, from running license checks to downloading new upstream releases. If this post turns into a series, we may look into some of them. For now, I'll demonstrate how to use CDBS to package some simple software (only the debian/rules file, of course).

The first task when creating debian/rules is determining which build system the software uses. I will use the qutecsound package as a guide. So, first things first, we start using CDBS!
#!/usr/bin/make -f

# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1

include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/class/qmake.mk


Now this would be all that is required if qutecsound were a standard qmake package in which nothing needs to be customized. Alas, this is not the case. First, there are some variables that need to be passed to qmake to configure the build appropriately. Also, the project's .pro file does not have the standard name. How do we fix this? The CDBS way of overriding this kind of behavior is through variables. Since we are using the qmake class, we will be overriding variables named DEB_QMAKE_*:
DEB_QMAKE_CONFIG_VAL += build64
DEB_QMAKE_ARGS = qcs.pro


We add a configuration value (because CDBS is setting other config values), and we set the extra arguments to qmake.

Now, there is a problem: the resulting binary is not created with the name we want. How do we change that? We could patch the source to avoid that, but it is far easier to add a rule to the makefile to do that. We will modify the build to rename the file at the end (and then clean it up on clean because the upstream makefile will not spot it):
build/qutecsound::
	[ -f bin/qutecsound ] || mv bin/qutecsound-d bin/qutecsound

clean::
	rm -f bin/qutecsound


So what did we just do? We extended the build and clean rules with extra steps. In CDBS, each package listed in debian/control gets a build/package rule (and several others at different stages of the build) so that one can add steps specific to each package (this will be much more useful when dpkg finally learns about build-arch and build-indep).
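The mechanism that makes this extension possible is GNU make's double-colon rules: every `::` rule for the same target contributes its own recipe, and make runs all of them. A toy illustration (the target name here is hypothetical, not an actual CDBS rule):

```make
# Double-colon rules accumulate: both recipes below run for 'build/foo'.
build/foo::
	@echo "recipe provided by the CDBS class"

build/foo::
	@echo "recipe added by the maintainer in debian/rules"
```

This is why debian/rules can append to build/qutecsound and clean without redefining (or clobbering) what CDBS already does for those targets.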

For the final touch, we want to do a few more things. First, we want to ensure that we are using Qt 4's qmake, because the build will fail if Qt 3's is used. Second, since I created a manpage for the command, I want to install it. Finally, we want to use parallel building when the user has requested it, to build the project faster. How do we do that? Again, via variables:
QMAKE = qmake-qt4
DEB_BUILD_PARALLEL = 1
DEB_INSTALL_MANPAGES_qutecsound = debian/qutecsound.1


CDBS creates several variables for each of the packages listed in debian/control, so we can customize each package build. In our case, this variable is passed to dh_installman from the debhelper rules file (CDBS will invoke the debhelper tools once for each binary package).

After all this we now have the complete debian/rules of a relatively simple package. What does it look like? Like this:
#!/usr/bin/make -f

# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1

include /usr/share/cdbs/1/rules/debhelper.mk
include /usr/share/cdbs/1/class/qmake.mk

QMAKE = qmake-qt4
DEB_QMAKE_CONFIG_VAL += build64
DEB_QMAKE_ARGS = qcs.pro
DEB_BUILD_PARALLEL = 1

DEB_INSTALL_MANPAGES_qutecsound = debian/qutecsound.1

build/qutecsound::
	[ -f bin/qutecsound ] || mv bin/qutecsound-d bin/qutecsound

clean::
	rm -f bin/qutecsound

Tags: cdbs, packaging

9 September 2010

Biella Coleman: Experiential Amnesia

So, conventional wisdom is that once you experience something first hand, it sits close to you, so that you can learn from it, think about it, and possibly invoke it. This rings true for me, except for one particular type of experience: finishing a large and complicated project, like writing a dissertation, taking PhD qualifying exams, or writing a very complicated review article. Once it is done and over with and a few months have gone by, I can no longer fathom, at all, how I even did it. It is as if I went through the experience and then amnesia set in, erasing and wiping out the fibers of events, emotions, and thoughts that went into making and finishing the project. I find this unnerving, for various reasons. First, this disjuncture is not one I experience with other experiences, even unpleasant ones, like pain (broken collarbone, burnt hand, terrible earache), for I am able, nearly always, to shore up some shadow of those experiences. More practically, when I am faced again with needing to conquer what feels unconquerable (which, unsurprisingly, is my current predicament), I have no experiential reserve to guide me. Instead, it is as if I am standing at a new shore for the first time. I tell myself, I have done it and I can do it, but there is no concomitant emotional register or memory that assures me that this is in fact true. Maybe a few more projects like these and there will be some imprint to steer me in the future, but somehow I suspect my experiential amnesia will remain the same.

10 July 2010

Colin Watson: debhelper statistics, redux

Apropos of my previous post, I see that dh has now overtaken CDBS as the most popular rules helper system of its kind in Debian unstable, and shows no particular sign of slowing its rate of uptake any time soon. The resolution of the graph is such that you can't see it yet, but dh drew dead level with CDBS on Thursday, and today 3836 packages are using dh as opposed to 3823 using CDBS.

2 March 2010

Colin Watson: debhelper statistics

I don't know if anyone else has been tracking this recently, but a while back I got curious about the relative proportions of dh(1) and CDBS in the archive, and started running some daily analysis on the Lintian lab. Apologies for my poor graphing abilities, but the graph is here (occasionally updated): Although dh is still a bit behind CDBS, the steady upward trend is quite striking - it looks set to break 20% soon, up from under 13% in September - compared with CDBS which has been sitting within half a percentage point of 25% the whole time. Incidentally, was that an ftpmaster trying to sign his name in the graph over Christmas or something? :-)
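A sketch of the kind of per-package classification such an analysis implies (the heuristics and function name here are my own illustration, not Colin's actual script): a debian/rules file that includes a CDBS snippet counts as CDBS, while one whose catch-all target simply delegates to dh counts as short-form dh.

```python
import re

def classify_rules(rules_text: str) -> str:
    """Crude heuristic: classify a debian/rules file as 'cdbs', 'dh', or 'other'."""
    # CDBS packages include makefiles from /usr/share/cdbs/.
    if re.search(r'^include\s+/usr/share/cdbs/', rules_text, re.MULTILINE):
        return "cdbs"
    # Short-form dh: a catch-all '%' target whose recipe invokes dh.
    if re.search(r'^%:\n\tdh\b', rules_text, re.MULTILINE):
        return "dh"
    return "other"
```

Running something like this over every source package's debian/rules in the Lintian lab and summing the counts per day would yield exactly the kind of trend line described above.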

6 September 2009

DebConf team: DebConf10 visa information available (Posted by Jimmy Kaplowitz)

Hello,

The DebConf10 local team would like to announce the availability of visa information at http://debconf10.debconf.org/visas.xhtml Full information is contained on that page, provided by our lawyer; however, some important points are indicated below.

- The United States depends on its tens of millions of visitors annually for its economy to function. Getting approved for a visa is not a rare exception, and it is even easier given our generous free help from an immigration lawyer.
- If you are from a Visa Waiver Program country (see visa page), fill out the ESTA web form to apply for your travel authorization now. You don't need any information about the conference itself or your means of travel.
- If you will need to apply for a visa, check the visa information page for information on what to do. Carefully check the wait times for your country's embassy. For most countries there is no *immediate* urgency, but plan to get an appointment well in advance of May 2010.
- Make sure you will have a passport that will expire in February 2011 or later (6 months after the latest possible DebConf date). If not, apply for a new passport.

Special note to Venezuelans: Since the wait time for a visa appointment in Caracas is so long, we have been paying special attention to its visa application process. We have reports that the dates for visa appointments are moving quickly, getting later and later. If you are applying for a visa in Caracas, you need to make an appointment immediately. You just need to make the appointment now; supporting materials can be assembled later. Also consider applying for a visa at a different US embassy, such as the one in Quito, Ecuador, which has a significantly shorter wait time. Consult with our lawyer for advice on the advantages and disadvantages of doing this.

The local team hopes that everyone interested can meet us in New York City and have a great DebConf10 experience!
Feel free to email us (publicly archived list) or ask in #debconf-team or #debconf-nyc on OFTC with your questions or ideas. - The DebConf10 Local Team
