Search Results: "ranty"

29 June 2022

Russell Coker: Philips 438P1 43" 4K Monitor

I have just returned a Philips 438P1 43" 4K monitor [1] and gone back to my Samsung 28" 4K monitor, model LU28E590DS/XY AKA UE590. The main listed differences are the size and the fact that the Samsung is TN while the Philips is IPS. Here's a comparison of TN and IPS technologies [2]. Generally I think that TN is probably best for a monitor, but in theory IPS shouldn't be far behind. The Philips monitor has a screen with a shiny surface, which may be good for a TV but isn't good for a monitor. Also it seemed to blur the pixels a bit, which again is probably OK for a TV that is trying to emulate curved images but not good for a monitor where it's all artificial straight lines. The most important thing for me in a monitor is how well it displays text in small fonts; for that I don't really want the round parts of the letters to look genuinely round, as a clear octagon or rectangle is better than a fuzzy circle. There is some controversy about the ideal size for monitors. Some people think that nothing larger than 28" is needed and some people think that a 43" is totally usable. After testing I determined that 43" is really too big, I had to move to see it all. Also for my use it's convenient to be able to turn a monitor slightly to allow someone else to get a good view, and a 43" monitor is too large to move much (maybe future technology for lighter monitors will change this). Previously I had been unable to get my Samsung monitor to work at 4K resolution with 60Hz and had believed it was due to cheap video cards. I got the Philips monitor to work with HDMI, so it's apparent that the Samsung monitor doesn't do 4K@60Hz on HDMI. This isn't a real problem as the Samsung monitor doesn't have built-in speakers. The Philips monitor has built-in speakers for HDMI sound, which means one less cable to my PC and no desk space taken by speakers. I bought the Philips monitor on eBay in opened unused condition. Inside the box was a sheet with a printout stating that the monitor blanks the screen periodically, so the seller knew that it wasn't in unused condition; it was tested and failed the test. If the Philips monitor had been as minimally broken as described then I might have kept it. However it seems that certain patterns of input caused it to reboot. For example I could be watching Netflix and have it drop out, I would press the left arrow to watch that bit again, and have it drop out again. On one occasion I did a test and found that a 5 second section of Netflix content caused the monitor to reboot 6 of the 8 times I viewed it. The workaround I discovered was to switch between maximised window and full-screen mode when it had a dropout. So I just press left-arrow and then F and I can keep watching. That's not what I expect from a $700 monitor! I considered checking for Philips firmware updates but decided against it because I didn't want to risk voiding the warranty if it didn't work correctly, and I decided I just didn't like the monitor that much. Ideally for my next monitor I'll get a 4K screen of about 35", TN, and a screen that's not shiny. At the moment there don't seem to be many monitors between 32" and 43" in size, so 32" may do. I am quite happy with the Samsung monitor so getting the same but slightly larger is fine. It's a pity they stopped making 5K displays.

16 January 2022

Russell Coker: SSD Endurance

I previously wrote about the issue of swap potentially breaking SSD [1]. My conclusion was that swap wouldn't be a problem, as none of the normally operating systems that I run had swap using any significant fraction of total disk writes. In that post the most writes I could see was 128GB written per day on a 120G Intel SSD (writing the entire device once a day). My post about swap and SSD was based on the assumption that you could get many thousands of writes to the entire device, which was incorrect. Here's a background on the terminology from WD [2]. So in the case of the 120G Intel SSD I was doing over 1 DWPD (Drive Writes Per Day), which is in the middle of the range of SSD capability; Intel doesn't specify the DWPD or TBW (Tera Bytes Written) for that device. The most expensive and high end NVMe device sold by my local computer store is the Samsung 980 Pro, which has a warranty of 150TBW for the 250G device and 600TBW for the 1TB device [3]. That means that the system which used to have an Intel SSD would have exceeded the warranty in 3 years if it had a 250G device. My current workstation has been up for just over 7 days and has averaged 110GB written per day. It has some light VM use and the occasional kernel compile, a fairly typical developer workstation. Its storage is 2*Crucial 1TB NVMe devices in a BTRFS RAID-1; the NVMe devices are the old series of Crucial ones and are rated for 200TBW, which means that they can be expected to last for 5 years under the current load. This isn't a real problem for me as the performance of those devices is lower than I hoped for, so I will buy faster ones before they are 5 years old anyway. My home server (and my wife's workstation) is averaging 325GB per day on the SSDs used for the RAID-1 BTRFS filesystem for root and for most data that is written much (including VMs). The SSDs are 500G Samsung 850 EVOs [4] which are rated at 150TBW, which means just over a year of expected lifetime. The SSDs are much more than a year old; I think Samsung stopped selling them more than a year ago. Between the 2 SSDs SMART reports 18 uncorrectable errors and btrfs device stats reports 55 errors on one of them. I'm not about to immediately replace them, but it appears that they are well past their prime. The server which runs my blog (among many other things) is averaging over 1TB written per day. It currently has a RAID-1 of hard drives for all storage but its previous incarnation (which probably had about the same amount of writes) had a RAID-1 of enterprise SSDs for the most written data. After a few years of running like that (and some time running with someone else's load before it) the SSDs became extremely slow (sustained writes of 15MB/s) and started getting errors. So that's a pair of SSDs that were burned out. Conclusion: The amounts of data being written are steadily increasing. Recent machines with more RAM can decrease storage usage in some situations, but that doesn't compare to the increased use of checksummed and logged filesystems, VMs, databases for local storage, and other things that multiply writes. The amount of writes allowed under warranty isn't increasing much and there are new technologies for larger SSD storage that decrease the DWPD rating of the underlying hardware. For the systems I own it seems that they are all going to exceed the rated TBW for the SSDs before I have other reasons to replace them, and they aren't particularly high usage systems. A mail server for a large number of users would hit it much earlier. RAID of SSDs is a really good thing.
Replacement of SSDs is something that should be planned for, and a way of swapping SSDs to less important uses is also good (my parents have some SSDs that are too small for my current use but which work well for them). Another thing to consider is that if you have a server with spare drive bays you could put some extra SSDs in to spread the wear among a larger RAID-10 array. Instead of having a 2*SSD BTRFS RAID-1 for a server you could have 6*SSD to get a 3* longer lifetime than a regular RAID-1 before the SSDs wear out (BTRFS supports this sort of thing); the arithmetic is sketched below. Based on these calculations and the small number of errors I've seen on my home server I'll add a 480G SSD I have lying around to the array to spread the load and keep it running for a while longer.
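To make these lifetime estimates concrete, here is a minimal Python sketch of the arithmetic used above. The figures are the ones quoted in this post; the helper names are illustrative only, and the "2 copies out of N devices" model for BTRFS RAID-1 wear spreading is my simplification.
# Rough SSD lifetime arithmetic: years until a drive's TBW rating is reached
# at a constant daily write rate, and the effect of spreading BTRFS RAID-1
# writes over more devices. Helper names are illustrative only.

def years_until_tbw(tbw_rating_tb, gb_written_per_day):
    """Years until the warranty TBW figure is reached at a constant write rate."""
    tb_per_day = gb_written_per_day / 1000.0
    return tbw_rating_tb / tb_per_day / 365.0

# Workstation: Crucial 1TB NVMe rated 200TBW, ~110GB/day -> about 5 years.
print(round(years_until_tbw(200, 110), 1))
# Home server: Samsung 850 EVO 500G rated 150TBW, ~325GB/day -> about 1.3 years.
print(round(years_until_tbw(150, 325), 1))

def per_device_gb_per_day(total_gb_per_day, devices, copies=2):
    """BTRFS RAID-1 writes each block to 'copies' of the 'devices' drives."""
    return total_gb_per_day * copies / devices

# Going from 2 to 6 devices cuts per-device writes to a third,
# i.e. roughly 3x the time before the TBW rating is reached.
print(per_device_gb_per_day(325, 2))  # 325.0 GB/day per device
print(per_device_gb_per_day(325, 6))  # ~108.3 GB/day per device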

30 November 2021

Russell Coker: Your Device Has Been Improved

I've just started a Samsung tablet downloading a 770MB update; the description says:
Technically I have no doubt that both those claims are true and accurate. But according to common understanding of the English language I think they are both misleading. By "stability improved" they mean "fixed some bugs that made it unstable", and no technical person would imagine that after a certain number of such updates the number of bugs will ever reach zero and the tablet will be perfectly reliable. In fact you should consider yourself lucky if they fix more bugs than they add. It's not THAT uncommon for phones and tablets to be bricked (rendered unusable by software) by an update. In the past I got a Huawei Mate9 as a warranty replacement for a Nexus 6P because an update caused so many Nexus 6P phones to fail that they couldn't be replaced with an identical phone [1]. By "security improved" they usually mean "fixed some security flaws that were recently discovered, to make it almost as secure as it was designed to be". Note that I deliberately say "almost as secure" because it's sometimes impossible to fix a security flaw without making significant changes to interfaces, which requires more work than desired for an old product and also gives a higher probability of things going wrong. So it's sometimes better to aim for "almost as secure", or alternatively "just as secure but with some features disabled". Device manufacturers (and most companies in the Android space make the same claims while having the exact same bugs to deal with, Samsung is no different from the others in this regard) are not making devices more secure or more reliable than when they were initially released. They are aiming to make them almost as secure and reliable as when they were released. They don't have much incentive to try too hard in this regard: Samsung won't suffer if I decide my old tablet isn't reliable enough and buy a new one, which will almost certainly be from Samsung because they make nice tablets. As a thought experiment, consider if car repairers did the same thing. "Getting us to service your car will improve fuel efficiency": great, how much more efficient will it be than when I purchased it? As another thought experiment, consider if car companies stopped providing parts for car repair a few years after releasing a new model. This is effectively what phone and tablet manufacturers have been doing all along; software updates for stability and security are to devices what changing oil etc is for cars.

22 April 2021

Russell Coker: HP ML350P Gen8

I'm playing with an HP ProLiant ML350P Gen8 server (part num 646676-011). For HP servers ML means tower (see the ProLiant Wikipedia page for more details [1]). For HP servers the generation indicates how old the server is; Gen8 was announced in 2012 and Gen10 seems to be the current generation. Debian Packages from HP
wget -O /usr/local/hpePublicKey2048_key1.pub https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub
echo "# HP RAID" >> /etc/apt/sources.list
echo "deb [signed-by=/usr/local/hpePublicKey2048_key1.pub] http://downloads.linux.hpe.com/SDR/downloads/MCP/Debian/ buster/current non-free" >> /etc/apt/sources.list
The above commands will set up the APT repository for Debian/Buster. See the HP Downloads FAQ [2] for more information about their repositories. hponcfg This package contains the hponcfg program that configures iLO (the HP remote management system) from Linux. One noteworthy command is hponcfg -r to reset the iLO, something you should do before selling an old system. ssacli This package contains the ssacli program to configure storage arrays; here are some examples of how to use it:
# list controllers and show slot numbers
ssacli controller all show
# list arrays on controller identified by slot and give array IDs
ssacli controller slot=0 array all show
# show details of one array
ssacli controller slot=0 array A show
# show all disks on one controller
ssacli controller slot=0 physicaldrive all show
# show config of a controller, this gives RAID level etc
ssacli controller slot=0 show config
# delete array B (you can immediately pull the disks from it)
ssacli controller slot=0 array B delete
# create an array type RAID0 with specified drives, do this with one drive per array for BTRFS/ZFS
ssacli controller slot=0 create type=arrayr0 drives=1I:1:1
When a disk is used in JBOD mode just under 33MB will be used at the end of the disk for the RAID metadata. If you have existing disks with a DOS partition table you can put them in an HP array as JBOD and they will work with all data intact (a GPT partition table is more complicated). When all disks are removed from the server the cooling fans run at high speed; this would be annoying if you wanted to have a diskless workstation or server using only external storage. ssaducli This package contains the ssaducli diagnostic utility for storage arrays. The SSD wear gauge report doesn't work for the 2 SSDs I tested it on, maybe it only supports SAS SSDs not SATA SSDs. It doesn't seem to do anything that I need. storcli This package contains both 32bit and 64bit versions of the MegaRAID utility and deletes whichever one doesn't match the installation in the package postinst, so it fails debsums checks etc. The MegaRAID utility is for a different type of RAID controller to the Smart Storage Array (AKA SSA) that the other utilities work with. As an aside, it seems that there are multiple types of MegaRAID controller; the management program from the storcli package doesn't work on a Dell server with MegaRAID. They should have made separate 32bit and 64bit versions of this package. Recommendations Here is the HP page for downloading firmware updates (including security updates) [3]; you have to log in first and have a warranty. This is legal but poor service. Dell servers have comparable prices (on the second-hand market) and comparable features but give free firmware updates to everyone. Dell have overall lower quality of Debian packages for supporting utilities, but a wider range of support, so generally Dell support seems better in every way. Dell and HP hardware seems of equal quality, so overall I think it's best to buy Dell. Suggestions for HP Finding which of the signing keys to use is unreasonably difficult. You should get some HP employees to sign the HP keys used for repositories with their personal keys and then go to LUG meetings and get their personal keys well connected to the web of trust. Then upload the HP keys to the public key repositories. You should also use the same keys for signing all versions of the repositories. Having different keys for the different versions of Debian wastes people's time. Please provide firmware for all users, even if they buy systems second hand. It is in your best interests to have systems used long-term and have them run securely. It is not in your best interests to have older HP servers perform badly. Having all the fans run at maximum speed when power is turned on is a standard server feature. Some servers can throttle the fan when the BIOS is running; it would be nice if HP servers did that. Having ridiculously loud fans until just before GRUB starts is annoying.

8 April 2021

Sean Whitton: consfigurator-live-build

One of my goals for Consfigurator is to make it capable of installing Debian to my laptop, so that I can stop booting to GRML and manually partitioning and debootstrapping a basic system, only to then turn to configuration management to set everything else up. My configuration management should be able to handle the partitioning and debootstrapping, too. The first stage was to make Consfigurator capable of debootstrapping a basic system, chrooting into it, and applying other arbitrary configuration, such as installing packages. That's been in place for some weeks now. It's sophisticated enough to avoid starting up newly installed services, but I still need to add some bind mounting. Another significant piece is teaching Consfigurator how to partition block devices. That's quite tricky to do in a sufficiently general way: I want to cleanly support various combinations of LUKS, LVM and regular partitions, including populating /etc/crypttab and /etc/fstab. I have some ideas about how to do it, but it'll probably take a few tries to get the abstractions right. Let's imagine that code is all in place, such that Consfigurator can be pointed at a block device and it will install a bootable Debian system to it. Then to install Debian to my laptop I'd just need to take my laptop's disk drive out and plug it into another system, and run Consfigurator on that system, as root, pointed at the block device representing my laptop's disk drive. For virtual machines, it would be easy to write code which loop-mounts an empty disk image, and then Consfigurator could be pointed at the loop-mounted block device, thereby making the disk image file bootable. This is adequate for virtual machines, or small single-board computers with tiny storage devices (not that I actually use any of those, but I want Consfigurator to be able to make disk images for them!). But it's not much good for my laptop. I casually referred to taking out my laptop's disk drive and connecting it to another computer, but this would void my laptop's warranty. And Consfigurator would not be able to update my laptop's NVRAM, as is needed on UEFI systems. What's wanted here is a live system which can run Consfigurator directly on the laptop, pointed at the block device representing its physical disk drive. Ideally this live system comes with a chroot with the root filesystem for the new Debian install already built, so that network access is not required, and all Consfigurator has to do is partition the drive and copy in the contents of the chroot. The live system could be set up to automatically start doing that upon boot, but another option is to just make Consfigurator itself available to be used interactively. The user boots the live system, starts up Emacs, starts up Lisp, and executes a Consfigurator deployment, supplying the block device representing the laptop's disk drive as an argument to the deployment. Consfigurator goes off and partitions that drive, copies in the contents of the chroot, and executes grub-install to make the laptop bootable. This is also much easier to debug than a live system which tries to start partitioning upon boot. It would look something like this:
    ;; melete.silentflame.com is a Consfigurator host object representing the
    ;; laptop, including information about the partitions it should have
    (deploy-these :local ...
      (chroot:partitioned-and-installed
        melete.silentflame.com "/srv/chroot/melete" "/dev/nvme0n1"))
Now, building live systems is a fair bit more involved than installing Debian to a disk drive and making it bootable, it turns out. While I want Consfigurator to be able to completely replace the Debian Installer, I decided that it is not worth trying to reimplement the relevant parts of the Debian Live tool suite, because I do not need to make arbitrary customisations to any live systems. I just need to have some packages installed and some files in place. Nevertheless, it is worth teaching Consfigurator how to invoke Debian Live, so that the customisation of the chroot which isn't just a matter of passing options to lb_config(1) can be done with Consfigurator. This is what I've ended up with in Consfigurator's source code:
(defpropspec image-built :lisp (config dir properties)
  "Build an image under DIR using live-build(7), where the resulting live
system has PROPERTIES, which should contain, at a minimum, a property from
CONSFIGURATOR.PROPERTY.OS setting the Debian suite and architecture.  CONFIG
is a list of arguments to pass to lb_config(1), not including the '-a' and
'-d' options, which Consfigurator will supply based on PROPERTIES.
This property runs the lb_config(1), lb_bootstrap(1), lb_chroot(1) and
lb_binary(1) commands to build or rebuild the image.  Rebuilding occurs only
when changes to CONFIG or PROPERTIES mean that the image is potentially
out-of-date; e.g. if you just add some new items to PROPERTIES then in most
cases only lb_chroot(1) and lb_binary(1) will be re-run.
Note that lb_chroot(1) and lb_binary(1) both run after applying PROPERTIES,
and might undo some of their effects.  For example, to configure
/etc/apt/sources.list, you will need to use CONFIG not PROPERTIES."
  (:desc (declare (ignore config properties))
         #?"Debian Live image built in $ dir ")
  (let* (...)
    ;; ...
     (eseqprops
      ;; ...
      (on-change
          (eseqprops
           (on-change
               (file:has-content ,auto/config ,(auto/config config) :mode #o755)
             (file:does-not-exist ,@clean)
             (%lbconfig ,dir)
             (%lbbootstrap t ,dir))
           (%lbbootstrap nil ,dir)
           (deploys ((:chroot :into ,chroot)) ,host))
        (%lbchroot ,dir)
        (%lbbinary ,dir)))))
Here, %lbconfig is a property running lb_config(1), %lbbootstrap one which runs lb_bootstrap(1), etc. Those properties all just change directory to the right place and run the command, essentially, with a little extra code to handle failed debootstraps and the like. The ON-CHANGE and ESEQPROPS combinators work together to sequence the interaction of the Debian Live suite and Consfigurator. This way, we only rebuild the chroot if the configuration changed, and we only rebuild the image if the chroot changed. Now over in my personal consfig:
(try-register-data-source
 :git-snapshot :name "consfig" :repo #P"src/cl/consfig/" ...)
(defproplist hybrid-live-iso-built :lisp ()
  "Build a Debian Live system in /srv/live/spw.
Typically this property is not applied in a DEFHOST form, but rather run as
needed at the REPL.  The reason for this is that otherwise the whole image will
get rebuilt each time a commit is made to my dotfiles repo or to my consfig."
  (:desc "Sean's Debian Live system image built")
  (live-build:image-built.
      '("--archive-areas" "main contrib non-free" ...)
      "/srv/live/spw"
    (os:debian-stable "buster" :amd64)
    (basic-props)
    (apt:installed "whatever" "you" "want")
    (git:snapshot-extracted "/etc/skel/src" "dotfiles")
    (file:is-copy-of "/etc/skel/.bashrc" "/etc/skel/src/dotfiles/.bashrc")
    (git:snapshot-extracted "/root/src/cl" "consfig")))
The first argument to LIVE-BUILD:IMAGE-BUILT. is additional arguments to lb_config(1). The third argument onwards are the properties for the live system. The cool thing is GIT:SNAPSHOT-EXTRACTED: the calls to this ensure that a copy of my Emacs configuration and my consfig end up in the live image, ready to be used interactively to install Debian, as described above. I'll need to add something like (chroot:host-chroot-bootstrapped melete.silentflame.com "/srv/chroot/melete") too. As with everything Consfigurator-related, Joey Hess's Propellor is the giant upon whose shoulders I'm standing.

13 March 2021

Louis-Philippe Véronneau: Preventing an OpenPGP Smartcard from caching the PIN eternally

While I'm overall very happy about my migration to an OpenPGP hardware token, the process wasn't entirely seamless and I had to hack around some issues, for example the PIN caching behavior in GnuPG. As described in this bug, the cache-ttl parameter in GnuPG is not implemented and thus does nothing. This means once you type in your PIN, it is cached for as long as the token is plugged. Security-wise, this is not great. Instead of manually disconnecting the token frequently, I've come up with a script that restarts scdaemon if the token hasn't been used during the last X minutes. It seems to work well and I call it using this cron entry: */5 * * * * my_user /usr/local/bin/restart-scdaemon To get a log from scdaemon, you'll need a ~/.gnupg/scdaemon.conf file that looks like this:
debug-level basic
log-file /var/log/scdaemon.log
Hopefully it can be useful to others!
#!/usr/bin/python3
# Copyright 2021, Louis-Philippe Véronneau <pollo@debian.org>
#
# This script is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
# 
# This script is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
# details.
# 
# You should have received a copy of the GNU General Public License along with
# this script. If not, see <http://www.gnu.org/licenses/>.
"""
This script restarts scdaemon after X minutes of inactivity to reset the PIN
cache. It is meant to be run by cron every X/2 minutes.
This is needed because there is currently no way to set a cache time for
smartcards. See https://dev.gnupg.org/T3362#137811 for more details.
"""
import os
import sys
import subprocess
from datetime import datetime, timedelta
from argparse import ArgumentParser
p = ArgumentParser(description=__doc__)
p.add_argument('-l', '--log', default="/var/log/scdaemon.log",
               help='Path to the scdaemon log file.')
p.add_argument('-t', '--timeout', type=int, default=10,
               help="Desired cache time in minutes.")
args = p.parse_args()
def get_last_line(scdaemon_log):
    """Returns the last line of the scdameon log file."""
    with open(scdaemon_log, 'rb') as f:
        f.seek(-2, os.SEEK_END)
        while f.read(1) != b'\n':
            f.seek(-2, os.SEEK_CUR)
        last_line = f.readline().decode()
    return last_line
def check_time(last_line, timeout):
    """Returns True if scdaemon hasn't been called since the defined timeout."""
    # We don't need to restart scdaemon if no gpg command has been run since
    # the last time it was restarted.
    should_restart = True
    if "OK closing connection" in last_line:
        should_restart = False
    else:
        last_time = datetime.strptime(last_line[:19], '%Y-%m-%d %H:%M:%S')
        now = datetime.now()
        delta = now - last_time
        if delta <= timedelta(minutes = timeout):
            should_restart = False
    return should_restart
def restart_scdaemon(scdaemon_log):
    """Restart scdaemon and verify the restart process was successful."""
    subprocess.run(['gpgconf', '--reload', 'scdaemon'], check=True)
    last_line = get_last_line(scdaemon_log)
    if "OK closing connection" not in last_line:
        sys.exit("Restarting scdameon has failed.")
def main():
    """Main function."""
    last_line = get_last_line(args.log)
    should_restart = check_time(last_line, args.timeout)
    if should_restart:
        restart_scdaemon(args.log)
if __name__ == "__main__":
    main()

4 December 2020

Dirk Eddelbuettel: #31: Test your R package against bleeding-edge gcc

Welcome to the 31st post in the rapturously rampant R recommendations series, or R4 for short. This post will once again feature Docker for use with R. Earlier this week, I received a note from CRAN about how my RcppTOML package was no longer building with the (as of right now of course unreleased) version 11 of the GNU C++ compiler, i.e. g++-11. And very kindly even included a hint about the likely fix (which was of course correct). CRAN, and one of its maintainers in particular, is extremely forward-looking in terms of toolchain changes. A year ago we were asked to update possible use of global variables in C code as gcc-10 tightened the rules. This change is a C++ one, and a fairly simple one of being more explicit with include headers. Previous g++ releases had done the same. The question now was about the least painful way to get g++-11 onto my machine, with the least amount of side-effects. Regular readers of this blog will know where this is headed, but even use of Docker requires binaries. A look at g++-11 within packages.debian.org comes up empty. No Debian means no Ubuntu. But there is a PPA for Ubuntu with toolchain builds we have used before. And voilà, there we have it: within the PPA for the Ubuntu Toolchain repository is the volatile packages PPA with both g++-10 and g++-11. Here Ubuntu 20.10 works with g++-10, but g++-11 requires Ubuntu 21.04. Docker containers are there for either. So with the preliminaries sorted out, the key steps are fairly straightforward: And that is it! RcppTOML is fairly minimal and could be a member of the tinyverse, so no other dependencies are needed; if your package has any, you could just use the standard steps to install them from source or binary (including using RSPM or bspm). You can see the resulting Dockerfile which contains a minimal amount of extra stuff to deal with some environment variables and related settings. Nothing critical, but it smooths the experience somewhat. This container is now built (under label rocker/r-edge with tags latest and gcc-11), and you can download it from Docker Hub. With that, the proof of the (now fixed and uploaded) package building becomes as easy as
edd@rob:~/git/rcpptoml(master)$ docker run --rm -ti -v $PWD:/mnt -w /mnt rocker/r-edge:gcc-11 g++ --version
g++ (Ubuntu 11-20201128-0ubuntu2) 11.0.0 20201128 (experimental) [master revision fb6b29c85c4:a331ca6194a:e87559d202d90e614315203f38f9aa2f5881d36e]
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

edd@rob:~/git/rcpptoml(master)$ 
edd@rob:~/git/rcpptoml(master)$ docker run --rm -ti -v $PWD:/mnt -w /mnt rocker/r-edge:gcc-11 R CMD INSTALL RcppTOML_0.1.7.tar.gz
* installing to library '/usr/local/lib/R/site-library'
* installing *source* package 'RcppTOML' ...
** using staged installation
** libs
g++ -std=gnu++11 -I"/usr/share/R/include" -DNDEBUG -I../inst/include/ -DCPPTOML_USE_MAP -I'/usr/lib/R/site-library/Rcpp/include'    -fpic  -g -O2 -fdebug-prefix-map=/build/r-base-Fuvi9C/r-base-4.0.3=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -g  -c RcppExports.cpp -o RcppExports.o
g++ -std=gnu++11 -I"/usr/share/R/include" -DNDEBUG -I../inst/include/ -DCPPTOML_USE_MAP -I'/usr/lib/R/site-library/Rcpp/include'    -fpic  -g -O2 -fdebug-prefix-map=/build/r-base-Fuvi9C/r-base-4.0.3=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -g  -c parse.cpp -o parse.o
g++ -std=gnu++11 -shared -L/usr/lib/R/lib -Wl,-Bsymbolic-functions -Wl,-z,relro -o RcppTOML.so RcppExports.o parse.o -L/usr/lib/R/lib -lR
installing to /usr/local/lib/R/site-library/00LOCK-RcppTOML/00new/RcppTOML/libs
** R
** inst
** byte-compile and prepare package for lazy loading
** help
*** installing help indices
** building package indices
** testing if installed package can be loaded from temporary location
** checking absolute paths in shared objects and dynamic libraries
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
* DONE (RcppTOML)
edd@rob:~/git/rcpptoml(master)$ 
I hope both the availability of such a base container with gcc-11 (and g++-11 and gfortran-11) as well as a recipe for building similar containers with newer clang versions will help other developers. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

9 August 2020

Charles Plessy: Thank you, VAIO

Every day I use a VAIO Pro mk2 that I bought 5 years ago with 3 years of warranty. For a few months I had been noticing that something was slowly inflating inside. In July, things accelerated to the point that its thickness had doubled. After we called VAIO's customer service, somebody came to pick up the laptop in order to make a cost estimate. Then we learned on the phone that it would be free. It was back in my hands in less than two weeks. Bravo, VAIO!

13 July 2020

Antoine Beaupré: Not recommending Purism

This is just a quick note to mention that I have updated my hardware documentation on the Librem 13v4 laptop. It has unfortunately turned into a rather lengthy (and ranty) piece about Purism. Let's just say that waiting weeks for your replacement laptop (yes, it died again) does wonders for creativity. To quote the full review:
TL;DR: I recommend people avoid the Purism brand and products. I find they have questionable politics, operate in a "libre-washing" fashion, and produce unreliable hardware. Will not buy again.
People who have read the article might want to jump directly to the new sections: I have also added the minor section of the missing mic jack. I realize that some folks (particularly at Debian) might still work at Purism, and that this article might be demoralizing for their work. If that is the case, I am sorry this article triggered you in any way and I hope this can act as a disclaimer. But I feel it is my duty to document the issues I am going through, as a user, and to call bullshit when I see it (let's face it, the anti-interdiction stuff and the Purism 5 crowd-funding campaign were total bullshit). I also understand that the pandemic makes life hard for everyone, and probably makes a bad situation at Purism worse. But those problems existed before the pandemic happened. They were issues I had identified in 2019 and that I simply never got around to document. I wish that people wishing to support the free software movement would spend their energy towards organisations that actually do honest work in that direction, like System76 and Pine64. And if you're going to go crazy with an experimental free hardware design, why not go retro with the MNT Reform. In the meantime, if you're looking for a phone, I recommend you give the Fairphone a fair chance. It really is a "fair" (as in, not the best, but okay) phone that you can moderately liberate, and it actually frigging works. See also my hardware review of the FP2. Update: this kind of blew up, for my standards: 10k visitors in ~24h while I usually get about 1k visitors after a week on any regular blog post. There were more discussions on the subject here: Trigger warning: some of those threads include personal insults and explicitly venture into the free speech discussion, with predictable (sad) consequences...

9 July 2020

Enrico Zini: Laptop migration

This laptop used to be extra-flat. My laptop battery started to explode in slow motion. HP requires 10 business days to repair my laptop under warranty, and I cannot afford that length of downtime. Alternatively, HP quoted me 375 + VAT for on-site repairs, which I thought was very funny. For 376.55 + VAT, which is pretty much exactly the same amount, I bought instead a refurbished ThinkPad X240 with a dual-core i5, 8G of RAM, a 250G SSD, and a 1920x1080 IPS display, to use as a spare while my laptop is being repaired. I'd like to thank HP for giving me the opportunity to own a ThinkPad. Since I'm migrating my whole system to the spare and then (hopefully) back, I'm documenting what I need to be fully productive on new hardware. Install Debian A basic Debian netinst with no tasks selected is good enough to get going. Note that if wifi worked in the Debian Installer, it doesn't mean that it will work in the minimal system it installed. See here for instructions on quickly bringing up wifi on a newly installed minimal system. Copy /home A simple tar of /home is all I needed to copy my data over. A neat way to do it was connecting the two laptops with an ethernet cable, and using netcat:
# On the source
tar -C / -zcf - home | nc -l -p 12345 -N
# On the target
nc 10.0.0.1 12345 | tar -C / -zxf -
Since the data travels unencrypted in this way, don't do it over wifi. Install packages I maintain a few simple local metapackages that depend on the packages I usually use. I could just install those and let apt bring in their dependencies. For the build dependencies of the programs I develop, I use mk-build-deps from the devscripts package to create metapackages that make sure they are installed. Here's an extract from debian/control of the metapackage:
Source: enrico
Section: admin
Priority: optional
Maintainer: Enrico Zini <enrico@debian.org>
Build-Depends: debhelper (>= 11)
Standards-Version: 3.7.2.1
Package: enrico
Section: admin
Architecture: all
Depends:
  mc, mmv, moreutils, powertop, syncmaildir, notmuch,
  ncdu, vcsh, ddate, jq, git-annex, eatmydata,
  vdirsyncer, khal, etckeeper, moc, pwgen
Description: Enrico's working environment
Package: enrico-devel
Section: devel
Architecture: all
Depends:
  git, python3-git, git-svn, gitk, ansible, fabric,
  valgrind, kcachegrind, zeal, meld, d-feet, flake8, mypy, ipython3,
  strace, ltrace
Description: Enrico's development environment
Package: enrico-gui
Section: x11
Architecture: all
Depends:
  xclip, gnome-terminal, qalculate-gtk, liferea, gajim,
  mumble, sm, syncthing, virt-manager
Recommends: k3b
Description: Enrico's GUI environment
Package: enrico-sanity
Section: admin
Architecture: all
Conflicts: libapache2-mod-php, libapache2-mod-php5, php5, php5-cgi, php5-fpm, libapache2-mod-php7.0, php7.0, libphp7.0-embed, libphp-embed, libphp5-embed
Description: Enrico's sanity
 Metapackage with a list of packages that I do not want anywhere near my
 system.
System-wide customizations I tend to avoid changing system-wide configuration as much as possible, so copying over /home and installing packages takes care of 99% of my needs. There are a few system-wide tweaks I cannot do without: For postfix, I have a little ansible playbook that takes care of it. Network Manager system connections need to be copied manually: a plain copy and a systemctl restart network-manager are enough. Note that Network Manager will ignore the files unless their owner and permissions are what it expects. Fine tuning Comparing the output of dpkg --get-selections between the old and the new system might highlight packages manually installed in a hurry and not added to the metapackages; a sketch of such a comparison follows below. Finally, what remains is fixing the sad state of mimetype associations, which seem to associate file opening with whatever application was installed last, the phases of the moon, and whichever option is the most annoying. Currently on my system, PDFs are opened in inkscape by xdg-open and in calibre by run-mailcap. Let's see how long it takes to figure this one out.
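For the selections comparison mentioned above, here is a minimal Python sketch; the file names and the helper are hypothetical, assuming each dump was saved with dpkg --get-selections > FILE on the respective machine:
#!/usr/bin/python3
# List packages selected on the new system but not on the old one,
# given two `dpkg --get-selections` dumps. File names are examples only.
import sys

def selected_packages(path):
    """Return the set of package names marked 'install' in a selections dump."""
    pkgs = set()
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) == 2 and fields[1] == "install":
                # Drop any architecture qualifier such as ":amd64".
                pkgs.add(fields[0].split(":")[0])
    return pkgs

old = selected_packages(sys.argv[1])  # e.g. selections-old.txt
new = selected_packages(sys.argv[2])  # e.g. selections-new.txt
for pkg in sorted(new - old):
    print(pkg)
Anything this prints that is not pulled in by one of the metapackages is a candidate to be added to them.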

10 May 2020

Russell Coker: IT Asset Management

In my last full-time position I managed the asset tracking database for my employer. It was one of those things that someone needed to do, and it seemed that the only way that "someone" wouldn't equate to "no-one" was for me to do it, which was OK. We used Snipe IT [1] to track the assets. I don't have enough experience with asset tracking to say that Snipe is better or worse than average, but it basically did the job. Asset serial numbers are stored, you can have asset types that allow you to just add one more of the particular item, purchase dates are stored which makes warranty tracking easier, and every asset is associated with a person or listed as available. While I can't say that Snipe IT is better than other products I can say that it will do the job reasonably well. One problem that I didn't discover until way too late was the fact that the finance people weren't tracking serial numbers and that some assets in the database had the same asset IDs as the finance department and some had different ones. The best advice I can give to anyone who gets involved with asset tracking is to immediately chat to finance about how they track things; you need to know if the same asset IDs are used and if serial numbers are tracked by finance. I was pleased to discover that my colleagues were all honourable people, as there was no apparent evaporation of valuable assets even though there was little ability to discover who might have been the last person to use some of the assets. One problem that I've seen at many places is treating small items like keyboards and mice as "assets". I think that anything that is worth less than 1 hour's pay at the minimum wage (the price of a typical PC keyboard or mouse) isn't worth tracking; treat it as a disposable item. If you hire a programmer who requests an unusually expensive keyboard or mouse (as some do) it still won't be a lot of money when compared to their salary. Some of the older keyboards and mice that companies have are nasty; months of people eating lunch over them leaves them greasy and sticky. I think that the best thing to do with the keyboards and mice is to give them away when people leave, and when new people join the company buy new hardware for them. If a company can't spend $25 on a new keyboard and mouse for each new employee then they either have a massive problem of staff turnover or a lack of priority on morale.

Norbert Preining: Updating Dovecot for Debian

A tweet of a friend pointed me at the removal of dovecot from Debian/testing, which surprised me a bit. Investigating the situation it seems that Dovecot in Debian is lagging a bit behind in releases, and hasn't seen responses to some RC bugs. This sounds critical to me as dovecot is a core part of many mail setups, so I prepared updated packages.
Based on the latest released version of Dovecot, 2.3.10, I have made a package starting from the current Debian packaging and adjusted to the newer upstream. The package builds on Debian Buster (10), Testing, and Unstable on i386 and x64 archs. The packages are available on OBS, as usual: For Unstable:
deb https://download.opensuse.org/repositories/home:/npreining:/debian-dovecot/Debian_Unstable/ ./
For Testing:
deb https://download.opensuse.org/repositories/home:/npreining:/debian-dovecot/Debian_Testing/ ./
For Debian 10 Buster:
deb https://download.opensuse.org/repositories/home:/npreining:/debian-dovecot/Debian_10/ ./
To make these repositories work, don't forget that you need to import my OBS gpg key: obs-npreining.asc, best to download it and put the file into /etc/apt/trusted.gpg.d/obs-npreining.asc. These packages are provided without any warranty. Enjoy.

24 March 2020

Norbert Preining: KDE/Plasma 5.18 for Debian

Update 2020-04-03: Please see this post for the updated location of the packages!!! Now for i586 and amd64 architectures! I have been trying out the Plasma Desktop for one week now, and I am very positively surprised. Compared to the clumsy history of KDE3, the current desktop is surprisingly small-footprint and smooth. Integration is, as expected, great, and mixing programs from the other world (Gtk/Gnome) also works extremely smoothly.
If there are a few things I would change, it is mostly the chaos about kwallet and Gnome keyring. I would love to have one secret storage, and it seems that Gnome Keyring is preferable, but this is in flux at the moment. Also, it is not that pressing, because I have moved all my passwords into pass and thus don't need the secret storage that much anymore. So, after a bit of working with Plasma, I realized that Debian still ships an old version, the most recent being 5.18.3 LTS. Thus, I embarked onto a journey of updating all the necessary packages, and there are a lot: in total I updated 106 packages (and added one new one!) until I finally had a new plasma-desktop package available. If you are interested, there are binaries for amd64 and sources in my Debian repository (WARNING: These packages are for Debian/sid and maybe testing, and cannot be used with Buster!):
deb https://www.preining.info/debian unstable kde
deb-src https://www.preining.info/debian unstable kde
As usual, don't forget to import my GPG key, and all packages are without warranty :-; There are two packages that I didn't manage to update: kde-gtk-config, which has changed a lot and contains far fewer files, and breeze-icons, which fails its own symlink tests. If anyone has an idea, please let me know. If other packages are missing, please also drop me a line and I'll try to update them. Enjoy.

18 October 2017

Norbert Preining: Kobo firmware 4.6.9995 mega update (KSM, nickel patch, ssh, fonts)

It has been ages since I last updated the MegaUpdate package for Kobo. Now that a new and seemingly rather bug-free and quick firmware release (4.6.9995) has been released, I finally took the time to update the whole package to the latest releases of all the included items. The update includes all my favorite patches and features: Kobo Start Menu, koreader, coolreader, pbchess, ssh access, custom dictionaries, and some side-loaded fonts. So what are all these items: Install procedure Download Mark6 Kobo GloHD firmware: Kobo 4.6.9995 for GloHD Mega update: Kobo-4.6.9995-combined/Mark6/KoboRoot.tgz Mark5 Aura firmware: Kobo 4.6.9995 for Aura Mega update: Kobo-4.6.9995-combined/Mark5/KoboRoot.tgz Mark4 Kobo Glo, Aura HD firmware: Kobo 4.6.9995 for Glo and AuraHD Mega update: Kobo-4.6.9995-combined/Mark4/KoboRoot.tgz Latest firmware Warning: Sideloading or crossloading the incorrect firmware can break/brick your device. The link below is for Kobo GloHD ONLY. The first step is to update the Kobo to the latest firmware. This can easily be done by just getting the latest firmware from the links above and unpacking the zip file into the .kobo directory on your device. Eject and enjoy the updating procedure. Mega update Get the combined KoboRoot.tgz for your device from the links above and put it into the .kobo directory, then eject and enjoy the updating procedure again. After this the device should reboot and you will be kicked into KSM, from where, after some time of waiting, Nickel will be started. If you consider the fonts too small, select Configure, then General, then add item, then select kobomenuFontsize=55 and save. Remarks on some of the included items The full list of included things is above; here are only some notes about specific things I have done. WARNINGS If this is the first time you install this patch, you need to fix the password for root and disable telnet. This is an important step; here are the steps you have to take (taken from this old post):
  1. Turn on Wifi on the Kobo and find IP address
    Go to Settings, then Connect, and after this is done, go to Settings, then Device Information, where you will see something like
    IP Address: 192.168.1.NN

    (numbers change!)
  2. telnet into your device
    telnet 192.168.1.NN
    it will ask you the user name; enter "root" (without the quotes) and no password
  3. (ON THE GLO) change home directory of root
    edit /etc/passwd with vi and change the entry for root by changing the 6th field from "/" to "/root" (without the quotes). After this procedure the line should look like
    root::0:0:root:/root:/bin/sh
    don't forget to save the file
  4. (ON THE GLO) create ssh keys for dropbear
    [root@(none) ~]# mkdir /etc/dropbear
    [root@(none) ~]# cd /etc/dropbear
    [root@(none) ~]# dropbearkey -t dss -f dropbear_dss_host_key
    [root@(none) ~]# dropbearkey -t rsa -f dropbear_rsa_host_key
  5. (ON YOUR PERSONAL COMPUTER) check that you can log in with ssh
    ssh root@192.168.1.NN
    You should get dropped into your device again
  6. (ON THE GLO) log out of the telnet session (the first one you did)
    [root@(none) ~]# exit
  7. (ON THE GLO) in your ssh session, change the password of root
    [root@(none) ~]# passwd
    you will have to enter the new password two times. Remember it well, you will not be easily able to recover it without opening your device.
  8. (ON THE GLO) disable telnet login
    edit the file /etc/inetd.conf.local on the GLO (using vi) and remove the telnet line (the line starting with 23).
  9. restart your device
The combined KoboRoot.tgz is provided without warranty. If you need to reset your device, don't blame me!

10 September 2017

Sylvain Beucler: dot-zed archive file format

TL;DR: I reverse-engineered the .zed encrypted archive format.
Following a clean-room design, I'm providing a description that can be implemented by a third-party.
Interested? :) (reference version at: https://www.beuc.net/zed/) .zed archive file format Introduction Archives with the .zed extension are conceptually similar to an encrypted .zip file. In addition to a specific format, .zed files support multiple users: files are encrypted using the archive master key, which itself is encrypted for each user and/or authentication method (password, RSA key through certificate or PKCS#11 token). Metadata such as filenames is partially encrypted. .zed archives are used as stand-alone or attached to e-mails with the help of a MS Outlook plugin. A variant, which is not covered here, can encrypt/decrypt MS Windows folders on the fly like ecryptfs. In the spirit of academic and independent research this document provides a description of the file format and encryption algorithms for this encrypted file archive. See the conventions section for conventions and acronyms used in this document. Structure overview The .zed file format is composed of several layers. Or as a diagram:
+----------------------------------------------------------------------------------------------------+
  .zed archive (MS-CBF)                                                                               
                                                                                                      
   stream #1                         stream #2                       stream #3...                     
  +------------------------------+  +---------------------------+  +---------------------------+      
    metadata (MS-OLEPS)               encryption (AES)               encryption (AES)                 
                                      512-bytes chunks               512-bytes chunks                 
    +--------------------------+                                                                      
      obfuscation (static key)        +-----------------------+      +-----------------------+        
      +----------------------+       -  compression (zlib)     -    -  compression (zlib)     -       
       _ctlfile (TLV)                                                                            ...  
      +----------------------+          +---------------+              +---------------+               
    +--------------------------+          file contents                  file contents                
                                                                                                      
    +--------------------------+     -  +---------------+      -    -  +---------------+      -       
      _catalog (TLV)                                                                                  
    +--------------------------+      +-----------------------+      +-----------------------+        
  +------------------------------+  +---------------------------+  +---------------------------+      
+----------------------------------------------------------------------------------------------------+
Encryption schemes Several AES key sizes are supported, such as 128 and 256 bits. The Cipher Block Chaining (CBC) block cipher mode of operation is used to decrypt multiple AES 16-byte blocks, which means an initialisation vector (IV) is stored in clear along with the ciphertext. All filenames and file contents are encrypted using the same encryption mode, key and IV (e.g. if you remove and re-add a file in the archive, the resulting stream will be identical). No cleartext padding is used during encryption; instead, several end-of-stream handlers are available, so the ciphertext has exactly the size of the cleartext (e.g. the size of the compressed file). The following variants were identified in the 'encryption_mode' field. STREAM This is the end-of-stream handler for: This end-of-stream handler is apparently specific to the .zed format, and applied when the cleartext does not end on a 16-byte boundary; in this case special processing is performed on the last partial 16-byte block. The encryption and decryption phases are identical: let's assume the last partial block of cleartext (for encryption) or ciphertext (for decryption) was appended after all the complete 16-byte blocks of ciphertext: In either case, if the full ciphertext is less than one AES block (< 16 bytes), then the IV is used instead of the second-to-last block. CTS CTS or CipherText Stealing is the end-of-stream handler for: It matches the CBC-CS3 variant as described in Recommendation for Block Cipher Modes of Operation: Three Variants of Ciphertext Stealing for CBC Mode. Empty cleartext Since empty filenames or metadata are invalid, and since all files are compressed (resulting in a minimum 8-byte zlib cleartext), no empty cleartext was encrypted in the archive. metadata stream It is named 05356861616161716149656b7a6565636e576a33317a7868304e63 (hexadecimal), i.e. the character with code 5 followed by '5haaaaqaIekzeecnWj31zxh0Nc' (ASCII). The format used is OLE Property Set (MS-OLEPS). It introduces 2 property names "_ctlfile" (index 3) and "_catalog" (index 4), and 2 instances of said properties each containing an application-specific VT_BLOB (type 0x0041). _ctlfile: obfuscated global properties and access list This subpart is stored under index 3 ("_ctlfile") of the MS-OLEPS metadata. It consists of: The ciphertext is encrypted with AES-CBC "STREAM" mode using the 128-bit static key 37F13CF81C780AF26B6A52654F794AEF (hexadecimal) and the prepended IV so as to obfuscate the access list. The ciphertext is continuous and not split in chunks (unlike files), even when it is larger than 512 bytes. The decrypted text contains properties in a TLV format as described in _ctlfile TLV: Archives may include "mandatory" users that cannot be removed. They are typically used to add an enterprise-wide recovery RSA key to all archives. Extreme care must be taken to protect these keys, as they can decrypt all past archives generated from within that company. _catalog: file list This subpart is stored under index 4 ("_catalog") of the MS-OLEPS metadata. It contains a series of 'fileprops' TLV structures, one for each file or directory. The file hierarchy can be reconstructed by checking the 'parent_id' field of each file entry. If 'parent_id' is 0 then the file is located at the top-level of the hierarchy, otherwise it's located under the directory with the matching 'file_id'. TLV format This format is a series of fields: Value semantics depend on its Type.
It may contain an uint32be integer, a UTF-16LE string, a character sequence, or an inner TLV structure. Unless otherwise noted, TLV structures appear once. Some fields are optional and may not be present at all (e.g. 'archive_createdwith'). Some fields are unique within a structure (e.g. 'files_iv'), others may be repeated within a structure to form a list (e.g. 'fileprops' and 'passworduser'). The following top-level types have been identified, and are detailed in the next sections: Some additional unidentified types may be present. _ctlfile TLV _catalog TLV Decrypting the archive AES key rsauser The user accessing the archive will be authenticated by comparing his/her X509 certificate with the one stored in the 'certificate' field using DER format. The 'files_key_ciphertext' field is then decrypted using the PKCS#1 v1.5 encryption mechanism, with the private key that matches the user certificate. passworduser An intermediary user key, a user IV and an integrity checksum will be derived from the user password, using the deprecated PKCS#12 method as described in rfc7292 appendix B. Note: this is not PKCS#5 (nor PBKDF1/PBKDF2); this is an incompatible method from PKCS#12 that notably does not use HMAC. The 'pkcs12_hashfunc' field defines the underlying hash function. The following values have been identified: PBA - Password-based authentication The user accessing the archive will be authenticated by deriving an 8-byte sequence from his/her password. The parameters for the derivation function are: The derivation is checked against 'pba_checksum'. PBE - Password-based encryption Once the user is identified, 2 new values are derived from the password with different parameters to produce the IV and the key decryption key, with the same hash function: The parameters specific to the user key are: The user key needs to be truncated to a length of 'encryption_strength', as specified in bytes in the archive properties. The parameters specific to the user IV are: Once the key decryption key and the IV are derived, 'files_key_ciphertext' is decrypted using AES CBC, with PKCS#7 padding. Identifying file streams The name of the MS-CFB stream is derived by shuffling the bytes from the 'file_id' field and then encoding the result as hexadecimal. The reordering is:
Initial  offset: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Shuffled offset: 3 2 1 0 5 4 7 6 8 9 10 11 12 13 14 15
The 16th byte is usually a NUL byte, hence the stream identifier is a 30-character-long string. Decrypting files The compressed stream is split in chunks of 512 bytes, each of them encrypted separately using AES CBC and the global archive encryption scheme. Decryption uses the global AES key (retrieved using the user credentials), and the global IV (retrieved from the deobfuscated archive metadata). The IV for each chunk is computed by: Each chunk is an independent stream and the decryption process involves end-of-stream handling even if this is not the end of the actual file. This is particularly important for the CTS handler. Note: this is not to be confused with the CTR block cipher mode of operation, which operates differently and requires a nonce. Decompressing files Compressed streams are zlib streams with default compression options and can be decompressed following the zlib format. Test cases Excluded for brevity, cf. https://www.beuc.net/zed/#test-cases. Conventions and references Feedback Feel free to send comments to beuc@beuc.net. If you have .zed files that you think are not covered by this document, please send them as well (replace sensitive files with other ones). The author's GPG key can be found at 8FF1CB6E8D89059F. Copyright (C) 2017 Sylvain Beucler Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without any warranty.
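To make the stream-name derivation concrete, here is a minimal Python sketch of the byte shuffle and hexadecimal encoding described above. The function name is mine, and the handling of the trailing NUL byte is my reading of the 30-character observation, not something stated explicitly in the format description.
# Derive the MS-CFB stream name from the 16-byte 'file_id' using the shuffle
# table above: offsets 3 2 1 0 5 4 7 6 8 9 10 11 12 13 14 15.
SHUFFLE = [3, 2, 1, 0, 5, 4, 7, 6, 8, 9, 10, 11, 12, 13, 14, 15]

def stream_name(file_id: bytes) -> str:
    """Shuffle the bytes of 'file_id' and hex-encode the result."""
    assert len(file_id) == 16
    shuffled = bytes(file_id[i] for i in SHUFFLE)
    # The 16th byte is usually NUL; dropping it gives the 30-character name.
    if shuffled[-1] == 0:
        shuffled = shuffled[:-1]
    return shuffled.hex()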

Sylvain Beucler: dot-zed archive file format

TL;DR: I reverse-engineered the .zed encrypted archive format.
Following a clean-room design, I'm providing a description that can be implemented by a third-party.
Interested? :) (reference version at: https://www.beuc.net/zed/) .zed archive file format Introduction Archives with the .zed extension are conceptually similar to an encrypted .zip file. In addition to a specific format, .zed files support multiple users: files are encrypted using the archive master key, which itself is encrypted for each user and/or authentication method (password, RSA key through certificate or PKCS#11 token). Metadata such as filenames is partially encrypted. .zed archives are used stand-alone or attached to e-mails with the help of an MS Outlook plugin. A variant, which is not covered here, can encrypt/decrypt MS Windows folders on the fly like ecryptfs. In the spirit of academic and independent research, this document provides a description of the file format and encryption algorithms for this encrypted file archive. See the conventions section for conventions and acronyms used in this document. Structure overview The .zed file format is composed of several nested layers, shown in the following diagram:
+----------------------------------------------------------------------------------------------+
| .zed archive (MS-CFB)                                                                         |
|                                                                                               |
|  stream #1                        stream #2                     stream #3...                  |
| +------------------------------+ +---------------------------+ +---------------------------+ |
| | metadata (MS-OLEPS)          | | encryption (AES)          | | encryption (AES)          | |
| |                              | | 512-bytes chunks          | | 512-bytes chunks          | |
| | +--------------------------+ | |                           | |                           | |
| | | obfuscation (static key) | | | +-----------------------+ | | +-----------------------+ | |
| | | +----------------------+ | | | | compression (zlib)    | | | | compression (zlib)    | | |
| | | | _ctlfile (TLV)       | | | | |                       | | | |                       | | |
| | | +----------------------+ | | | | +---------------+     | | | | +---------------+     | | |
| | +--------------------------+ | | | | file contents |     | | | | | file contents |     | | |
| | +--------------------------+ | | | +---------------+     | | | | +---------------+     | | |
| | | _catalog (TLV)           | | | +-----------------------+ | | +-----------------------+ | |
| | +--------------------------+ | |                           | |                           | |
| +------------------------------+ +---------------------------+ +---------------------------+ |
+----------------------------------------------------------------------------------------------+
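As a rough illustration of how these layers nest, here is a minimal Python sketch that peels a single file stream back to its cleartext. It is not part of the original specification: it assumes the third-party olefile and cryptography libraries, it assumes the archive AES key and the per-chunk IVs have already been recovered as described later in this entry, and the chunk_iv_for helper is hypothetical.

import zlib

import olefile  # third-party MS-CFB (OLE compound file) reader
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def decrypt_file_stream(archive_path, stream_name, files_key, chunk_iv_for):
    """Sketch: MS-CFB stream -> 512-byte AES-CBC chunks -> zlib cleartext.

    'files_key' is the archive AES key and 'chunk_iv_for(index)' is a
    hypothetical callback returning the IV of chunk number 'index'; both
    depend on steps described later in this document.  The STREAM/CTS
    end-of-stream handling of the last partial block is omitted here.
    """
    ole = olefile.OleFileIO(archive_path)
    try:
        ciphertext = ole.openstream(stream_name).read()
    finally:
        ole.close()

    compressed = b""
    for index, offset in enumerate(range(0, len(ciphertext), 512)):
        chunk = ciphertext[offset:offset + 512]
        decryptor = Cipher(algorithms.AES(files_key),
                           modes.CBC(chunk_iv_for(index))).decryptor()
        compressed += decryptor.update(chunk) + decryptor.finalize()

    # Compressed streams are plain zlib streams with default options.
    return zlib.decompress(compressed)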
Encryption schemes Several AES key sizes are supported, such as 128 and 256 bits. The Cipher Block Chaining (CBC) block cipher mode of operation is used to decrypt multiple AES 16-byte blocks, which means an initialisation vector (IV) is stored in the clear along with the ciphertext. All filenames and file contents are encrypted using the same encryption mode, key and IV (e.g. if you remove and re-add a file in the archive, the resulting stream will be identical). No cleartext padding is used during encryption; instead, several end-of-stream handlers are available, so the ciphertext has exactly the size of the cleartext (e.g. the size of the compressed file). The following variants were identified in the 'encryption_mode' field. STREAM This is the end-of-stream handler for: This end-of-stream handler is apparently specific to the .zed format, and applied when the cleartext does not end on a 16-byte boundary; in this case special processing is performed on the last partial 16-byte block. The encryption and decryption phases are identical: let's assume the last partial block of cleartext (for encryption) or ciphertext (for decryption) was appended after all the complete 16-byte blocks of ciphertext: In either case, if the full ciphertext is less than one AES block (< 16 bytes), then the IV is used instead of the second-to-last block. CTS CTS or CipherText Stealing is the end-of-stream handler for: It matches the CBC-CS3 variant as described in Recommendation for Block Cipher Modes of Operation: Three Variants of Ciphertext Stealing for CBC Mode. Empty cleartext Since empty filenames or metadata are invalid, and since all files are compressed (resulting in a minimum 8-byte zlib cleartext), no empty cleartext was encrypted in the archive. metadata stream It is named 05356861616161716149656b7a6565636e576a33317a7868304e63 (hexadecimal), i.e. the character with code 5 followed by '5haaaaqaIekzeecnWj31zxh0Nc' (ASCII). The format used is OLE Property Set (MS-OLEPS). It introduces 2 property names "_ctlfile" (index 3) and "_catalog" (index 4), and 2 instances of said properties, each containing an application-specific VT_BLOB (type 0x0041). _ctlfile: obfuscated global properties and access list This subpart is stored under index 3 ("_ctlfile") of the MS-OLEPS metadata. It consists of: The ciphertext is encrypted with AES-CBC "STREAM" mode using the 128-bit static key 37F13CF81C780AF26B6A52654F794AEF (hexadecimal) and the prepended IV so as to obfuscate the access list. The ciphertext is continuous and not split into chunks (unlike files), even when it is larger than 512 bytes. The decrypted text contains properties in a TLV format as described in _ctlfile TLV: Archives may include "mandatory" users that cannot be removed. They are typically used to add an enterprise-wide recovery RSA key to all archives. Extreme care must be taken to protect these keys, as they can decrypt all past archives generated from within that company. _catalog: file list This subpart is stored under index 4 ("_catalog") of the MS-OLEPS metadata. It contains a series of 'fileprops' TLV structures, one for each file or directory. The file hierarchy can be reconstructed by checking the 'parent_id' field of each file entry. If 'parent_id' is 0 then the file is located at the top level of the hierarchy, otherwise it's located under the directory with the matching 'file_id'. TLV format This format is a series of fields: Value semantics depend on its Type.
It may contain a uint32be integer, a UTF-16LE string, a character sequence, or an inner TLV structure. Unless otherwise noted, TLV structures appear once. Some fields are optional and may not be present at all (e.g. 'archive_createdwith'). Some fields are unique within a structure (e.g. 'files_iv'), others may be repeated within a structure to form a list (e.g. 'fileprops' and 'passworduser'). The following top-level types have been identified, and are detailed in the next sections: Some additional unidentified types may be present. _ctlfile TLV _catalog TLV Decrypting the archive AES key rsauser The user accessing the archive will be authenticated by comparing his/her X.509 certificate with the one stored in the 'certificate' field using DER format. The 'files_key_ciphertext' field is then decrypted using the PKCS#1 v1.5 encryption mechanism, with the private key that matches the user certificate. passworduser An intermediary user key, a user IV and an integrity checksum will be derived from the user password, using the deprecated PKCS#12 method as described in RFC 7292 appendix B. Note: this is not PKCS#5 (nor PBKDF1/PBKDF2); this is an incompatible method from PKCS#12 that notably does not use HMAC. The 'pkcs12_hashfunc' field defines the underlying hash function. The following values have been identified: PBA - Password-based authentication The user accessing the archive will be authenticated by deriving an 8-byte sequence from his/her password. The parameters for the derivation function are: The derivation is checked against 'pba_checksum'. PBE - Password-based encryption Once the user is identified, 2 new values are derived from the password with different parameters to produce the IV and the key decryption key, with the same hash function: The parameters specific to the user key are: The user key needs to be truncated to a length of 'encryption_strength', as specified in bytes in the archive properties. The parameters specific to the user IV are: Once the key decryption key and the IV are derived, 'files_key_ciphertext' is decrypted using AES-CBC with PKCS#7 padding. Identifying file streams The name of the MS-CFB stream is derived by shuffling the bytes from the 'file_id' field and then encoding the result as hexadecimal. The reordering is:
Initial  offset: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Shuffled offset: 3 2 1 0 5 4 7 6 8 9 10 11 12 13 14 15
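For illustration, here is a small Python sketch of that reordering (the permutation is its own inverse, so the same table also maps stream-name offsets back to 'file_id' offsets); the helper name stream_name_from_file_id is made up for this example and is not part of the format.

# For each output position, the source offset in the 16-byte 'file_id' field.
SHUFFLE = (3, 2, 1, 0, 5, 4, 7, 6, 8, 9, 10, 11, 12, 13, 14, 15)

def stream_name_from_file_id(file_id: bytes) -> str:
    """Hypothetical helper: shuffle 'file_id', then hex-encode the result."""
    assert len(file_id) == 16
    shuffled = bytes(file_id[i] for i in SHUFFLE)
    if shuffled.endswith(b"\x00"):   # usual case: drop the trailing NUL byte,
        shuffled = shuffled[:-1]     # which yields a 30-character identifier
    return shuffled.hex()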
The 16th byte is usually a NUL byte, hence the stream identifier is a 30-character-long string. Decrypting files The compressed stream is split into chunks of 512 bytes, each of them encrypted separately using AES-CBC and the global archive encryption scheme. Decryption uses the global AES key (retrieved using the user credentials), and the global IV (retrieved from the deobfuscated archive metadata). The IV for each chunk is computed by: Each chunk is an independent stream and the decryption process involves end-of-stream handling even if this is not the end of the actual file. This is particularly important for the CTS handler. Note: this is not to be confused with the CTR block cipher mode of operation, which operates differently and requires a nonce. Decompressing files Compressed streams are zlib streams with default compression options and can be decompressed following the zlib format. Test cases Excluded for brevity, cf. https://www.beuc.net/zed/#test-cases. Conventions and references Feedback Feel free to send comments to beuc@beuc.net. If you have .zed files that you think are not covered by this document, please send them as well (replace sensitive files with other ones). The author's GPG key can be found at 8FF1CB6E8D89059F. Copyright (C) 2017 Sylvain Beucler. Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without any warranty.

18 June 2017

Simon Josefsson: OpenPGP smartcard under GNOME on Debian 9.0 Stretch

I installed Debian 9.0 Stretch on my Lenovo X201 laptop today. Installation went smoothly, as usual. GnuPG/SSH with an OpenPGP smartcard (I use a YubiKey NEO) does not work out of the box with GNOME though. I wrote about how to fix OpenPGP smartcards under GNOME with Debian 8.0 Jessie earlier, and I thought I'd do a similar blog post for Debian 9.0 Stretch. The situation is slightly different than before (e.g., GnuPG works better but SSH doesn't) so there is some progress. May I hope that Debian 10.0 Buster gets this right? Pointers to which package in Debian should have a bug report tracking this issue are welcome (or a pointer to an existing bug report). After first login, I attempt to use gpg --card-status to check if GnuPG can talk to the smartcard.
jas@latte:~$ gpg --card-status
gpg: error getting version from 'scdaemon': No SmartCard daemon
gpg: OpenPGP card not available: No SmartCard daemon
jas@latte:~$ 
This fails because scdaemon is not installed. Isn't a smartcard common enough that it should be installed by default on a GNOME Desktop Debian installation? Anyway, install it as follows.
root@latte:~# apt-get install scdaemon
Then try again.
jas@latte:~$ gpg --card-status
gpg: selecting openpgp failed: No such device
gpg: OpenPGP card not available: No such device
jas@latte:~$ 
I believe scdaemon here attempts to use its internal CCID implementation, and I do not know why it does not work. At this point I often recall that I want pcscd installed since I work with smartcards in general.
root@latte:~# apt-get install pcscd
Now gpg --card-status works!
jas@latte:~$ gpg --card-status
Reader ...........: Yubico Yubikey NEO CCID 00 00
Application ID ...: D2760001240102000006017403230000
Version ..........: 2.0
Manufacturer .....: Yubico
Serial number ....: 01740323
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Sex ..............: male
URL of public key : https://josefsson.org/54265e8c.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 8358
Signature key ....: 9941 5CE1 905D 0E55 A9F8  8026 860B 7FBB 32F8 119D
      created ....: 2014-06-22 19:19:04
Encryption key....: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B
      created ....: 2014-06-22 19:19:20
Authentication key: 2E08 856F 4B22 2148 A40A  3E45 AF66 08D7 36BA 8F9B
      created ....: 2014-06-22 19:19:41
General key info..: sub  rsa2048/860B7FBB32F8119D 2014-06-22 Simon Josefsson 
sec#  rsa3744/0664A76954265E8C  created: 2014-06-22  expires: 2017-09-04
ssb>  rsa2048/860B7FBB32F8119D  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/9535162A78ECD86B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/AF6608D736BA8F9B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
jas@latte:~$ 
Using the key will not work though.
jas@latte:~$ echo foo | gpg -a --sign
gpg: no default secret key: No secret key
gpg: signing failed: No secret key
jas@latte:~$ 
This is because the public key and the secret key stub are not available.
jas@latte:~$ gpg --list-keys
jas@latte:~$ gpg --list-secret-keys
jas@latte:~$ 
You need to import the key for this to work. I have some vague memory that gpg --card-status was supposed to do this, but I may be wrong.
jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or directory
gpg: connecting dirmngr at '/run/user/1000/gnupg/S.dirmngr' failed: No such file or directory
gpg: keyserver receive failed: No dirmngr
jas@latte:~$ 
Surprisingly, dirmngr is also not shipped by default so it has to be installed manually.
root@latte:~# apt-get install dirmngr
Below I proceed to trust the clouds to find my key.
jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: key 0664A76954265E8C: public key "Simon Josefsson " imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1
jas@latte:~$ 
Now the public key and the secret key stub are available locally.
jas@latte:~$ gpg --list-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
pub   rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
sub   rsa2048 2014-06-22 [S] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [E] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [A] [expires: 2017-09-04]
jas@latte:~$ gpg --list-secret-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
sec#  rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
ssb>  rsa2048 2014-06-22 [S] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [E] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [A] [expires: 2017-09-04]
jas@latte:~$ 
I am now able to sign data with the smartcard, yay!
jas@latte:~$ echo foo | gpg -a --sign
-----BEGIN PGP MESSAGE-----
owGbwMvMwMHYxl2/2+iH4FzG01xJDJFu3+XT8vO5OhmNWRgYORhkxRRZZjrGPJwQ
yxe68keDGkwxKxNIJQMXpwBMRJGd/a98NMPJQt6jaoyO9yUVlmS7s7qm+Kjwr53G
uq9wQ+z+/kOdk9w4Q39+SMvc+mEV72kuH9WaW9bVqj80jN77hUbfTn5mffu2/aVL
h/IneTfaOQaukHij/P8A0//Phg/maWbONUjjySrl+a3tP8ll6/oeCd8g/aeTlH79
i0naanjW4bjv9wnvGuN+LPHLmhUc2zvZdyK3xttN/roHvsdX3f53yTAxeInvXZmd
x7W0/hVPX33Y4nT877T/ak4L057IBSavaPVcf4yhglVI8XuGgaTP666Wuslbliy4
5W5eLasbd33Xd/W0hTINznuz0kJ4r1bLHZW9fvjLduMPq5rS2co9tvW8nX9rhZ/D
zycu/QA=
=I8rt
-----END PGP MESSAGE-----
jas@latte:~$ 
Encrypting to myself will not work smoothly though.
jas@latte:~$ echo foo | gpg -a --encrypt -r simon@josefsson.org
gpg: 9535162A78ECD86B: There is no assurance this key belongs to the named user
sub  rsa2048/9535162A78ECD86B 2014-06-22 Simon Josefsson 
 Primary key fingerprint: 9AA9 BDB1 1BB1 B99A 2128  5A33 0664 A769 5426 5E8C
      Subkey fingerprint: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B
It is NOT certain that the key belongs to the person named
in the user ID.  If you *really* know what you are doing,
you may answer the next question with yes.
Use this key anyway? (y/N) 
gpg: signal Interrupt caught ... exiting
jas@latte:~$ 
The reason is that the newly imported key has unknown trust settings. I update the trust settings on my key to fix this, and encrypting now works without a prompt.
jas@latte:~$ gpg --edit-key 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Secret key is available.
pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 
gpg> trust
pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 
Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)
  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu
Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y
pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: ultimate      validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 
Please note that the shown key validity is not necessarily correct
unless you restart the program.
gpg> quit
jas@latte:~$ echo foo | gpg -a --encrypt -r simon@josefsson.org
-----BEGIN PGP MESSAGE-----
hQEMA5U1Fip47NhrAQgArTvAykj/YRhWVuXb6nzeEigtlvKFSmGHmbNkJgF5+r1/
/hWENR72wsb1L0ROaLIjM3iIwNmyBURMiG+xV8ZE03VNbJdORW+S0fO6Ck4FaIj8
iL2/CXyp1obq1xCeYjdPf2nrz/P2Evu69s1K2/0i9y2KOK+0+u9fEGdAge8Gup6y
PWFDFkNj2YiVa383BqJ+kV51tfquw+T4y5MfVWBoHlhm46GgwjIxXiI+uBa655IM
EgwrONcZTbAWSV4/ShhR9ug9AzGIJgpu9x8k2i+yKcBsgAh/+d8v7joUaPRZlGIr
kim217hpA3/VLIFxTTkkm/BO1KWBlblxvVaL3RZDDNI5AVp0SASswqBqT3W5ew+K
nKdQ6UTMhEFe8xddsLjkI9+AzHfiuDCDxnxNgI1haI6obp9eeouGXUKG
=s6kt
-----END PGP MESSAGE-----
jas@latte:~$ 
So everything is fine, isn't it? Alas, not quite.
jas@latte:~$ ssh-add -L
The agent has no identities.
jas@latte:~$ 
Tracking this down, I now realize that GNOME's keyring is used for SSH but GnuPG's gpg-agent is used for GnuPG. GnuPG uses the environment variable GPG_AGENT_INFO to connect to an agent, and SSH uses the SSH_AUTH_SOCK environment variable to find its agent. The filenames used below leak the knowledge that gpg-agent is used for GnuPG but GNOME keyring is used for SSH.
jas@latte:~$ echo $GPG_AGENT_INFO 
/run/user/1000/gnupg/S.gpg-agent:0:1
jas@latte:~$ echo $SSH_AUTH_SOCK 
/run/user/1000/keyring/ssh
jas@latte:~$ 
Here the same recipe as in my previous blog post works. This time GNOME keyring only has to be disabled for SSH. Disabling GNOME keyring is not sufficient; you also need gpg-agent to start with enable-ssh-support. The simplest way to achieve that is to add a line in ~/.gnupg/gpg-agent.conf as follows. When you log in, the script /etc/X11/Xsession.d/90gpg-agent will set the environment variables GPG_AGENT_INFO and SSH_AUTH_SOCK. The latter variable is only set if enable-ssh-support is mentioned in the gpg-agent configuration.
jas@latte:~$ mkdir ~/.config/autostart
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-ssh.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-ssh.desktop 
jas@latte:~$ echo enable-ssh-support >> ~/.gnupg/gpg-agent.conf 
jas@latte:~$ 
Log out from GNOME and log in again. Now you should see ssh-add -L working.
jas@latte:~$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFP+UOTZJ+OXydpmbKmdGOVoJJz8se7lMs139T+TNLryk3EEWF+GqbB4VgzxzrGjwAMSjeQkAMb7Sbn+VpbJf1JDPFBHoYJQmg6CX4kFRaGZT6DHbYjgia59WkdkEYTtB7KPkbFWleo/RZT2u3f8eTedrP7dhSX0azN0lDuu/wBrwedzSV+AiPr10rQaCTp1V8sKbhz5ryOXHQW0Gcps6JraRzMW+ooKFX3lPq0pZa7qL9F6sE4sDFvtOdbRJoZS1b88aZrENGx8KSrcMzARq9UBn1plsEG4/3BRv/BgHHaF+d97by52R0VVyIXpLlkdp1Uk4D9cQptgaH4UAyI1vr cardno:000601740323
jas@latte:~$ 
Topics for further discussion or research include 1) whether scdaemon, dirmngr and/or pcscd should be pre-installed on Debian desktop systems; 2) whether gpg --card-status should attempt to import the public key and secret key stub automatically; 3) why GNOME keyring is used by default for SSH rather than gpg-agent; 4) whether GNOME keyring should support smartcards, or if it is better to always use gpg-agent for GnuPG/SSH; and 5) if something could/should be done to automatically infer the trust setting for a secret key. Enjoy!

2 June 2017

Evgeni Golov: Breaking glass, OnePlus service and Android backups

While visiting our Raleigh office, I managed to crack the glass on the screen of my OnePlus 3. Luckily it was a clean crack from the upper left corner to the lower right one. The crack was not really interfering with either touch or display, so I was not under much pressure to fix it. eBay lists new LCD sets for 110-130 €, and those still require the manual work of getting the LCD assembly out of the case, replacing it, etc. There are also glass-only sets for ~20 €, but these require the complete removal of the glued glass part from the screen and reattaching it, nothing you want to do at home. But there is also still the vendor, who can fix it, right? The Internet suggested they would do it for about 100 €, which seemed fair. As people have been asking about the support experience, here is a quick write-up of what happened:
  • Opened the RMA request online on Sunday, providing a brief description of the issue and some photos
  • Monday morning answer from the support team, confirming this is way out of warranty, but I can get the device fixed for about 93 €
  • After confirming that the extra cost is expected, I had a UPS sticker to ship the device to CTDI in Poland
  • UPS even tried a pick-up on Tuesday, but I was not properly prepared, so I dropped the device later at a local UPS point
  • It arrived in Poland on Wednesday
  • On Thursday the device was inspected, pictures taken, etc.
  • Friday morning I had a quote in my inbox, asking me to pay 105 € - the service partner decided to replace the front camera too, which was not part of the original 93 € estimate.
  • Paid the money with my credit card and started waiting.
  • The actual repair happened on Monday.
  • Quality controlled on Tuesday.
  • Shipped to me on Wednesday.
  • Arrived at my door on Thursday.
All in all 9 working days, which is not great, but good enough IMHO. And the repair is good, and it was not (too) expensive. So I am a happy user of a OnePlus 3 again. Well, almost. Before sending the device for repairs, I had to take a backup and wipe it. I would not send it with my, even encrypted, data on it. And backups and Android are something special. Android will back up certain data to Google, if you allow it to. Apps can forbid that. Sadly this also blocks non-cloud backups with adb backup. So to properly back up your system, you either need root or you create a full backup of the system in the recovery and restore that. I did the backup using TWRP, transferred it to my laptop, wiped the device, sent it in, got it back, copied the backup to the phone, restored it and... was locked out of the device; it would not take my password anymore. Well, it seems that happens; just delete some files and it will be fine. It's 2017, are backups of mobile devices really supposed to be that hard?!

31 May 2017

Lars Wirzenius: Using a Yubikey 4 for ensafening one's encryption

Introduction I've written before about using a U2F key with PAM. This post continues the theme and explains how to use a smartcard with GnuPG for storing OpenPGP private keys. Specifically, a Yubikey 4 card, because that's what I have, but any good GnuPG-compatible card should work. The Yubikey is both a GnuPG-compatible smartcard and a U2F card. The Yubikey 4 can handle keys up to 4096 bits. Older Yubikeys can only handle keys up to 2048 bits. The reason to do this is to make it harder for an attacker to steal your encryption keys. I will assume you don't already have an OpenPGP key, or that you are willing to generate a new one. I will also assume you run Debian stretch; some of the desktop environment setup details may differ between Debian versions or between Linux distributions. You will need: Terminology Some terminology: Outline The process outline is:
  1. Create a new, signing-only master key with GnuPG.
  2. Create three "subkeys", one each for encryption, signing, and authentication. These subkeys are what everyone else uses.
  3. Export copies of the master key pair and the subkey pairs and put them in a safe place.
  4. Put the subkeys on the Yubikey.
  5. GnuPG will automatically use the keys from the card. You have to have the card plugged into a USB port for things to work. If someone steals your laptop, they won't get the private subkeys. Even if they steal your Yubikey, they won't get them (the smartcard is physically designed to prevent that), and can't even use them (because there's PIN codes or passphrases and getting them wrong several times locks up the smartcard).
  6. Use gpg-agent as your SSH agent, and the authentication-only subkey on the Yubikey is used as your ssh key.
Configure GnuPG The process in more detail: Create new keys
$ gpg --full-generate-key
gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
(1) RSA and RSA (default)
(2) DSA and Elgamal
(3) DSA (sign only)
(4) RSA (sign only)
Your selection? 4
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
0 = key does not expire
<n>  = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Tue 29 May 2018 06:43:54 PM EEST
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: Lars Wirzenius
Email address: liw@liw.fi
Comment: test key
You selected this USER-ID:
"Lars Wirzenius (test key) <liw@liw.fi>>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? o
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key 25FB738D6EE435F7 marked as ultimately trusted
gpg: directory '/home/liw/.gnupg/openpgp-revocs.d' created
gpg: revocation certificate stored as '/home/liw/.gnupg/openpgp-revocs.d/A734C10BF2DF39D19DC0F6C025FB738D6EE435F7.rev'
public and secret key created and signed.

Note that this key cannot be used for encryption. You may want to use
the command "--edit-key" to generate a subkey for this purpose.
pub rsa4096 2017-05-29 [SC] [expires: 2018-05-29]
A734C10BF2DF39D19DC0F6C025FB738D6EE435F7
A734C10BF2DF39D19DC0F6C025FB738D6EE435F7
uid Lars Wirzenius (test key) <liw@liw.fi>
  • Note that I set a 1-year expiration for the key. The expiration can be extended at any time (if you have the master secret key), but unless you extend it, the key won't accidentally live longer than the chosen time.
  • Review the key:
$ gpg --list-secret-keys
/home/liw/.gnupg/pubring.kbx
----------------------------
sec rsa4096 2017-05-29 [SC] [expires: 2018-05-29]
A734C10BF2DF39D19DC0F6C025FB738D6EE435F7
uid [ultimate] Lars Wirzenius (test key) <liw@liw.fi>
  • You now have the signing-only master key. You should now create three subkeys (keyid is the key identifier shown in the key listing, A734C10BF2DF39D19DC0F6C025FB738D6EE435F7 above). Use the --expert option to be able to add an authentication-only subkey.
$ gpg --edit-key --expert A734C10BF2DF39D19DC0F6C025FB738D6EE435F7
gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

sec rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> addkey
Please select what kind of key you want:
(3) DSA (sign only)
(4) RSA (sign only)
(5) Elgamal (encrypt only)
(6) RSA (encrypt only)
(7) DSA (set your own capabilities)
(8) RSA (set your own capabilities)
(10) ECC (sign only)
(11) ECC (set your own capabilities)
(12) ECC (encrypt only)
(13) Existing key
Your selection? 4
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
0 = key does not expire
<n>  = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Tue 29 May 2018 06:44:52 PM EEST
Is this correct? (y/N) y
Really create? (y/N) y
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

sec rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> addkey
Please select what kind of key you want:
(3) DSA (sign only)
(4) RSA (sign only)
(5) Elgamal (encrypt only)
(6) RSA (encrypt only)
(7) DSA (set your own capabilities)
(8) RSA (set your own capabilities)
(10) ECC (sign only)
(11) ECC (set your own capabilities)
(12) ECC (encrypt only)
(13) Existing key
Your selection? 6
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
0 = key does not expire
<n>  = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Tue 29 May 2018 06:45:22 PM EEST
Is this correct? (y/N) y
Really create? (y/N) y
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

sec rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> addkey
Please select what kind of key you want:
(3) DSA (sign only)
(4) RSA (sign only)
(5) Elgamal (encrypt only)
(6) RSA (encrypt only)
(7) DSA (set your own capabilities)
(8) RSA (set your own capabilities)
(10) ECC (sign only)
(11) ECC (set your own capabilities)
(12) ECC (encrypt only)
(13) Existing key
Your selection? 8

Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions: Sign Encrypt

(S) Toggle the sign capability
(E) Toggle the encrypt capability
(A) Toggle the authenticate capability
(Q) Finished

Your selection? a

Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions: Sign Encrypt Authenticate

(S) Toggle the sign capability
(E) Toggle the encrypt capability
(A) Toggle the authenticate capability
(Q) Finished

Your selection? s

Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions: Encrypt Authenticate

(S) Toggle the sign capability
(E) Toggle the encrypt capability
(A) Toggle the authenticate capability
(Q) Finished

Your selection? e

Possible actions for a RSA key: Sign Encrypt Authenticate
Current allowed actions: Authenticate

(S) Toggle the sign capability
(E) Toggle the encrypt capability
(A) Toggle the authenticate capability
(Q) Finished

Your selection? q
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
0 = key does not expire
<n>  = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Tue 29 May 2018 06:45:56 PM EEST
Is this correct? (y/N) y
Really create? (y/N) y
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

sec rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> save
Export secret keys to files, make a backup
  • You now have a master key and three subkeys. They are hidden in the ~/.gnupg directory. It is time to "export" the secret keys out from there.
$ gpg --export-secret-key --armor keyid > master.key
$ gpg --export-secret-subkeys --armor keyid > subkeys.key
  • You should keep these files safe. You don't want to lose them, and you don't want anyone else to get access to them. I recommend you get two USB memory sticks, format them using full-disk encryption, and copy the exported files to both of them. Then keep them somewhere safe. There are ways of making this part more sophisticated, but that's for another time.
  • The next step involves some hoop-jumping. What we want is to have the master secret key NOT on your machine, so we tell GnuPG to remove it. We exported it above, so we won't lose it. However, deleting the master secret key also removes the secret subkeys. But we can import those without importing the master secret key.
$ gpg --delete-secret-key keyid
$ gpg --import subkeys.key
  • Now verify that you have the secret subkeys, but not the master key. There should be one line starting with sec# (note the hash mark, which indicates the key isn't available), and three lines starting with ssb (no hash mark).
$ gpg -K
/home/liw/.gnupg/pubring.kbx
----------------------------
sec# rsa4096 2017-05-29 [SC] [expires: 2018-05-29]
A734C10BF2DF39D19DC0F6C025FB738D6EE435F7
uid [ultimate] Lars Wirzenius (test key) <liw@liw.fi>
ssb rsa4096 2017-05-29 [S] [expires: 2018-05-29]
ssb rsa4096 2017-05-29 [E] [expires: 2018-05-29]
ssb rsa4096 2017-05-29 [A] [expires: 2018-05-29]
Install subkeys on a Yubikey
  • Now insert the Yubikey in a USB slot. We can start transferring the secret subkeys to the Yubikey. If you want, you can set your name and other information, and change PIN codes. There are several types of PIN codes: normal use, unblocking a locked card, and a third PIN code for admin operations. Changing the PIN codes is a good idea; otherwise everyone will just try the default of 123456 (admin 12345678). However, I'm skipping that in the interest of brevity.
$ gpg --card-edit
...
  • Actually move the subkeys to the card. Note that this does a move, not a copy, and the subkeys will be removed from your ~/.gnupg (check with gpg -K).
$ gpg --edit-key liw
gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> key 1

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb* rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> keytocard
Please select where to store the key:
(1) Signature key
(3) Authentication key
Your selection? 1

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb* rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> key 1

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> key 2

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb* rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> keytocard
Please select where to store the key:
(2) Encryption key
Your selection? 2

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb* rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> key 2

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> key 3

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb* rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> keytocard
Please select where to store the key:
(3) Authentication key
Your selection? 3

pub rsa4096/25FB738D6EE435F7
created: 2017-05-29 expires: 2018-05-29 usage: SC
trust: ultimate validity: ultimate
ssb rsa4096/05F88308DFB71774
created: 2017-05-29 expires: 2018-05-29 usage: S
ssb rsa4096/2929E8A96CBA57C7
created: 2017-05-29 expires: 2018-05-29 usage: E
ssb* rsa4096/4477EB0AEF1C440A
created: 2017-05-29 expires: 2018-05-29 usage: A
[ultimate] (1). Lars Wirzenius (test key) <liw@liw.fi>

gpg> save
  • If you want to use several Yubikeys, or have a spare one just in case, repeat the previous four steps (starting from importing subkeys back into ~/.gnupg).
  • You're now done, as far as GnuPG use is concerned. Any time you need to sign, encrypt, or decrypt something, GnuPG will look for your subkeys on the Yubikey, and will tell you to insert it in a USB port if it can't find the key.
Use subkey on Yubikey as your SSH key
  • To actually use the authentication-only subkey on the Yubikey for ssh, you need to configure your system to use gpg-agent as the SSH agent. Add the following line to .gnupg/gpg-agent.conf:
     enable-ssh-support
    
  • On a Debian stretch system with GNOME, edit /etc/xdg/autostart/gnome-keyring-ssh.desktop to have the following line, to prevent the GNOME ssh agent from starting up:
     Hidden=true
    
  • Edit /etc/X11/Xsession.options and remove or comment out the line that says use-ssh-agent. This stops a system-started ssh-agent from being started when the desktop starts.
  • Create the file ~/.config/autostart/gpg-agent.desktop with the following content:
     [Desktop Entry]
     Type=Application
     Name=gpg-agent
     Comment=gpg-agent
     Exec=/usr/bin/gpg-agent --daemon
     OnlyShowIn=GNOME;Unity;MATE;
     X-GNOME-Autostart-Phase=PreDisplayServer
     X-GNOME-AutoRestart=false
     X-GNOME-Autostart-Notify=true
     X-GNOME-Bugzilla-Bugzilla=GNOME
     X-GNOME-Bugzilla-Product=gnome-keyring
     X-GNOME-Bugzilla-Component=general
     X-GNOME-Bugzilla-Version=3.20.0
    
  • To test, log out and back in again, then run the following in a terminal:
$ ssh-add -l
The output should contain a line that looks like this:
    4096 SHA256:PDCzyQPpd9tiWsELM8LwaLBsMDMm42J8/eEfezNgnVc cardno:000604626953 (RSA)
  • You need to export the authentication-only subkey in the SSH key format. You need this for adding to .ssh/authorized_keys, if nothing else.
$ gpg --export-ssh-key keyid > ssh.pub
  • Happy hacking.
See also the following links. I've used them to learn enough to write the above. Edited to fix:
  • Output of gpg -K after removing secret master key.

17 March 2017

Shirish Agarwal: Science Day at GMRT, Khodad 2017

The whole team posing at the end of day 2 The above picture is a blend of the two communities, the FOSS community and Mozilla India. And unless you were there, you wouldn't know who is from which community, which is what FOSS is all about. But as always I'm getting a bit ahead of myself. Akshat, who works at NCRA as a programmer (the standing guy on the left), shared with me in January this year that this year too we should have two stalls, the FOSS community and Mozilla India stalls, next to each other. While we had the banners, we were missing stickers and flyers. Funds were and are always an issue, and this year too it would have been emptier if we didn't have some money saved from last year's MiniDebConf 2016 that we had in Mumbai. Our major expenses included printing stickers, stationery and flyers, which came to around INR 5000/-, and a couple of LCD TV monitors, which came to around INR 2k/- as rent. All the labour was voluntary in nature, but both me and Akshat easily spent up to 100 hours before the event. Next year we want to raise around INR 10-15k so we can buy 1 or 2 LCD monitors and we don't have to think about funds for the next couple of years. How we will do that I have no idea at the moment. Printing leaflets Me and Akshat did all the printing and stationery runs and hence I had not been using my lappy for about 3-4 days. Come the evening before the event and the laptop would not start. Coincidentally or not, a few months ago, and even at last year's DebConf, people had commented on IBM/Lenovo's obsession with proprietary power cords and adaptors. I hadn't given it much thought, but when I got no power even after putting it on AC power for 3-4 hours, I looked it up on the web and saw that the power cords and power adaptors are all different, even for the T440 and even within existing models. In fact I couldn't find mine, hence sharing it via the pictures below. thinkpad power cord male thinkpad power adaptor female I knew/suspected that ThinkPads would be rare where I was going; it would be rarer still to find the exact power cord, and I was unsure whether the fault lay with the power cord, the adaptor, whatever goes for the SMPS in a laptop, the memory, or the motherboard/CPU itself. I did look up the documentation at support.lenovo.com and was surprised at the extensive documentation that Lenovo has for remote troubleshooting. I did the usual: take out the battery, put it back in, twiddle with the little hole in the bottom of the laptop, try to switch on without the battery on AC mains, try to switch on with battery power only, but nothing worked. A couple of hours had gone by, and with a resigned thought I went to bed, convincing myself that anyway it's good I am not taking the lappy, as it is extra-dusty there and who needs a dead laptop anyway. Update: After the event was over, I did contact Lenovo support and within a week, with one visit from a service engineer, he was able to identify that a faulty cable was at fault and not the other things which I was afraid of. Another week went by and Lenovo replaced the cable. Going by the service standards that I have seen from other companies, Lenovo deserves a gold star here for the prompt service they provided. I will probably end up subscribing to their extended 2-year warranty service when my existing 3-year warranty is about to be over. The next day I woke up early in the morning; two students from the COEP hostel were volunteering and we made our way to NCRA, Pune University Campus.
Ironically, though we were under the impression that we would be the late arrivals, it turned out we were the early birds. 5-10 minutes passed by and soon enough we were joined by Aniket, and we played catch-up for a while. We hadn't met each other for a while so it was good to catch up. Then slowly other people started coming in, and around 07:10-07:15 we started for GMRT, Khodad. Now I had been curious, as I had been hearing for years that the Pune-Nashik NH-50 highway would be concreted and widened into a six-lane highway, but the experience was below par. I came back and realized the proposal has now been pushed back to 2020. From the Mozilla team only Aniket was with us; the rest of the group was coming straight from Nashik. Interestingly, all six people who came came on bikes, which depending upon how you look at it was either brave or stupid. Travelling on bikes on Indian highways you have to be either brave or stupid or both; we have more than enough accidents due to the quality of road construction, road design, lane-changing drivers and many other issues. This is probably not the place for it, hence I will use some other blog post to rant about that. We reached around 10:00 hrs IST and hung around till lunch, as Akshat had all the marketing material, monitors etc. The only things we had were a couple of lappies and a couple of SBCs, an RPi 3 and a BBB. Aarti Kashyap sharing something about SBCs Our find for the event was Aarti Kashyap, whom you can see above. She is a third-year student at COEP and one of the rare people who chose to interact with hardware rather than software. For the last several years, we have been trying, successfully and unsuccessfully, to get more Indian women and girls interested in technology. It is a vicious circle: until a girl/woman volunteers we are unable to share our knowledge to the extent we can, which leads them to not have much interest in FOSS or even technology in general. While there are groups like Django Girls, PyLadies and Rails Girls, and even Outreachy, which try to motivate girls to get into computing, it's a long road ahead. We are short of both funds and ideas as to how to motivate more girls to get into computing and then to get into playing with hardware. I don't know where to start and end for whoever wants to play with hardware. From SBCs and routers to blade servers, the sky is the limit. Again this probably isn't the place for it, hence we can probably chew on it more in some other blog post. This year we had a lowish turnout due to the fact that the first paper of the 12th board exams was on the day we opened. So instead of 20-25k, we probably had 5-7k fewer people pass through. There were two or three things that we were showing: Debian on one of the systems, and the output from the SBCs on the other monitor, but the glare kept hitting the monitors. The organizers had done exemplary work over last year: they had taped the carpets to the ground so there was hardly any dust moving around. However, I wished the organizers had taken the pains to have two cloth roofs over our heads instead of just one; the other roof could be, say, 2 feet up. This would have done two things: a. it probably would have cooled the place a bit more, and b. we could get diffused sunlight, which would have lessened the glare and reflection the LCDs kept throwing back. At times we also got people to come to our side, as can be seen in Aarti's photo above.
If these improvements can be made for next year, everybody in our pandal would benefit, not just us and Mozilla. This would benefit around 10-15 organizations which were within the same temporary structure. Of course, it depends very much on the budget they are able to have and the people who are executing; we can just advise. The other thing which had been missing last year and this year is writing about Single Board Computers in Marathi. If we are to promote them as something to replace a computer, or something for a younger brother/sister to learn computing on at a lower cost, we need leaflets written in their language to be more effective. And this needs to be in the language and mannerisms that people in that region understand. India, as people might have experienced, is a dialect-prone country. This means that every 2-5 km, the way the language is spoken is different from anywhere else. The Marathi spoken by somebody who has lived in Ravivar Peth his whole life and by a person who has lived in, say, Kothrud is different. The same goes for any place, and this place, Khodad, Narayangaon, would have its own dialect, its own mini-codespeak. Just to share, we did have one in English, but it would have been a vast improvement if we could have done it in the local language. Maybe we can discuss this and ask for help from people. Outside, Looking in Mozillians helping the FOSS community and vice-versa What had been interesting about the whole journey were the new people who brought all their passion and creativity to the fore. From the Mozilla community we had Akshay, who is supposed to be a wizard at graphics, animation and editing, anything to do with the visual medium. He shared some of the work he had done and also shared a bit about how Blender works with people who wanted to learn about that. Mayur, whom you see in the picture, was pointing out something about FOSS, and this was the culture that we strove to have. I know and love and hate the browser, but I haven't been able to fathom the recklessness Mozilla has shown in the last few years, which has just been one misadventure after another. For instance, mozstumbler was an effort which I thought would go places. From what little I understood, it served/serves as a user-friendly interface for a potential user while still sharing all the data with OSM. They (Mozilla) seem/seemed to have a fatalistic take, as they provided initial funding but then never fully committed to the project. Later at night we had the whole free software and open-source sharing, where I tried to emphasize that without free software, the term open-source would not have come into existence. We talked and talked and somewhere around 02:00 I slept. The next day was an extension of the first day itself, where we ribbed each other good-naturedly and still shared whatever we could with each other. I do hope that we continue this tradition for a great many years to come and engage with more and more people every passing year.
Filed under: Miscellaneous Tagged: #budget, #COEP, #volunteering, #debian, #Events, #Expenses, #mozstumbler, #printing, #SBC's, #Science Day 2017, #thinkpad cable issue, FOSS, mozilla
