Search Results: "don"

23 November 2017

Sean Whitton: Using Propellor to provision your Debian development laptop

sbuild is a tool used by those maintaining packages in Debian, and derived distributions such as Ubuntu. When used correctly, it can catch many categories of bugs before packages are uploaded. It does this by building the package in a clean environment, and then running the package through the Lintian, piuparts, adequate and autopkgtest tools. However, configuring sbuild so that it makes use of all of these tools is cumbersome. In response to this complexity, I wrote a module for the Propellor configuration management system to prepare a system such that a user can just go ahead and run the sbuild(1) command. This module is useful on one's development laptop: if you need to reinstall your OS, you don't have to look up the instructions for setting up sbuild again. But it's also useful on throwaway build boxes. I can instruct propellor to provision a new virtual machine to build packages with sbuild, and all the different tools mentioned above will be connected together for me. I just uploaded Propellor version 5.1.0 to Debian unstable. This version overhauls the API and internals of the Sbuild module to take better advantage of Propellor's design. I won't get into those details in this post. What I'd like to do is demonstrate how you can set up sbuild on your own machines, using Propellor.
Getting started with Propellor
apt-get install propellor, and then propellor --init. You'll be offered two setups, options A and B. I suggest starting with option B. If you never use Propellor for anything other than provisioning sbuild, you can stick with option B. If this tutorial makes you want to check out more features of Propellor, you might consider switching to option A and importing your old configuration. Open ~/.propellor/config.hs. You will see something like this:
-- The hosts propellor knows about.
hosts :: [Host]
hosts =
        [ mybox
        ]
-- An example host.
mybox :: Host
mybox = host "mybox.example.com" $ props
        & osDebian Unstable X86_64
        & Apt.stdSourcesList
        & Apt.unattendedUpgrades
        & Apt.installed ["etckeeper"]
        & Apt.installed ["ssh"]
        & User.hasSomePassword (User "root")
        & File.dirExists "/var/www"
        & Cron.runPropellor (Cron.Times "30 * * * *")
You'll want to customise this so that it reflects your computer. My laptop is called iris, so I might replace the above with this:
-- The hosts propellor knows about.
hosts :: [Host]
hosts =
        [ iris
        ]
-- My laptop.
iris :: Host
iris = host "iris.silentflame.com" $ props
        & osDebian Testing X86_64
The list of lines beginning with & are the properties of the host iris. Here, I've removed all properties except the osDebian property, which informs propellor that iris runs Debian testing and has the amd64 architecture. The effect of this is that Propellor will not try to change anything about iris. In this tutorial, we are not going to let Propellor configure anything about iris other than setting up sbuild. (The osDebian property is a pure info property, which means that it tells Propellor information about the host to which other properties might refer, but it doesn't itself change anything about iris.)
Telling Propellor to configure sbuild
First, add to the import lines at the top of config.hs the lines:
import qualified Propellor.Property.Sbuild as Sbuild
import qualified Propellor.Property.Schroot as Schroot
to enable use of the Sbuild module. Here is the full config for iris, which I'll go through line-by-line:
-- The hosts propellor knows about.
hosts :: [Host]
hosts =
        [ iris
        ]
-- My laptop.
iris :: Host
iris = host "iris.silentflame.com" $ props
        & osDebian Testing X86_64
        & Apt.useLocalCacher
        & sidSchrootBuilt
        & Sbuild.usableBy (User "spwhitton")
        & Schroot.overlaysInTmpfs
        & Cron.runPropellor (Cron.Times "30 * * * *")
  where
        sidSchrootBuilt = Sbuild.built Sbuild.UseCcache $ props
                & osDebian Unstable X86_64
                & Sbuild.update `period` Daily
                & Sbuild.useHostProxy iris
Running Propellor to configure your laptop
propellor iris.silentflame.com
In this configuration, you don't need to worry about whether the hostname iris.silentflame.com actually resolves to your laptop. However, it must be possible to ssh root@localhost (one way to set this up is sketched after the sbuild invocation below). This should be enough that spwhitton can:
$ sbuild -A --run-lintian --run-autopkgtest --run-piuparts foo.dsc
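As for the ssh root@localhost requirement mentioned above, here is one hypothetical way to satisfy it; the key type and paths are assumptions, not from the original post:
$ ssh-keygen -t ed25519                        # skip if you already have a key
$ sudo install -d -m 700 /root/.ssh
$ cat ~/.ssh/id_ed25519.pub | sudo tee -a /root/.ssh/authorized_keys
$ ssh root@localhost true                      # should now succeed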
Further configuration
It is easy to add new schroots; for example, for building backports:
        ...
        & stretchSchrootBuilt
        ...
  where
        ...
        stretchSchrootBuilt = Sbuild.built Sbuild.UseCcache $ props
                & osDebian (Stable "stretch") X86_64
                & Sbuild.update `period` Daily
                & Sbuild.useHostProxy iris
You can also add additional properties to configure your chroot. Perhaps on your LAN you need sbuild to install packages via https, and you already have an apt cacher available. You can replace the apt-cacher-ng configuration like this:
  where
        sidSchrootBuilt = Sbuild.built Sbuild.UseCcache $ props
                & osDebian Unstable X86_64
                & Sbuild.update `period` Daily
                & Apt.mirror "https://foo.mirror/debian/"
                & Apt.installed ["apt-transport-https"]
Thanks
Thanks to Propellor's author, Joey Hess, for help navigating Propellor's type system while performing the overhaul included in version 5.1.0. Also for a conversation at DebConf17 which enabled this work by clearing up some misconceptions of mine.

Russ Allbery: Holiday haul

Catching up on accumulated book purchases. I'm going to get another burst of reading time over the holidays (and am really looking forward to it).
Alfred Bester The Stars My Destination (sff)
James Blish A Case of Conscience (sff)
Leigh Brackett The Long Tomorrow (sff)
Algis Budrys Who? (sff)
Frances Hardinge Fly By Night (sff)
Robert A. Heinlein Double Star (sff)
N.K. Jemisin The Obelisk Gate (sff)
N.K. Jemisin The Stone Sky (sff)
T. Kingfisher Clockwork Boys (sff)
Ursula K. Le Guin City of Illusions (sff)
Ursula K. Le Guin The Complete Orsinia (historical)
Ursula K. Le Guin The Dispossessed (sff)
Ursula K. Le Guin Five Ways to Forgiveness (sff)
Ursula K. Le Guin The Left Hand of Darkness (sff)
Ursula K. Le Guin Planet of Exile (sff)
Ursula K. Le Guin Rocannon's World (sff)
Ursula K. Le Guin The Telling (sff)
Ursula K. Le Guin The Word for World Is Forest (sff)
Fritz Leiber The Big Time (sff)
Melina Marchetta Saving Francesca (mainstream)
Richard Matheson The Shrinking Man (sff)
Foz Meadows An Accident of Stars (sff)
Dexter Palmer Version Control (sff)
Frederik Pohl & C.M. Kornbluth The Space Merchants (sff)
Adam Rex The True Meaning of Smekday (sff)
John Scalzi The Dispatcher (sff)
Julia Spencer-Fleming In the Bleak Midwinter (mystery)
R.E. Stearns Barbary Station (sff)
Theodore Sturgeon More Than Human (sff)
I'm listing the individual components except for the Orsinia collection, but the Le Guin are from the Library of America Hainish Novels & Stories two-volume set. I had several of these already, but I have a hard time resisting a high-quality Library of America collection for an author I really like. Now I can donate a bunch of old paperbacks. Similarly, a whole bunch of the older SF novels are from the Library of America American Science Fiction two-volume set, which I finally bought since I was ordering Library of America sets anyway. The rest is a pretty random collection of stuff, although several of them are recommendations from Light. I was reading through her old reviews and getting inspired to read (and review) more.

22 November 2017

Louis-Philippe Véronneau: DebConf Videoteam sprint report - day 3

Erf, I'm tired and it is late so this report will be short and won't include dank memes or funny cat pictures. Come back tomorrow for that. tumbleweed Stefano worked all day long on the metadata project and on YouTube uploads. I think the DebConf7 videos have just finished being uploaded, check them out! RattusRattus Apart from the wonderful lasagna he baked for us, Andy continued working on the scraping scheme, helping tumbleweed. nattie Nattie has been with us for a few days now, but today she did some great QA work on our metadata scraping of the video archive. ivodd More tests, more bugs! Ivo worked quite a bit on the Opsis board today and it seems everything is ready for the mini-conf. \o/ olasd Nicolas built the streaming network today and wrote some Ansible roles to manage TLS cert creation through Let's Encrypt. He also talked with DSA some more about our long term requirements. wouter I forgot to mention it yesterday because he could not come to Cambridge, but Wouter has been sprinting remotely, working on the reviewing system. Everything with regards to reviewing should be in place for the mini-conf. He also generated the intro and outro slides for the videos for us. KiBi and Julien KiBi and Julien arrived late in the evening, but were nonetheless of great assistance. Neither are technically part of the videoteam, but their respective experience with Debian-Installer and general DSA systems helped us a great deal. pollo I'm about 3/4 done documenting our ansible roles. Once I'm done, I'll try to polish some obvious hacks I've seen while documenting.

21 November 2017

Louis-Philippe Véronneau: DebConf Videoteam sprint report - day 2

Another day, another videoteam report! It feels like we did a lot of work today, so let's jump right in: tumbleweed Stefano worked most of the day on the DebConf video archive metadata project. A bunch of videos have already been uploaded to YouTube. Here's some gold you might want to watch. By the end of our sprint, we should have generated metadata for most of our archive and uploaded a bunch of videos to YouTube. Don't worry though, YouTube is only a mirror and we'll keep our current archive as a video master. RattusRattus Andy joined us today! He hacked away with Stefano for most of the day, working on the metadata format for our videos and making schemes for our scraping tools. ivodd Ivo built and tested a good part of our video setup today, fixing bugs left and right in Ansible. We are prepared for the Cambridge Mini-DebConf! olasd Nicolas finished his scripts to automatically spool up and down our streaming mirrors via the DigitalOcean API today and ran our Ansible config against those machines to test our setup. pollo For my part, I completed a huge chunk of my sprint goals: we now have a website documenting our setup! It is currently hosted on Alioth pages, but olasd plans to make a request to DSA to have it hosted on the static.debian.org machine. The final URL will most likely be something like: https://video.debconf.org The documentation is still missing the streaming section (our streaming setup is not final yet, so no point in documenting that) and a section hosting guides for the volunteers. With some luck I might write those later this week. I've now moved on to documenting our various Ansible roles. Oh, and we also ate some cheese fondue: Our fondue dinner

20 November 2017

Reproducible builds folks: Reproducible Builds: Weekly report #133

Here's what happened in the Reproducible Builds effort between Sunday November 5 and Saturday November 11 2017: Upcoming events On November 17th Chris Lamb will present at Open Compliance Summit, Yokohama, Japan on how reproducible builds ensures the long-term sustainability of technology infrastructure. We plan to hold an assembly at 34C3 - hope to see you there! LEDE CI tests Thanks to the work of lynxis, Mattia and h01ger, we're now testing all LEDE packages in our setup. This is our first result for the ar71xx target: "502 (100.0%) out of 502 built images and 4932 (94.8%) out of 5200 built packages were reproducible in our test setup." - see below for details on how this was achieved. Bootstrapping and Diverse Double Compilation As a follow-up of a discussion on bootstrapping compilers we had at the Berlin summit, Bernhard and Ximin worked on a proof of concept for Diverse Double Compilation of tinycc (aka tcc). Ximin Luo did a successful diverse-double compilation of tinycc git HEAD using gcc-7.2.0, clang-4.0.1, icc-18.0.0 and pgcc-17.10-0 (pgcc needs to triple-compile it). More variations are planned for the future, with the eventual aim to reproduce the same binaries cross-distro, and extend it to test GCC itself. Packages reviewed and fixed, and bugs filed Patches filed upstream: Patches filed in Debian: Patches filed in OpenSUSE: Reviews of unreproducible packages 73 package reviews have been added, 88 have been updated and 40 have been removed in this week, adding to our knowledge about identified issues. 4 issue types have been updated: Weekly QA work During our reproducibility testing, FTBFS bugs have been detected and reported by: diffoscope development Mattia Rizzolo uploaded version 88~bpo9+1 to stretch-backports. reprotest development reproducible-website development theunreproduciblepackage development tests.reproducible-builds.org in detail Misc. This week's edition was written by Ximin Luo, Bernhard M. Wiedemann, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.
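As an illustration of what diverse double compilation of tcc involves, here is a hedged shell sketch; the repository URL, the --cc configure flag and the exact steps are assumptions, not the recipe Bernhard and Ximin actually used:
# Stage 1: build tcc with two unrelated compilers.
git clone git://repo.or.cz/tinycc.git && cd tinycc
./configure --cc=gcc-7.2.0 && make && cp tcc tcc-via-gcc && make clean
./configure --cc=clang-4.0.1 && make && cp tcc tcc-via-clang && make clean
# Stage 2: let each stage-1 tcc rebuild tcc; if tcc builds deterministically,
# the two stage-2 binaries should be bit-for-bit identical.
./configure --cc=$PWD/tcc-via-gcc && make && cp tcc tcc-stage2-a && make clean
./configure --cc=$PWD/tcc-via-clang && make && cp tcc tcc-stage2-b
sha256sum tcc-stage2-a tcc-stage2-b   # expect identical hashes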

Louis-Philippe Véronneau: DebConf Videoteam sprint report - day 1

Another videoteam report! We've now been hacking for a full day and we are slowly starting to be productive. It's always hard to get back into a project when you haven't touched it in a while... Anyway, let's start this report with an important announcement: we finally have been able to snap a good picture of the airbnb's cat! The airbnb's cat No more nagging me about the placeholder image from Wikipedia I used in yesterday's report! Our hacking space Here's what the team did today: tumbleweed Stefano started the day by hacking away on our video archive. We eventually want to upload all our videos to YouTube to give them exposure, but sadly our archive metadata is in pretty poor shape. With the script tumbleweed wrote, we can scrape the archive for matches against the old DebConfs' pentabarf XML we have. tumbleweed also helped Ivo with the ansible PXE setup he's working on. Some recent contributions from a collaborator implemented new features (like a nice menu to choose from) but also came with a few annoying bugs. ivodd Ivo continued working on the PXE setup today. He also tried to break our ansible setup by doing fresh installs with different use cases (locales, interfaces, etc.), with some success. The reason he and Stefano are working so hard on the PXE boot is that we had a discussion about the future of our USB install method. The general consensus was that although we would not remove it, we would not actively maintain it anymore. PXE is less trouble for multiple machines. For single machines, or if you don't control the DHCP server, using ansible manually on a fresh Debian install will be the recommended way. olasd After a very long drive, olasd arrived late in the evening with all our gear. Hurray! We were thus able to set up some test boxes and start wiring the airbnb properly. Tomorrow will certainly be more productive with all this stuff at our disposal. pollo Today I mainly worked on setting up our documentation website. After some debate, we decided that sphinx was the right tool for the job. I am a few pages in and if I work well I think we'll have something to show at the end of the sprint! I was also thrown back into ansible after witnessing a bug in the locale management. I'm still rusty, but it's slowly coming back to me. Let's end this blog post with a picture of the neon pineapple that sits on the wall of the solarium. Upside down this picture is even more troubling

18 November 2017

Matthieu Caneill: MiniDebconf in Toulouse

I attended the MiniDebconf in Toulouse, which was hosted in the larger Capitole du Libre, a free software event with talks, presentations of associations, and a keysigning party. I didn't expect the event to be that big, and I was very impressed by its organization. Cheers to all the volunteers, it has been an amazing week-end! Here's a sum-up of the talks I attended.
Du logiciel libre à la monnaie libre Speaker: Éloïs The first talk I attended was, translated to English, "from free software to free money". Éloïs compared the 4 freedoms of free software with money, and what properties money needs to exhibit in order to be considered free. He then introduced Ğ1, a project of free (as in free speech!) money, started in the region around Toulouse. Contrary to some distributed ledgers such as Bitcoin, Ğ1 isn't based on a hash-based proof-of-work, but rather around a web of trust of people certifying each other, hence limiting the energy consumption required by the network to function.
YunoHost Speaker: Jimmy Monin I then attended a presentation of YunoHost. Being a happy user myself, it was very nice to discover the future expected features, and also meet two of the developers. YunoHost is a Debian-based project, aimed at providing all the tools necessary to self-host applications, including email, website, calendar, development tools, and dozens of other packages.
Premiers pas dans l'univers de Debian Speaker: Nicolas Dandrimont For the first talk of the MiniDebConf, Nicolas Dandrimont introduced Debian, its philosophy, and how it works with regards to upstreams and downstreams. He gave many details on the teams, the infrastructure, and the internals of Debian.
Trusting your computer and system Speaker: Jonas Smedegaard Jonas introduced some security concepts, and how they are abused and often meaningless (to quote his own words, "secure is bullshit"). He described a few projects which lean towards more secure and open hardware, for both phones and laptops.
Automatiser la gestion de configuration de Debian avec Ansible Speaker: Jérémy Lecour Jérémy, from Evolix, introduced Ansible, and how they use it to manage hundreds of Debian servers. Ansible is a very powerful tool, and a huge ecosystem, in many ways similar to Puppet or Chef, except it is agent-less, using only ssh connections to communicate with remote machines. Very nice to compare their use of Ansible with mine, since that's the software I use at work for deploying experiments.
Making Debian for everybody Speaker: Samuel Thibault Samuel gave a talk about accessibility, and the general availability of the tools in today's operating systems, including Debian. The lesson to take home is that we often don't do enough in this domain, particularly when considering some issues people might have that we don't always think about. Accessibility on computers (and elsewhere) should be the default, and never require complex setups.
Retour d'expérience : mise à jour de milliers de terminaux Debian Speaker: Cyril Brulebois Cyril described a problem he was hired for, an update of thousands of Debian servers from wheezy to jessie, which he discovered afterwards was worse than initially thought, since the machines were running the out-of-date squeeze. Since they were not always administered with the best sysadmin practices, they were all exhibiting different configurations and different package lists, which raised many issues and gave him interesting challenges.
They were solved using Ansible, which also had the effect of standardizing their system administration practices.
Retour d'expérience : utilisation de Debian chez Evolix Speaker: Grégory Colpart Grégory described Evolix, a company which manages servers for their clients, and how they were inspired by Debian, for both their internal tools and their practices. It is very interesting to see that some of the Debian values can be easily exported to a more open and collaborative business.
Lightning talks To close the conference, two lightning talks were presented, describing the switch from Windows XP to Debian in an ecological association near Toulouse; and how snapshot.debian.org can be used with bisections to find the source of some regressions.
Conclusion A big thank you to all the organizers and the associations who contributed to make this event a success. Cheers!

Petter Reinholdtsen: Legal to share more than 3000 movies listed on IMDB?

A month ago, I blogged about my work to automatically check the copyright status of IMDB entries, and to try to count the number of movies listed in IMDB that are legal to distribute on the Internet. I have continued to look for good data sources, and identified a few more. The code used to extract information from various data sources is available in a git repository, currently available from github. So far I have identified 3186 unique IMDB title IDs. To gain a better understanding of the structure of the data set, I created a histogram of the year associated with each movie (typically the release year). It is interesting to notice where the peaks and dips in the graph are located. I wonder why they are placed there. I suspect World War II caused the dip around 1940, but what caused the peak around 2010?
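Reproducing such a histogram from the collected data is straightforward; here is a minimal Python sketch, where the per-entry 'year' field and the file naming are assumptions, since the exact JSON schema isn't shown here:
#!/usr/bin/python3
# Hypothetical sketch: count movies per release year across the JSON sources.
import glob
import json
from collections import Counter

years = Counter()
for path in glob.glob('free-movies-*.json'):
    with open(path) as f:
        for entry in json.load(f):
            year = entry.get('year')   # assumed field name
            if year:
                years[int(year)] += 1

for year in sorted(years):
    print(year, '#' * years[year])   # crude text histogram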

I've so far identified ten sources for IMDB title IDs for movies in the public domain or with a free license. These are the statistics reported when running 'make stats' in the git repository:

  249 entries (    6 unique) with and   288 without IMDB title ID in free-movies-archive-org-butter.json
 2301 entries (  540 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  830 entries (   29 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
 2109 entries (  377 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
  291 entries (  122 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  144 entries (  135 unique) with and     0 without IMDB title ID in free-movies-manual.json
  350 entries (    1 unique) with and   801 without IMDB title ID in free-movies-publicdomainmovies.json
    4 entries (    0 unique) with and   124 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (  119 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
    8 entries (    8 unique) with and   196 without IMDB title ID in free-movies-vodo.json
 3186 unique IMDB title IDs in total
The entries without IMDB title ID are candidates to increase the data set, but might equally well be duplicates of entries already listed with an IMDB title ID in one of the other sources, or represent movies that lack an IMDB title ID. I've seen examples of all these situations when peeking at the entries without IMDB title ID. Based on these data sources, the lower bound for movies listed in IMDB that are legal to distribute on the Internet is between 3186 and 4713. It would greatly improve the accuracy of this measurement if the various sources added IMDB title IDs to their metadata. I have tried to reach the people behind the various sources to ask if they are interested in doing this, without any replies so far. Perhaps you can help me get in touch with the people behind VODO, Public Domain Torrents, Public Domain Movies and Public Domain Review to try to convince them to add more metadata to their movie entries? Another way you could help is by adding pages to Wikipedia about movies that are legal to distribute on the Internet. If such a page exists and includes a link to both IMDB and The Internet Archive, the script used to generate free-movies-archive-org-wikidata.json should pick up the mapping as soon as Wikidata is updated. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

17 November 2017

Jonathan Carter: I am now a Debian Developer

It finally happened
On the 6th of April 2017, I finally took the plunge and applied for Debian Developer status. On 1 August, during DebConf in Montréal, my application was approved. If you're paying attention to the dates you might notice that that was nearly 4 months ago already. I was trying to write a story about how it came to be, but it ended up long. Really long (the current draft is around 20 times longer than this entire post). So I decided I'd rather do a proper bio page one day and just do a super short version for now so that someone might end up actually reading it.
How it started
In 1999... no wait, I can't start there, as much as I want to; this is a short post, so... In 2003, I started doing some contract work for the Shuttleworth Foundation. I was interested in collaborating with them on tuXlabs, a project to get Linux computers into schools. For the few months before that, I was mostly using SuSE Linux. The open source team at the Shuttleworth Foundation all used Debian though, which seemed like a bizarre choice to me since everything in Debian was really old and its boot-floppies installer program kept crashing on my very vanilla computers.

SLUG (Schools Linux Users Group) group photo. SLUG was founded to support the tuXlab schools that ran Linux.

My contract work then later turned into a full-time job there. This was a big deal for me, because I didn't want to support Windows ever again, and I didn't ever think that it would even be possible for me to get a job where I could work on free software full time. Since everyone in my team used Debian, I thought that I should probably give it another try. I did, and I hated it. One morning I went to talk to my manager, Thomas Black, and told him that I just didn't get it and needed some help. Thomas was a big mentor to me during this phase. He told me that I should try upgrading to testing, which I did, and somehow I ended up on unstable, and I loved it. Before that I used to subscribe to a website called freshmeat that listed new releases of upstream software, and then I would download and compile it myself so that I always had the newest versions of everything. Debian unstable made that whole process obsolete, and I became a huge fan of it. Early on I also hit a problem where two packages tried to install the same file, and I was delighted to find how easily I could find the package state and maintainer scripts and fix them to get my system going again. Thomas told me that anyone could become a Debian Developer and maintain packages in Debian, and that I should check it out, and joked that maybe I could eventually snap up "highvoltage@debian.org". I just laughed because back then you might as well have told me that I could run for president of the United States; it really felt like something rather far-fetched and unobtainable at that point, but the seed was planted :)
Ubuntu and beyond

Ubuntu 4.10 default desktop Image from distrowatch

One day, Thomas told me that Mark was planning to provide official support for Debian unstable. The details were sparse, but this was still exciting news. A few months later Thomas gave me a CD with just "warty" written on it and said that I should install it on a server so that we could try it out. It was great, it used the new debian-installer and installed fine everywhere I tried it, and the software was nice and fresh. Later Thomas told me that this system was going to be called Ubuntu and that the desktop edition had naked people on it. I wasn't sure what he meant and was kind of dumbfounded, so I just laughed and said something like "Uh, ok". At least it made a lot more sense when I finally saw the desktop pre-release version and when it got the byline "Linux for Human Beings". Fun fact: one of my first jobs at the foundation was to register the ubuntu.com domain name. Unfortunately I found it was already owned by a domain squatter and it was eventually handled by legal. Closer to Ubuntu's first release, Mark brought a whole bunch of Debian developers who were working on Ubuntu over to the foundation, and they were around for a few days getting some sun. Thomas kept saying "Go talk to them! Go talk to them!", but I felt so intimidated by them that I couldn't even bring myself to walk up and say hello. In the interest of keeping this short, I'm leaving out a lot of history, but later on I read through the Debian packaging policy and really started getting into packaging, and also discovered Daniel Holbach's packaging tutorials on YouTube. These helped me tremendously. Some day (hopefully soon), I'd like to do a similar video series that might help a new generation of packagers. I've also been following DebConf online since DebConf 7, which was incredibly educational for me. Little did I know that just 5 years later I would even attend one, and another 5 years after that I'd end up being on the DebConf Committee and have also already been on a local team for one.

DebConf16 Organisers, Photo by Jurie Senekal.

It's been a long journey for me and I would like to help anyone who is also interested in becoming a Debian maintainer or developer. If you ever need help with your package, upload it to https://mentors.debian.net and if I have some spare time I'll certainly help you out and sponsor an upload. Thanks to everyone who has helped me along the way, I really appreciate it!

Michal Čihař: Running Bitcoin node and ElectrumX server

I've been tempted to run my own ElectrumX server for quite some time. My first attempt was to run it on a Turris Omnia router, however that turned out to be impossible due to the memory requirements both Bitcoind and ElectrumX have. This time I've dedicated a host to it and it runs fine:
Electrum connecting to btc.cihar.com
The server runs Debian sid (probably it would be doable on stretch as well, but I didn't try much) and the setup was pretty simple. First we need to install some things - the Bitcoin daemon and ElectrumX dependencies:
# Bitcoin daemon, not available in stretch
apt install bitcoind
# We will checkout ElectrumX from git
apt install git
# ElectrumX deps
apt install python3-aiohttp
# Build environment for ElectrumX deps
apt install build-essential python3-pip libleveldb-dev
# ElectrumX deps not packaged in Debian
pip3 install plyvel pylru
# Download ElectrumX sources
su - electrumx -c 'git clone https://github.com/kyuupichan/electrumx.git'
Create users which will run the services:
adduser bitcoind
adduser electrumx
Now it's time to prepare the configuration for the services. For Bitcoin it's quite simple - we need to configure the RPC interface and enable the transaction index in /home/bitcoind/.bitcoin/bitcoin.conf:
txindex=1
listen=1
rpcuser=bitcoin
rpcpassword=somerandompassword
The ElectrumX configuration is quite simple as well and it's pretty well documented. I've decided to place it in /etc/electrumx.conf:
COIN=BitcoinSegwit
DB_DIRECTORY=/home/electrumx/.electrumx
DAEMON_URL=http://bitcoin:somerandompassword@localhost:8332/
TCP_PORT=50001
SSL_PORT=50002
HOST=::
DONATION_ADDRESS=3KPccmPtejpMczeog7dcFdqX4oTebYZ3tF
SSL_CERTFILE=/etc/letsencrypt/live/btc.cihar.com/fullchain.pem
SSL_KEYFILE=/etc/letsencrypt/live/btc.cihar.com/privkey.pem
REPORT_HOST=btc.cihar.com
BANNER_FILE=banner
I've decided to control both services using systemd, so it's a matter of creating pretty simple units for that. Actually the Bitcoin one closely matches the one I've used on Turris Omnia, and the ElectrumX one is close to the one they ship, but there are some minor changes. Systemd unit for ElectrumX in /etc/systemd/system/electrumx.service:
[Unit]
Description=Electrumx
After=bitcoind.service
[Service]
EnvironmentFile=/etc/electrumx.conf
ExecStart=/home/electrumx/electrumx/electrumx_server.py
User=electrumx
LimitNOFILE=8192
TimeoutStopSec=30min
[Install]
WantedBy=multi-user.target
And finally the systemd unit for the Bitcoin daemon in /etc/systemd/system/bitcoind.service:
[Unit]
Description=Bitcoind
After=network.target
[Service]
ExecStart=/usr/bin/bitcoind
User=bitcoind
TimeoutStopSec=30min
Restart=on-failure
RestartSec=30
[Install]
WantedBy=multi-user.target
Now everything should be configured and it's time to start up the services:
# Enable services so that they start on boot 
systemctl enable electrumx.service bitcoind.service
# Start services
systemctl start electrumx.service bitcoind.service
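Once both services are running, a couple of hedged sanity checks; the RPC call is standard bitcoind, and running it as the bitcoind user lets bitcoin-cli pick up the credentials from ~/.bitcoin/bitcoin.conf:
# Check that the units came up cleanly
systemctl status bitcoind.service electrumx.service
# Ask the daemon how far the initial block download has progressed
su - bitcoind -c 'bitcoin-cli getblockchaininfo'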
Now you have a few days until Bitcoin fetches the whole blockchain and ElectrumX indexes it. If you happen to have another Bitcoin node running (or had one running in the past), you can speed up the process by copying blocks from that system (located in ~/.bitcoin/blocks/). Only get blocks from sources you trust absolutely, as they might change your view of history; see the Bitcoin wiki for more information on the topic. There is also a magnet link in the ElectrumX docs to download the ElectrumX database to speed up this process. This should be safe to download from an untrusted source. The last thing I'd like to mention is resource usage. You should have at least 4 GB of memory to run this, 8 GB is really preferred (both services consume around 4 GB). On disk space, Bitcoin currently consumes 170 GB and ElectrumX 25 GB. Ideally all this should be running on an SSD. You can however offload some of the files to slower storage, as old blocks are rarely accessed, and this can save some space on your storage. The following script will move around 50 GB of blockchain data to /mnt/btc/blocks (use only when the Bitcoin daemon is not running):
#!/bin/sh
set -e
DEST=/mnt/btc/blocks
cd ~/.bitcoin/blocks/
find . -type f \( -name 'blk00[0123]*.dat' -o -name 'rev00[0123]*dat' \) | sed 's@^\./@@' | while read name ; do
        mv $name $DEST/$name
        ln -s $DEST/$name $name
done
Anyway if you would like to use this server, configure btc.cihar.com in your Electrum client. If you find this howto useful, you can send some Satoshis to 3KPccmPtejpMczeog7dcFdqX4oTebYZ3tF.


Renata Scheibler: Hello, world!

Renata's picture, a white woman profile. She touches her chin with her left hand fingertips
About me
Hello, world! For those who are meeting me for the first time, I am a 31 year old History teacher from Porto Alegre, Brazil. Some people might know me from the Python community, because I have been leading PyLadies Porto Alegre and helping organize Django Girls workshops in my state since 2016. If you don't, that's okay. Either way, it's nice to have you here. Ever since I learned about Rails Girls Summer of Code, during the International Free Software Forum - FISL 16, I have been wanting to get into a tech internship program. Google Summer of Code made it onto my radar as well, but I didn't really feel like I knew enough to try and get into those programs... until I found Outreachy. From their site:
Outreachy is an organization that provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.
There were many good projects to choose from on this round, a lot of them with few requirements - and most with requirements that I believed I could fulfill, such as some knowledge of HTML, CSS, Python or Django. I ought to say that I am not an expert in any of those. And, since you're reading this, I'm going to be completely honest. Coding is hard. Coding is hard to learn, it takes a lot of studying and a lot of practice. Even though I have been messing around with computers pretty much since I was a kid, because I was a girl lucky enough to have a father who owned a computer store, I hadn't begun learning how to program until mid-2015 - and I am still learning. I think I became such an autodidact because I had to (and, of course, because I was given the conditions to be, such as having spare time to study when I wasn't at school). I had to get any and all information from my surroundings and turn it into knowledge that I could use to achieve my goals. In a time when I could only get new computer games through a CD-ROM and the computer I was allowed to use didn't have a CD-ROM drive, I had to try and learn how to open a computer cabinet and connect/disconnect hardware properly, so I could use my brother's CD-ROM drive on the computer I was allowed to use and install the games without anyone noticing. When, back in 1998, I couldn't connect to the internet because the computer I was allowed to use didn't have a modem, I had to learn about networks to figure out how to do it from my brother's computer on the LAN (local network). I would go to the community public library and read books and any tech magazines I could get my hands on (libraries didn't usually have computers to be used by the public back then). It was about 2002 when I learned how to create HTML sites by studying the source code from pages I had saved to read offline, in one of the very, very few times I was allowed to access the internet and browse the web. Of course, the site I created back then never saw the light of day, because I didn't really have internet access at home. So, how come it is that only now, 14 years later, I am trying to get into tech? Because when I finished high school in 2003, I was still a minor and my family didn't allow me to go to Vocational School and take an IT course. (Never mind that my own oldest brother had graduated in IT and had been working in the field for almost a decade.) I ended up going to study... teacher training in History as an undergrad course. A lot has happened since then. I took the exam to become a public school teacher and more than two years passed without my being called to work. I spent 3 years in odd jobs that paid barely enough to pay rent (and, sometimes, not even that). Since IT is the new thing and all the jobs are in IT, finally, finally it seemed okay for me to take that Vocational School training in a public school - and so I did. I gotta say, I thought that while I studied, I would be able to get some sort of job or internship to help with my learning. After all, I had seen it easily happen with people I met before getting into the course. And by "people", of course, I mean white men. For me, it took a whole year of searching, trying and interviewing to get an internship related to the field - tech support in a school computer lab, running GNU/Linux. And, in that very same week, I was hired as a public school teacher. There is a lot more... actually, there is so much more to this story, but I think I have told enough for now.
Enough to know where I came from and who I am, as of now. I hope you stick around. I am bound to write here every two weeks, so I guess I will see you then! o/

16 November 2017

Michal Čihař: New projects on Hosted Weblate

Hosted Weblate also provides free hosting for free software projects. The hosting requests queue has grown too long, so it's time to process it and include new projects. This time, the newly hosted projects include: If you want to support this effort, please donate to Weblate; especially recurring donations are welcome to keep this service alive. You can do that easily on Liberapay or Bountysource.


Colin Watson: Kitten Block equivalent for Firefox 57

I've been using Kitten Block for years, since I don't really need the blood pressure spike caused by accidentally following links to certain UK newspapers. Unfortunately it hasn't been ported to Firefox 57. I tried emailing the author a couple of months ago, but my email bounced. However, if your primary goal is just to block the websites in question rather than seeing kitten pictures as such (let's face it, the internet is not short of alternative sources of kitten pictures), then it's easy to do with uBlock Origin. After installing the extension if necessary, go to Tools → Add-ons → Extensions → uBlock Origin → Preferences → My filters, and add www.dailymail.co.uk and www.express.co.uk, each on its own line. (Of course you can easily add more if you like.) Voilà: instant tranquility. Incidentally, this also works fine on Android. The fact that it was easy to install a good ad blocker without having to mess about with a rooted device or strange proxy settings was the main reason I switched to Firefox on my phone.
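Concretely, the My filters pane then contains just these two hostnames from the text above, one per line (extend as you see fit):
www.dailymail.co.uk
www.express.co.uk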

15 November 2017

Steinar H. Gunderson: Introducing Narabu, part 6: Performance

Narabu is a new intraframe video codec. You probably want to read part 1, part 2, part 3, part 4 and part 5 first. Like I wrote in part 5, there basically isn't a big splashy ending where everything is resolved here; you're basically getting some graphs with some open questions and some interesting observations. First of all, though, I'll need to make a correction: In the last part, I wrote that encoding takes 1.2 ms for 720p luma-only on my GTX 950, which isn't correct; I remembered the wrong number. The right number is 2.3 ms, which I guess explains even more why I don't think it's acceptable at the current stage. (I'm also pretty sure it's possible to rearchitect the encoder so that it's much better, but I am moving on to other video-related things for the time being.) I encoded a picture straight off my DSLR (luma-only) at various resolutions, keeping the aspect. Then I decoded it a bunch of times on my GTX 950 (low-end last-generation NVIDIA) and on my HD 4400 (ultraportable Haswell laptop) and measured the times. They're normalized for megapixels per second decoded; remember that doubling width (x axis) means quadrupling the pixels. Here it is:
Narabu decoding performance graph
I'm not going to comment much beyond two observations: Encoding only contains the GTX 950 because I didn't finish the work to get that single int64 divide off:
Narabu encoding performance graph
This is interesting. I have few explanations. Probably more benchmarking and profiling would be needed to make sense of any of it. In fact, it's so strange that I would suspect a bug, but it does indeed seem to create a valid bitstream that is decoded by the decoder. Do note, however, that seemingly even on the smallest resolutions, there's a 1.7 ms base cost (you can't see it on the picture, but you'd see it in an unnormalized graph). I don't have a very good explanation for this either (even though there are some costs that are dependent on the alphabet size instead of the number of pixels), but figuring it out would probably be a great start for getting the performance up. So that concludes the series, on a cliffhanger. :-) Even though it's not in a situation where you can just take it and put it into something useful, I hope it was an interesting introduction to the GPU! And in the meantime, I've released version 1.6.3 of Nageru, my live video mixer (also heavily GPU-based) with various small adjustments and bug fixes found before and during Trøndisk. And Movit is getting compute shaders for that extra speed boost, although parts of it are bending my head. Exciting times in GPU land :-)

Kees Cook: security things in Linux v4.14

Previously: v4.13. Linux kernel v4.14 was released this last Sunday, and there's a bunch of security things I think are interesting:
vmapped kernel stack on arm64
Similar to the same feature on x86, Mark Rutland and Ard Biesheuvel implemented CONFIG_VMAP_STACK for arm64, which moves the kernel stack to an isolated and guard-paged vmap area. With traditional stacks, there were two major risks when exhausting the stack: overwriting the thread_info structure (which contained the addr_limit field which is checked during copy_to/from_user()), and overwriting neighboring stacks (or other things allocated next to the stack). While arm64 previously moved its thread_info off the stack to deal with the former issue, this vmap change adds the last bit of protection by nature of the vmap guard pages. If the kernel tries to write past the end of the stack, it will hit the guard page and fault. (Testing for this is now possible via LKDTM's STACK_GUARD_PAGE_LEADING/TRAILING tests.) One aspect of the guard page protection that will need further attention (on all architectures) is that if the stack grew because of a giant Variable Length Array on the stack (effectively an implicit alloca() call), it might be possible to jump over the guard page entirely (as seen in the userspace Stack Clash attacks). Thankfully the use of VLAs is rare in the kernel. In the future, hopefully we'll see the addition of PaX/grsecurity's STACKLEAK plugin which, in addition to its primary purpose of clearing the kernel stack on return to userspace, makes sure stack expansion cannot skip over guard pages. This stack probing ability will likely also become directly available from the compiler as well.
set_fs() balance checking
Related to the addr_limit field mentioned above, another class of bug is finding a way to force the kernel into accidentally leaving addr_limit open to kernel memory through an unbalanced call to set_fs(). In some areas of the kernel, in order to reuse userspace routines (usually VFS or compat related), code will do something like: set_fs(KERNEL_DS); ...some code here...; set_fs(USER_DS);. When the USER_DS call goes missing (usually due to a buggy error path or exception), subsequent system calls can suddenly start writing into kernel memory via copy_to_user() (where the "to user" really means "within the addr_limit range"). Thomas Garnier implemented USER_DS checking at syscall exit time for x86, arm, and arm64. This means that a broken set_fs() setting will not extend beyond the buggy syscall that fails to set it back to USER_DS. Additionally, as part of the discussion on the best way to deal with this feature, Christoph Hellwig and Al Viro (and others) have been making extensive changes to avoid the need for set_fs() being used at all, which should greatly reduce the number of places where it might be possible to introduce such a bug in the future.
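To make the bug class concrete, here is a hedged sketch of the unbalanced-set_fs() pattern described above; the surrounding function and the vfs_read() call are illustrative, not taken from any specific kernel bug:
/* Illustrative only: the classic unbalanced set_fs() bug pattern. */
static int read_into_kernel_buf(struct file *file, char *kbuf, size_t len)
{
	mm_segment_t old_fs = get_fs();
	loff_t pos = 0;
	ssize_t ret;

	set_fs(KERNEL_DS);	/* widen addr_limit so vfs_read() accepts kbuf */
	ret = vfs_read(file, (char __user *)kbuf, len, &pos);
	if (ret < 0)
		return ret;	/* BUG: error path leaves addr_limit at KERNEL_DS */
	set_fs(old_fs);		/* the restore that the error path above skips */
	return 0;
}
With the syscall-exit check in place, the buggy early return above is caught before the next syscall can abuse the widened addr_limit.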
SLUB freelist hardening
A common class of heap attacks is overwriting the freelist pointers stored inline in the unallocated SLUB cache objects. PaX/grsecurity developed an inexpensive defense that XORs the freelist pointer with a global random value (and the storage address). Daniel Micay improved on this by using a per-cache random value, and I refactored the code a bit more. The resulting feature, enabled with CONFIG_SLAB_FREELIST_HARDENED, makes freelist pointer overwrites very hard to exploit unless an attacker has found a way to expose both the random value and the pointer location. This should render blind heap overflow bugs much more difficult to exploit. Additionally, Alexander Popov implemented a simple double-free defense, similar to the fasttop check in the GNU C library, which will catch sequential free()s of the same pointer. (And has already uncovered a bug.) Future work would be to provide similar metadata protections to the SLAB allocator (though SLAB doesn't store its freelist within the individual unused objects, so it has a different set of exposures compared to SLUB).
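The XOR transform itself is tiny; here is a simplified sketch of the idea as described above (the same operation both encodes and decodes a stored pointer; the helper name and the s->random field reflect my reading of the feature, not a quoted excerpt):
/* Sketch: obfuscate a SLUB freelist pointer with the per-cache random
 * value and the location it is stored at. Corrupting the stored value
 * without knowing both secrets decodes to a garbage pointer. */
static inline void *freelist_ptr(const struct kmem_cache *s, void *ptr,
				 unsigned long ptr_addr)
{
	return (void *)((unsigned long)ptr ^ s->random ^ ptr_addr);
}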
setuid-exec stack limitation
Continuing the various additional defenses to protect against future problems related to userspace memory layout manipulation (as shown most recently in the Stack Clash attacks), I implemented an 8MiB stack limit for privileged (i.e. setuid) execs, inspired by a similar protection in grsecurity, after reworking the secureexec handling by LSMs. This complements the unconditional limit to the size of exec arguments that landed in v4.13.
randstruct automatic struct selection
While the bulk of the port of the randstruct gcc plugin from grsecurity landed in v4.13, the last of the work needed to enable automatic struct selection landed in v4.14. This means that the coverage of randomized structures, via CONFIG_GCC_PLUGIN_RANDSTRUCT, now includes one of the major targets of exploits: function pointer structures. Without knowing the build-randomized location of a callback pointer an attacker needs to overwrite in a structure, exploits become much less reliable.
structleak passed-by-reference variable initialization
Ard Biesheuvel enhanced the structleak gcc plugin to initialize all variables on the stack that are passed by reference when built with CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL. Normally the compiler will yell if a variable is used before being initialized, but it silences this warning if the variable's address is passed into a function call first, as it has no way to tell if the function did actually initialize the contents. So the plugin now zero-initializes such variables (if they hadn't already been initialized) before the function call that takes their address. Enabling this feature has a small performance impact, but solves many stack content exposure flaws. (In fact at least one such flaw reported during the v4.15 development cycle was mitigated by this plugin.)
improved boot entropy
Laura Abbott and Daniel Micay improved early boot entropy available to the stack protector by both moving the stack protector setup later in the boot, and including the kernel command line in boot entropy collection (since with some devices it changes on each boot).
eBPF JIT for 32-bit ARM
The ARM BPF JIT had been around a while, but it didn't support eBPF (and, as a result, did not provide constant value blinding, which meant it was exposed to being used by an attacker to build arbitrary machine code with BPF constant values). Shubham Bansal spent a bunch of time building a full eBPF JIT for 32-bit ARM which both speeds up eBPF and brings it up to date on JIT exploit defenses in the kernel.
seccomp improvements
Tyler Hicks addressed a long-standing deficiency in how seccomp could log action results. In addition to creating a way to mark a specific seccomp filter as needing to be logged with SECCOMP_FILTER_FLAG_LOG, he added a new action result, SECCOMP_RET_LOG. With these changes in place, it should be much easier for developers to inspect the results of seccomp filters, and for process launchers to generate logs for their child processes operating under a seccomp filter. Additionally, I finally found a way to implement an often-requested feature for seccomp, which was to kill an entire process instead of just the offending thread. This was done by creating the SECCOMP_RET_ACTION_FULL mask (née SECCOMP_RET_ACTION) and implementing SECCOMP_RET_KILL_PROCESS. That's it for now; please let me know if I missed anything. The v4.15 merge window is now open!
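For reference, the hardening options named in this post, collected into one hypothetical kernel config fragment (availability depends on architecture and compiler plugin support):
CONFIG_VMAP_STACK=y
CONFIG_SLAB_FREELIST_HARDENED=y
CONFIG_GCC_PLUGIN_RANDSTRUCT=y
CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL=y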

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Russ Allbery: Review: The Piper's Son

Review: The Piper's Son, by Melina Marchetta
Series: Francesca #2
Publisher: Candlewick Press
Copyright: 2010
Printing: 2011
ISBN: 0-7636-5458-2
Format: Kindle
Pages: 330
Tom Mackee's family has fallen apart. The impetus was the death of his uncle Joe in the London tube terrorist bombings, but that was only the start. He destroyed his chances with the only woman he really loved. His father's drinking got out of control, his mother left with his younger sister to live in a different city, and he refused to go with them and abandon his father. But then, six months later, his father abandoned him anyway. As this novel opens, Tom collapses while performing a music set, high on drugs and no sleep, and wakes up to discover his roommates have been fired from their jobs for stealing, and in turn have thrown him out of their apartment. He's at rock bottom. The one place he can turn for a place to stay is his aunt Georgie, the second (although less frequent) viewpoint character of this book. She was the one who took the trip to the UK to try to find out what happened and retrieve her brother's body, and the one who had to return to Australia with nothing. Her life isn't in much better shape than Tom's. She's kept her job, but she's pregnant by her ex-boyfriend but barely talking to him, since he now has a son by another woman he met during their separation. And she's not even remotely over her grief. The whole Finch/Mackee family is, in short, a disaster. But they have a few family relationships left that haven't broken, some underlying basic decency, and some patient and determined friends. I should warn up-front, despite having read this book without knowing this, that this is a sequel to Saving Francesca, set five years later and focusing on secondary characters from the original novel. I've subsequently read that book as well, though, and I don't think reading it first is necessary. This is one of the rare books where being a sequel made it a better stand-alone novel. I never felt a gap of missing story, just a rich and deep background of friendships and previous relationships that felt realistic. People are embedded in networks of relationships even when they feel the most alone, and I really enjoyed seeing that surface in this book. All those patterns from Tom's past didn't feel like information I was missing. They felt like glimpses of what you'd see if you looked into any other person's life. The plot summary above might make The Piper's Son sound like a depressing drama fest, but Marchetta made an excellent writing decision: the worst of this has already happened before the start of the book, and the rest is in the first chapter. This is not at all a book about horrible things happening to people. It's a book about healing. An authentic, prickly, angry healing that doesn't forget and doesn't turn into simple happily-ever-after stories, but does involve a lot of recognition that one has been an ass, and that it's possible to be less of an ass in the future, and maybe some things can be fixed. A plot summary might fool you into thinking that this is a book about a boy and his father, or about dealing with a drunk you still love. It's not. The bright current under this whole story is not father-son bonding. It's female friendships. Marchetta pulls off a beautiful double-story, writing a book that's about Tom, and Georgie, and the layered guilt and tragedy of the Finch/Mackee family, but whose emotional heart is their friends. Francesca, Justine, absent Siobhan. Georgie's friend Lucia. Ned, the cook, and his interactions with Tom's friends. And Tara Finke, also mostly absent, but perfectly written into the story in letters and phone calls. 
Marchetta never calls unnecessary attention to this, keeping the camera on Tom and Georgie, but the process of reading this book is a dawning realization of just how much work friendship is doing under the surface, how much back-channel conversation is happening off the page, and how much careful and thoughtful and determined work went into providing Tom a floor, a place to get his feet under him, and enough of a shove for him to pull himself together. Pulling that off requires a deft and subtle authorial touch, and I'm in awe at how well it worked. This is a beautifully written novel. Marchetta never belabors an emotional point, sticking with a clear and spare description of actions and thoughts, with just the right sentences scattered here and there to expose the character's emotions. Tom's family is awful at communication, which is much of the reason why they start the book in the situation they're in, but Marchetta somehow manages to write that in a way that didn't just frustrate me or make me want to start banging their heads together. She somehow conveys the extent to which they're trying, even when they're failing, and adds just the right descriptions so that the reader can follow the wordless messages they send each other even when they can't manage to talk directly. I usually find it very hard to connect with people who can only communicate by doing things rather than saying them. It's a high compliment to the author that I felt I understood Tom and his family as well as I did. One bit of warning: while this is not a story of a grand reunion with an alcoholic father where all is forgiven because family, thank heavens, there is an occasional wiggle in that direction. There is also a steady background assumption that one should always try to repair family relationships, and a few tangential notes about the Finches and Mackees that made me think there was a bit more abuse here than anyone involved wants to admit. I don't think the book is trying to make apologies for this, and instead is trying to walk the fine line of talking about realistically messed up families, but I also don't have a strong personal reaction to that type of story. If you have an aversion to "we should all get along because faaaaamily" stories, you may want to skip this book, or at least go in pre-warned. That aside, the biggest challenge I had in reading this book was not breaking into tears. The emotional arc is just about perfect. Tom and Georgie never stay stuck in the same emotional cycle for too long, Marchetta does a wonderful job showing irritating characters from a slightly different angle and having them become much less irritating, and the interactions between Tom, Tara, and Francesca are just perfect. I don't remember reading another book that so beautifully captures that sensation of knowing that you've been a total ass, knowing that you need to stop, but realizing just how much work you're going to have to do, and how hard that work will be, once you own up to how much you fucked up. That point where you keep being an ass for a few moments longer, because stopping is going to hurt so much, but end up stopping anyway because you can't stand yourself any more. And stopping and making amends is hard and hurts badly, and yet somehow isn't quite as bad as you thought it was going to be. This is really great stuff. One final complaint, though: what is it with mainstream fiction and the total lack of denouement? 
I don't read very much mainstream fiction, but this is the second really good mainstream book I've read (after The Death of Bees) that hits its climax and then unceremoniously dumps the reader on the ground and disappears. Come back here! I wasn't done with these people! I don't need a long happily-ever-after story, but give me at least a handful of pages to be happy with the characters after crying with them for hours! ARGH. But, that aside, the reader does get that climax, and it's note-perfect to the rest of the book. Everyone is still themselves, no one gets suddenly transformed, and yet everything is... better. It's the kind of book you can trust. Highly, highly recommended. Rating: 9 out of 10

13 November 2017

Steve Kemp: Paternity-leave is half-over

I'm taking the month of November off work, so that I can exclusively take care of our child. Despite it being a difficult time, with him teething, it has been a great half-month so far. During the course of the month I've found my interest in a lot of technological things waning, so I've killed my account(s) on a few platforms, and scaled back others. If I could exclusively do child-care for the next 20 years I'd be very happy, but sadly I don't think that is terribly realistic. My interest in things hasn't entirely vanished though, to the extent that I found the time to replace my use of etcd with consul yesterday, and I'm trying to work out how to simplify my hosting setup. Right now I have a bunch of servers doing two kinds of web-hosting. Hosting static sites is trivial, whether with a virtual machine, via Amazon's S3 service, or some other static host such as netlify. Hosting for "dynamic stuff" is harder. These days a trend for "serverless" deployments allows you to react to events and be dynamic, but not everything can be a short-lived piece of ruby/javascript/lambda. It feels like I could set up a generic platform for launching containers, or otherwise modernising FastCGI, etc, but I'm not sure what the point would be. (I'd still be the person maintaining it, and it'd still be a hassle. I've zero interest in selling things to people, as that only means more support.) In short I have a bunch of servers, they mostly tick over unattended, but I'm not really sure I want to keep them running for the next 10+ years. Over time our child will deserve, demand, and require more attention, which means time for personal stuff is only going to diminish. Simplifying things now wouldn't be a bad thing to do, before it is too late.

Markus Koschany: My Free Software Activities in October 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. Debian Games. Debian Java. Debian LTS. This was my twentieth month as a paid contributor and I have been paid to work 19 hours on Debian LTS, a project started by Raphaël Hertzog. I will catch up with the remaining 1.75 hours in November. In that time I did the following: Misc. Thanks for reading and see you next time.

François Marier: Test mail server on Ubuntu and Debian

I wanted to set up a mail service on a staging server that would send all outgoing emails to a local mailbox. This avoids sending emails out to real users when running the staging server using production data. First, install the postfix mail server:
apt install postfix
and choose the "Local only" mail server configuration type. Then change the following in /etc/postfix/main.cf:
default_transport = error
to:
default_transport = local:root
and restart postfix:
systemctl restart postfix.service
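As an aside, the same main.cf change can be made non-interactively with the postconf(1) utility that ships with postfix, which may be handier when scripting the staging setup. A small sketch, equivalent to the manual edit above:
# rewrite default_transport in /etc/postfix/main.cf, then reload
postconf -e 'default_transport = local:root'
systemctl restart postfix.service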
Once that's done, you can find all of the emails in /var/mail/root. To read them, you can install mutt:
apt install mutt
and then view the mailbox like this:
mutt -f /var/mail/root
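To check that everything is wired up correctly, one option is to send a test message through the local sendmail interface (the sendmail binary is provided by postfix; the full path is used here in case /usr/sbin is not on your PATH) and confirm it lands in root's mailbox. A minimal sketch:
# send a test message addressed to root; the body is read from stdin
printf 'Subject: staging test\n\nHello from the staging server.\n' | /usr/sbin/sendmail root
# it should now appear in the local mailbox
mutt -f /var/mail/root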

12 November 2017

Ben Armstrong: The Joy of Cat Intelligence

As a cat owner, being surprised by cat intelligence delights me. They're not exactly smart like a human, but they are smart in cattish ways. The more I watch them and try to sort out what they're thinking, the more it pleases me to discover they can solve problems and adapt in recognizably intelligent ways, sometimes unique to each individual cat. Each time that happens, it evokes in me affectionate wonder. Today, I had one of those joyful moments.

First, you need to understand that some months ago, I thought I had my male cat all figured out with respect to mealtimes. I had been cleaning up after my oafish boy who made a watery mess on the floor from his mother's bowl each morning. I was slightly annoyed, but was mostly curious, and had a hunch. A quick search of the web confirmed it: my cat was left-handed. Not only that, but I learned this is typical for males, whereas females tend to be right-handed. Right away, I knew what I had to do: I adjusted the position of their water bowls relative to their food, swapping them from right to left; the messy morning feedings ceased. I congratulated myself for my cleverness.

You see, after the swap, as he hooked the kibbles with his left paw out of the right-hand bowl, they would land immediately on the floor where he could give them chase. The swap caused the messes to cease because before, his left-handed scoops would land the kibbles in the water to the right; he would then have to scoop the kibble out onto the floor, sprinkling water everywhere! Furthermore, the sodden kibble tended to not skitter so much, decreasing his fun. Or so I thought. Clearly, I reasoned, having sated himself on the entire contents of his own bowl, he turned to pilfering his mother's leftovers for some exciting kittenish play. I had evidence to back it up, too: he and his mother both seem to enjoy this game, a regular fixture of their mealtime routines. She, too, is adept at hooking out the kibbles, though mysteriously, without making a mess in her water, whichever way the bowls are oriented. I chalked this up to his general clumsiness of movement vs. her daintiness and precision, something I had observed many times before.

Come to think of it, lately, I've been seeing more mess around his mother's bowl again. Hmm. I don't know why I didn't stop to consider why.

And then my cat surprised me again. This morning, with Shadow behind my back as I sat at my computer, finishing up his morning meal at his mother's bowl, I thought I heard something odd. Or rather, I didn't hear something. The familiar skitter-skitter sound of kibbles evading capture was missing. So I turned and looked. My dear, devious boy had squished his overgrown body behind his mother's bowls, nudging them ever so slightly askew to fit the small space. Now the bowl orientation was swapped back again. Stunned, I watched him carefully flip out a kibble with his left paw. Plop! Into the water on the right. Concentrating, he fished for it. A miss! He casually licked the water from his paw. Another try. Swoop! Plop, onto the floor. No chase now, just satisfied munching of his somewhat mushy kibble.

And then it dawned on me that I had got it somewhat wrong. Yes, he enjoyed Chase the Kibble, like his mom, but I never recognized he had been indulging in a favourite pastime, peculiarly his own. I had judged his mealtime messes as accidents, a very human way of thinking about my problem. Little did I know, it was deliberate! His private game was Bobbing for Kibbles.
I don't know if it's the altered texture, or dabbling in the bowl, but whatever the reason, due to my meddling, he had been deprived of this pleasure. No worries, a thwarted cat will find a way. And that is the joy of cat intelligence.
