debian/
subdirectory containing the packaging control
files. When upstream makes a new release, I simply merge their
release tag into master: git merge 1.2.3 (after reviewing the
diff!).
Packaging things for Debian turns out to be a great way to find small
bugs that need to be fixed, and I end up forwarding a lot of patches
upstream. Since the projects are on GitHub, that means forking the
repo and submitting pull requests. So I end up with three remotes:
origin
upstream
fork
git push -u fork fix-gcc-6. However, it is
also useful to have a command that pushes everything to the places it
should be: pushes bugfix branches to fork, my master packaging
branch to origin, and definitely doesn't try to push anything to
upstream (recently an upstream project gave me push access because I
was sending so many patches, and then got a bit annoyed when I pushed
a series of Debian release tags to their GitHub repo by mistake).
I spent quite a lot of time reading git-config(1) and git-push(1),
and came to the conclusion that there is no combination of git
settings and a push command that does the right thing in all cases.
Candidates, and why they're insufficient:
git push --all, combined with the remote.pushDefault and
branch.*.pushRemote configuration options. The problem is that git
push --all pushes to only one remote, which it selects by looking at
the current branch. If I ran this command for all remotes, it would
push everything everywhere.
git push <remote> : for each remote. The problem is that the fork
and upstream remotes have upstream's master branch, and the origin
remote has my packaging branch, so matching-branch pushes would send
branches to remotes where they don't belong.
So I wrote my own script, git push-all, which does the
right thing. As you will see from the description at the top of the
script, it uses remote.pushDefault and branch.*.pushRemote to
determine where it should push, falling back to pushing to the remote
the branch is tracking. It won't push anything when all three of
these are unspecified, and more generally, it won't create new remote
branches except in the case where the branch-specific setting
branch.*.pushRemote has been specified. Magit
makes it easy to set remote.pushDefault and branch.*.pushRemote.
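Both settings are ordinary git-config keys, so they can also be set and inspected from the shell. A minimal sketch in a throwaway repository (the branch name fix-gcc-6 and remote names origin/fork are just examples):

```shell
# throwaway repository so these config edits don't touch a real project
dir=$(mktemp -d)
git init -q "$dir"

# default: push every branch to "origin"...
git -C "$dir" config remote.pushDefault origin
# ...except the (hypothetical) bugfix branch, which goes to "fork"
git -C "$dir" config branch.fix-gcc-6.pushRemote fork

git -C "$dir" config --get branch.fix-gcc-6.pushRemote   # prints "fork"
```

The branch-specific key wins over remote.pushDefault, which is exactly the precedence the script's step 1 and step 2 rely on.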
I have this in my ~/.mrconfig:
git_push = git push-all
so that I can just run mr push
to ensure that all of my work has
been sent where it needs to be (see
myrepos).
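For context, that git_push line lives inside a per-repository stanza of ~/.mrconfig; a sketch with a hypothetical repository path and URL:

```ini
[src/some-package]
checkout = git clone https://github.com/example/some-package.git some-package
git_push = git push-all
```

With that in place, mr push runs git push-all in each registered repository instead of a plain git push.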
#!/usr/bin/perl
# git-push-all -- intelligently push most branches
# Copyright (C) 2016 Sean Whitton
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at
# your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# Prerequisites:
# The Git::Wrapper, Config::GitLike, and List::MoreUtils perl
# libraries. On a Debian system,
# apt-get install libgit-wrapper-perl libconfig-gitlike-perl \
# liblist-moreutils-perl
# Description:
# This script will try to push all your branches to the places they
# should be pushed, with --follow-tags. Specifically, for each branch,
#
# 1. If branch.pushRemote is set, push it there
#
# 2. Otherwise, if remote.pushDefault is set, push it there
#
# 3. Otherwise, if it is tracking a remote branch, push it there
#
# 4. Otherwise, exit non-zero.
#
# If a branch is tracking a remote that you cannot push to, be sure to
# set at least one of branch.pushRemote and remote.pushDefault.
use strict;
use warnings;
no warnings "experimental::smartmatch";
use Git::Wrapper;
use Config::GitLike;
use List::MoreUtils qw( uniq apply );
my $git = Git::Wrapper->new(".");
my $config = Config::GitLike->new( confname => 'config' );
$config->load_file('.git/config');
my @branches = apply { s/[ \*]//g } $git->branch;
my @allBranches = apply { s/[ \*]//g } $git->branch( all => 1 );
my $pushDefault = $config->get( key => "remote.pushDefault" );
my %pushes;
foreach my $branch ( @branches ) {
    my $pushRemote = $config->get( key => "branch.$branch.pushRemote" );
    my $tracking = $config->get( key => "branch.$branch.remote" );
    if ( defined $pushRemote ) {
        print "I: pushing $branch to $pushRemote (its pushRemote)\n";
        push @{ $pushes{$pushRemote} }, $branch;
    }
    # don't push unless it already exists on the remote: this script
    # avoids creating branches
    elsif ( defined $pushDefault
            && "remotes/$pushDefault/$branch" ~~ @allBranches ) {
        print "I: pushing $branch to $pushDefault (the remote.pushDefault)\n";
        push @{ $pushes{$pushDefault} }, $branch;
    }
    elsif ( !defined $pushDefault && defined $tracking ) {
        print "I: pushing $branch to $tracking (its tracking branch)\n";
        push @{ $pushes{$tracking} }, $branch;
    }
    else {
        die "E: couldn't find anywhere to push $branch";
    }
}
foreach my $remote ( keys %pushes ) {
    my @branches = @{ $pushes{$remote} };
    system "git push --follow-tags $remote @branches";
    exit 1 if ( $? != 0 );
}
This doesn't appear to cover the other kind of comment-moderation problem: that where overmoderation and attachment to poster identity leads to an environment of stifling conventionalism. Photography communities in particular (e.g. flickr, instagram, 500px) are vulnerable to turning into circlejerks where no-one is willing to say what they mean for fear of appearing the negative nancy (no pun intended) and where high post-count contributors' poorly-supported opinions become elevated above said views' merits. In such communities the typical discussion is at the level of tepid platitude: "good exposure!", "nice depth of field!", or "cool HDR!". On the other end of the scale there's the imageboard style of community where anonymity is the norm, feedback is uncompromisingly harsh, and uselessly opaque criticism appears as such on its face; unsuited to the overly sensitive but hideously valuable to the advancing novice. Ordinary web forums, with tools oriented towards a punitive "he said the n-word! delete his account and everything he's posted! persona non grata, in damnatio memoriae!" school of moderation, strongly tend to the former.
ksandstr on LWN
native_libs.txt
files (#825857).
SOURCE_DATE_EPOCH.
Packages fixed
The following 9 packages have become reproducible due to changes in their
build dependencies:
cclib
librun-parts-perl
llvm-toolchain-snapshot
python-crypto
python-openid
r-bioc-shortread
r-bioc-variantannotation
ruby-hdfeos5
sqlparse
The following packages have become reproducible after being fixed:
SHELL set to a static value.
__DATE__/__TIME__ macros, since gcc can handle it now.
SOURCE_DATE_EPOCH for embedded timestamp.
SOURCE_DATE_EPOCH for timestamps embedded into manpages.
SOURCE_DATE_EPOCH for embedded timestamp.
SOURCE_DATE_EPOCH for timestamps embedded into manpages.
SOURCE_DATE_EPOCH for embedded timestamp.
printf used instead of non-portable echo.

postconf -e maximal_queue_lifetime=10d
Then I created a new user:
adduser mailq-check
with a password straight out of pwgen -s 32
.
I gave ssh
permission to
that user:
adduser mailq-check sshuser
and then authorized my new ssh key (see next section):
sudo -u mailq-check -i
mkdir ~/.ssh/
cat - > ~/.ssh/authorized_keys
ssh-keygen -t ed25519 -f .ssh/egilsstadir-mailq-check
cat ~/.ssh/egilsstadir-mailq-check.pub
which I then installed on the server.
Then I added this cronjob in /etc/cron.d/egilsstadir-mailq-check:
0 2 * * * francois /usr/bin/ssh -i /home/francois/.ssh/egilsstadir-mailq-check mailq-check@egilsstadir mailq | grep -v "Mail queue is empty"
and that's it. I get a (locally delivered) email whenever the mail queue on
the server is non-empty.
There is a race condition built into this setup since it's possible that the
server will want to send an email at 2am. However, all that does is send a
spurious warning email in that case and so it's a pretty small price to pay
for a dirt simple setup that's unlikely to break.
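The reason this only produces mail when something is wrong: cron sends an email only if the command emits output, and grep -v swallows the empty-queue message. A simulation, with printf standing in for the remote mailq output:

```shell
# empty queue: the only line is filtered out, so there is no output
# (grep -v exits non-zero when nothing survives, hence the || true)
printf 'Mail queue is empty\n' | grep -v "Mail queue is empty" || true

# non-empty queue: the listing passes through, so cron mails it
printf 'ABC123  1024 Mon May  2 a@example.com\n' | grep -v "Mail queue is empty"
```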
dpkg-deb --control skype-debian_4.3.0.37-1_i386.deb
and
confirm that there's nothing executable in there. You should also
list the contents with dpkg-deb --contents
skype-debian_4.3.0.37-1_i386.deb, and confirm that it doesn't install
anything to places that will be executed by the system, such as to
/etc/cron.d. For my own reference the safe .deb has sha256 hash
a820e641d1ee3fece3fdf206f384eb65e764d7b1ceff3bc5dee818beb319993c,
but you should perform these checks yourself.
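One convenient way to compare against a known hash is sha256sum's -c mode. A sketch with a throwaway file standing in for the real .deb (substitute the actual skype-debian_*.deb filename and its published hash):

```shell
# stand-in file; in practice this would be the downloaded .deb
printf 'not really a deb\n' > /tmp/example.deb
expected=$(sha256sum /tmp/example.deb | awk '{ print $1 }')

# checksum-file format: hash, two spaces, filename; -c re-verifies it
echo "$expected  /tmp/example.deb" | sha256sum -c -
# prints "/tmp/example.deb: OK" and exits non-zero on any mismatch
```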
Then install Firejail and Xephyr. You can hook Firejail and Xephyr
together manually, but Firejail version 0.9.40-rc1 can do it for you,
which is very convenient, so we install that from the Debian
Experimental archive:
# apt-get install xserver-xephyr firejail/experimental
Here's an invocation to use the jail:
$ firejail --x11=xephyr --private --private-tmp openbox
$ DISPLAY=$(firemon --x11 | grep "DISPLAY" | sed 's/ DISPLAY //') \
firejail --private --private-tmp skype
This takes advantage of Firejail's existing jail profile for Skype.
We get the following:
a private /home/you, so that Skype cannot access any of your files
(the disadvantage is that Skype can't remember your username and
password; you can look at --private=directory to do something
persistent).

package.el
a few years ago, and last year David
Bremner wrote the dh_elpa
tool to simplify packaging addons for
Debian by leveraging package.el
features. Packaging a series of
addons for Debian left me with a wishlist of features for dh_elpa and
I was recently able to implement them.
Debian tooling generally uses Perl, a language I didn't know before
starting on this project. I was fortunate enough to receive a free
review copy of Perl 5 by Example when I attended a meeting of the
Bay Area Linux Users Group while I was visiting San Francisco a few
months ago. I accepted the book with the intent of doing this work.
dh_make_elpa
dh_make_elpa
(at present available from Debian experimental) is a
Perl script to convert a git repository cloned from the upstream of an
Emacs Lisp addon to a rudimentary Debian package. It performs a lot
of guesswork, and its simple heuristics are something I hope to
improve on. Since I am new to object-oriented program design in Perl
and I wanted to leverage object-oriented Debian tooling library code,
I took the structure of my project from dh_make_perl
. In this
manner I found it easy and pleasant to write a maintainable script.
dh_elpa_test
A lot of Emacs Lisp addon packages use a program called Cask to manage
the Emacs Lisp dependencies needed to run their test suites. That
meant that dh_auto_test
often failed to run Emacs Lisp addon package
test suites. Since the Debian packaging toolchain already has
advanced dependency management, it's undesirable to involve Cask in
the package build pipeline if it can be avoided. I had been copying
and pasting the code needed to make the tests run in our environment
to the debian/rules
files of each package whose test suite I wanted
to run.
dh_elpa_test
tries to detect Emacs Lisp addon package test suites
and run them with the workarounds needed in our environment. This
avoids boilerplate in debian/rules. dh_elpa_test also disables
dh_auto_test to avoid an inadvertent Cask invocation.
Future & acknowledgements
My hope for this work was to make it easier and faster to package
Emacs Lisp addon packages for Debian, for my own sake and for anyone
new who is interested in joining the
pkg-emacsen team. In the
future, I want to have dh_elpa_test
generate an autopkgtest
definition so that a Testsuite: pkg-emacsen
line in debian/control
is enough to have an Emacs Lisp addon package test suite run on
Debian CI.
I'm very grateful to David Bremner for reviewing and supporting this
work, and also for supporting my Emacs Lisp addon packaging work more
generally.
grep "status=expired, returned to sender" /var/log/mail.log \
    | awk '{ print $6 }' \
    | while read id; do grep "$id" -m1 /var/log/mail.log; done
The first grep determines the queue id of the messages that were
expired, and then the second grep finds the first entry in the mail
log for that message, which provides the time the message was sent.
Replacing -m1
with -m4
gave me the message-id of the messages and
the intended recipient of the messages. This allowed me to restore
them from backups or bounce them from my sent mail folder for those
that I tried to send myself.
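The two-pass trick can be seen on a toy log (fabricated sample lines, not real postfix output; field positions may differ slightly on your system):

```shell
# fabricate a two-line mail.log: a send and the later expiry of the
# same queue id
cat > /tmp/mail.log <<'EOF'
May  1 10:00:00 host postfix/qmgr[123]: ABC123: from=<a@example.com>, size=100
May  6 10:00:00 host postfix/qmgr[123]: ABC123: status=expired, returned to sender
EOF

# pass 1 extracts expired queue ids (field 6); pass 2 pulls each id's
# first log entry, which records when the message was sent
grep "status=expired, returned to sender" /tmp/mail.log \
    | awk '{ print $6 }' \
    | while read id; do grep "$id" -m1 /tmp/mail.log; done
# prints the "from=<a@example.com>" line from May 1
```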
To prevent this from happening again, I've extended the maximum
lifetime of messages in the queue from 5 days to 10:
postconf -e maximal_queue_lifetime=10d
I've incorporated a check for clogged mail queues on my machines into
my weekly backup routine.
dcmd supports .buildinfo files. Original patch by josch.
qch made reproducible by using a fixed date instead of the current time. Original patch by Dhole.
CreationDate made not to appear in comments of DVI / PS files produced by TeX. He also mentioned that some timestamps can be replaced by using the -output-comment option and that the next version of pdftex will have patches inspired by reproducible builds to mitigate the effects (see SOURCE_DATE_EPOCH patches).
SOURCE_DATE_EPOCH.
An armhf build node has been added (thanks to Vagrant Cascadian) and integrated into the Jenkins setup for 4 new armhf builder jobs. (h01ger)
All packages for Debian testing (Stretch) have been tested on armhf in just 42 days. It took 114 days to reach the same point for unstable back when the armhf test infrastructure was much smaller.
Package sets have been enabled for testing on armhf. (h01ger)
Packages producing architecture-independent ( Arch:all ) binary packages together with architecture dependent packages targeted for specific architectures will now only be tested on matching architectures. (Steven Chamberlain, h01ger)
As the Jenkins setup is now made of 252 different jobs, the overview has been split into 11 different smaller views. (h01ger)
clean-github-pr.py
, forks a repository
and then sets various attributes of it to make it as obvious as GitHub
allows that it's just a temporary fork made in order to submit a pull
request. Invoke it like this:
$ clean-github-pr.py upstream-owner/repo-to-fork
You will need the PyGitHub python library, which on a Debian Stretch
system can be installed with apt-get install python-github
.
#!/usr/bin/python
# clean-github-pr --- Create tidy repositories for pull requests
#
# Copyright (C) 2016 Sean Whitton
#
# clean-github-pr is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# clean-github-pr is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with clean-github-pr. If not, see <http://www.gnu.org/licenses/>.
import github
import sys
import time
import tempfile
import shutil
import subprocess
import os
CREDS_FILE = os.getenv("HOME") + "/.cache/clean-github-pr-creds"
def main():
    # check arguments
    if len(sys.argv) != 2:
        print sys.argv[0] + ": usage: " + sys.argv[0] + " USER/REPO"
        sys.exit(1)

    # check creds file
    try:
        f = open(CREDS_FILE, 'r')
    except IOError:
        print sys.argv[0] + ": please put your github username and password, separated by a colon, in the file ~/.cache/clean-github-pr-creds"
        sys.exit(1)

    # just to be sure
    os.chmod(CREDS_FILE, 0600)

    # make the fork
    creds = f.readline()
    username = creds.split(":")[0]
    pword = creds.split(":")[1].strip()
    g = github.Github(username, pword)
    u = g.get_user()
    source = sys.argv[1]
    fork = sys.argv[1].split("/")[1]
    print "forking repo " + source
    u.create_fork(g.get_repo(source))
    while True:
        try:
            r = u.get_repo(fork)
        except github.UnknownObjectException:
            print "still waiting"
            time.sleep(5)
        else:
            break

    # set up & push github branch
    user_work_dir = os.getcwd()
    work_area = tempfile.mkdtemp()
    os.chdir(work_area)
    subprocess.call(["git", "clone", "https://github.com/" + username + "/" + fork])
    os.chdir(work_area + "/" + fork)
    subprocess.call(["git", "checkout", "--orphan", "github"])
    subprocess.call(["git", "rm", "-rf", "."])
    with open("README.md", 'w') as f:
        f.write("This repository is just a fork made in order to submit a pull request; please ignore.")
    subprocess.call(["git", "add", "README.md"])
    subprocess.call(["git", "commit", "-m", "fork for a pull request; please ignore"])
    subprocess.call(["git", "push", "origin", "github"])
    os.chdir(user_work_dir)
    shutil.rmtree(work_area)

    # set clean repository settings
    r.edit(fork,
           has_wiki=False,
           description="Fork for a pull request; please ignore",
           homepage="",
           has_issues=False,
           has_downloads=False,
           default_branch="github")

if __name__ == "__main__":
    main()
If you would like to improve clean-github-pr.py, please send me a
patch or a pull request against the version in
my dotfiles repository.
apt-get
it. It's hard
to get the systems and processes to make this possible right,
especially without a team being paid full-time to set it all up.
Debian has managed it on the backs of volunteers. That s something I
want to be a part of.
So far, most of my efforts have been confined to packaging addons for
the Emacs text editor and the Firefox web browser. Debian has common
frameworks for packaging these and lots of scripts that make it pretty
easy to produce new packages (I did one yesterday in about 30
minutes). It's valuable to package these addons because there are a
great many advantages for a user in obtaining them from their local
Debian mirror rather than downloading them from
the de facto Emacs addons repository or the
Mozilla addons site. Users know that trusted Debian project
volunteers have reviewed the software (I cannot yet upload my
packages to the Debian archive by myself), and the whole Debian
infrastructure for reporting and classifying bugs can be brought to
bear. The quality assurance standards built into these processes are
higher than your average addon author's, not that I mean to suggest
anything about authors of the particular addons I've packaged so far.
And automating the installation of such addons is easy as there are
all sorts of tools to automate installations of Debian systems and
package sets.
I hope that I can expand my work beyond packaging Emacs and Firefox
addons in the future. It's been great, though, to build my general
knowledge of the Debian toolchain and the project's social
organisation while working on something that is both relatively simple
and valuable to package. Now, I said at the beginning of this post
that it was following the work of Joey Hess that brought me to Debian
development. One thing that worries me about becoming involved in
more contentious parts of the Debian project is the dysfunction that
he saw in the Debian decision-making process, dysfunction which
eventually led to his
resignation
from the project in 2014. I hope that I can avoid getting quagmired
and demotivated.