Search Results: "francois"

30 September 2023

François Marier: Things I do after uploading a new package to Debian

There are a couple of things I tend to do after packaging a piece of software for Debian, filing an Intent To Package bug and uploading the package. This is both a checklist for me and (hopefully) a way to inspire other maintainers to go beyond the basic package maintainer duties as documented in the Debian Developer's Reference. If I've missed anything, please leave a comment or send me an email!

Salsa for collaborative development To foster collaboration and allow others to contribute to the packaging, I upload my package to a new subproject on Salsa. By doing this, I enable other Debian contributors to make improvements and propose changes via merge requests. I also like to upload the project logo in the settings page (i.e. https://salsa.debian.org/debian/packagename/edit) since that will show up on some dashboards like the Package overview.
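For illustration, pushing an existing packaging repository to a freshly created Salsa project looks roughly like this (a sketch: the project path follows the debian/packagename convention mentioned above, and the remote name is my own choice):
git remote add salsa git@salsa.debian.org:debian/packagename.git
git push salsa --all
git push salsa --tags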

Launchpad for interacting with downstream Ubuntu users While Debian is my primary focus, I also want to keep an eye on how my package is doing on derivative distributions like Ubuntu. To do this, I subscribe to bugs related to my package on Launchpad. Ubuntu bugs are rarely Ubuntu-specific and so I will often fix them in Debian. I also set myself as the answer contact on Launchpad Answers since these questions are often the sign of a Debian bug or a lack of documentation. I don't generally bother to fix bugs on Ubuntu directly though since I've not had much luck with packages in universe lately. I'd rather not spend much time preparing a package that's not going to end up being released to users as part of a Stable Release Update. On the other hand, I have successfully requested simple Debian syncs when an important update was uploaded after the Debian Import Freeze.

Screenshots and tags I take screenshots of my package and upload them to https://screenshots.debian.net to help users understand what my package offers and how it looks. I believe that these screenshots end up in "software store" type applications. Similarly, I add tags to my package using https://debtags.debian.org. I'm not entirely sure where these tags are used, but they are visible from apt show packagename.
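For example, once the tags have propagated to the archive metadata, something like this should display them (the exact field layout of the apt show output is an assumption):
apt show packagename | grep -i '^Tag'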

Monitoring Upstream Releases Staying up-to-date with upstream releases is one of the most important duties of a software packager. There are a lot of different ways that upstream software authors publicize their new releases. Here are some of the things I do to monitor these releases:
  • I have a cronjob which runs uscan once a day to check for new upstream releases using the information specified in my debian/watch files:
      0 12 * * 1-5   francois  test -e /home/francois/devel/deb && HTTPS_PROXY= https_proxy= uscan --report /home/francois/devel/deb || true
    
  • I subscribe to the upstream project's releases RSS feed, if available. For example, I subscribe to the GitHub tags feed for git-secrets and Launchpad announcements for email-reminder.
  • If the upstream project maintains an announcement mailing list, I subscribe to it (e.g. rkhunter-announce or tor release announcements).
When nothing else is available, I write a cronjob that downloads the upstream changelog once a day and commits it to a local git repo:
#!/bin/bash
pushd /home/francois/devel/zlib-changelog > /dev/null
wget --quiet -O ChangeLog.txt https://zlib.net/ChangeLog.txt || exit 1
git diff
git commit -a -m "Updated changelog" > /dev/null
popd > /dev/null
This sends me a diff by email when a new release is added (and no emails otherwise).

27 September 2022

François Marier: Upgrading from chan_sip to res_pjsip in Asterisk 18

After upgrading to Ubuntu Jammy and Asterisk 18.10, I saw the following messages in my logs:
WARNING[360166]: loader.c:2487 in load_modules: Module 'chan_sip' has been loaded but was deprecated in Asterisk version 17 and will be removed in Asterisk version 21.
WARNING[360174]: chan_sip.c:35468 in deprecation_notice: chan_sip has no official maintainer and is deprecated.  Migration to
WARNING[360174]: chan_sip.c:35469 in deprecation_notice: chan_pjsip is recommended.  See guides at the Asterisk Wiki:
WARNING[360174]: chan_sip.c:35470 in deprecation_notice: https://wiki.asterisk.org/wiki/display/AST/Migrating+from+chan_sip+to+res_pjsip
WARNING[360174]: chan_sip.c:35471 in deprecation_notice: https://wiki.asterisk.org/wiki/display/AST/Configuring+res_pjsip
and so I decided it was time to stop postponing the overdue migration of my working setup from chan_sip to res_pjsip. It turns out that it was not as painful as I expected, though the conversion script bundled with Asterisk didn't work for me out of the box.

Debugging Before you start, one very important thing to note is that the SIP debug information you used to see when running this in the asterisk console (asterisk -r):
sip set debug on
now lives behind this command:
pjsip set logger on

SIP phones The first thing I migrated was the config for my two SIP phones (Snom 300 and Snom D715). The original config for them in sip.conf was:
[2000]
; Snom 300
type=friend
qualify=yes
secret=password123
encryption=no
context=full
host=dynamic
nat=no
directmedia=no
mailbox=10@internal
vmexten=707
dtmfmode=rfc2833
call-limit=2
disallow=all
allow=g722
allow=ulaw
[2001]
; Snom D715
type=friend
qualify=yes
secret=password456
encryption=no
context=full
host=dynamic
nat=no
directmedia=yes
mailbox=10@internal
vmexten=707
dtmfmode=rfc2833
call-limit=2
disallow=all
allow=g722
allow=ulaw
and that became the following in pjsip.conf:
[transport-udp]
type = transport
protocol = udp
bind = 0.0.0.0
external_media_address = myasterisk.dyn.example.com
external_signaling_address = myasterisk.dyn.example.com
local_net = 192.168.0.0/255.255.0.0
[2000]
type = aor
max_contacts = 1
[2000]
type = auth
username = 2000
password = password123
[2000]
type = endpoint
context = full
dtmf_mode = rfc4733
disallow = all
allow = g722
allow = ulaw
direct_media = no
mailboxes = 10@internal
auth = 2000
outbound_auth = 2000
aors = 2000
[2001]
type = aor
max_contacts = 1
[2001]
type = auth
username = 2001
password = password456
[2001]
type = endpoint
context = full
dtmf_mode = rfc4733
disallow = all
allow = g722
allow = ulaw
direct_media = yes
mailboxes = 10@internal
auth = 2001
outbound_auth = 2001
aors = 2001
The difference in the direct_media line between the two phones has to do with how they each connect to my Asterisk server and whether or not they have access to the Internet.

Internal calls For some reason, my internal calls (from one SIP phone to the other) didn't work when using "aliases". I fixed it by changing this blurb in extensions.conf from:
[speeddial]
exten => 1000,1,Dial(SIP/2000,20)
exten => 1001,1,Dial(SIP/2001,20)
to:
[speeddial]
exten => 1000,1,Dial(${PJSIP_DIAL_CONTACTS(2000)},20)
exten => 1001,1,Dial(${PJSIP_DIAL_CONTACTS(2001)},20)
I have not yet dug into what this changes or why it's necessary and so feel free to leave a comment if you know more here.
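If you want to double-check how the new Dial() lines were parsed, the dialplan can be inspected from the Asterisk console (asterisk -r); the context name below is the one used in the blurb above:
dialplan show speeddial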

PSTN trunk Once I had the internal phones working, I moved to making and receiving phone calls over the PSTN, for which I use VoIP.ms with encryption. I had to change the following in my sip.conf:
[general]
register => tls://555123_myasterisk:password789@vancouver2.voip.ms
externhost=myasterisk.dyn.example.com
localnet=192.168.0.0/255.255.0.0
tcpenable=yes
tlsenable=yes
tlscertfile=/etc/asterisk/asterisk.cert
tlsprivatekey=/etc/asterisk/asterisk.key
tlscapath=/etc/ssl/certs/
[voipms]
type=peer
host=vancouver2.voip.ms
secret=password789
defaultuser=555123_myasterisk
context=from-voipms
disallow=all
allow=ulaw
allow=g729
insecure=port,invite
canreinvite=no
trustrpid=yes
sendrpid=yes
transport=tls
encryption=yes
to the following in pjsip.conf:
[transport-tls]
type = transport
protocol = tls
bind = 0.0.0.0
external_media_address = myasterisk.dyn.example.com
external_signaling_address = myasterisk.dyn.example.com
local_net = 192.168.0.0/255.255.0.0
cert_file = /etc/asterisk/asterisk.cert
priv_key_file = /etc/asterisk/asterisk.key
ca_list_path = /etc/ssl/certs/
method = tlsv1_2
[voipms]
type = registration
transport = transport-tls
outbound_auth = voipms
client_uri = sip:555123_myasterisk@vancouver2.voip.ms
server_uri = sip:vancouver2.voip.ms
[voipms]
type = auth
password = password789
username = 555123_myasterisk
[voipms]
type = aor
contact = sip:555123_myasterisk@vancouver2.voip.ms
[voipms]
type = identify
endpoint = voipms
match = vancouver2.voip.ms
[voipms]
type = endpoint
context = from-voipms
disallow = all
allow = ulaw
allow = g729
from_user = 555123_myasterisk
trust_id_inbound = yes
media_encryption = sdes
auth = voipms
outbound_auth = voipms
aors = voipms
rtp_symmetric = yes
rewrite_contact = yes
send_rpid = yes
timers = no
The TLS method line is needed since the default in Debian OpenSSL is too strict. The timers line is to prevent outbound calls from getting dropped after 15 minutes. Finally, I changed the Dial() lines in these extensions.conf blurbs from:
[from-voipms]
exten => 5551231000,1,Goto(2000,1)
exten => 2000,1,Dial(SIP/2000&SIP/2001,20)
exten => 2000,n,Goto(in2000-${DIALSTATUS},1)
exten => 2000,n,Hangup
exten => in2000-BUSY,1,VoiceMail(10@internal,su)
exten => in2000-BUSY,n,Hangup
exten => in2000-CONGESTION,1,VoiceMail(10@internal,su)
exten => in2000-CONGESTION,n,Hangup
exten => in2000-CHANUNAVAIL,1,VoiceMail(10@internal,su)
exten => in2000-CHANUNAVAIL,n,Hangup
exten => in2000-NOANSWER,1,VoiceMail(10@internal,su)
exten => in2000-NOANSWER,n,Hangup
exten => _in2000-.,1,Hangup(16)
[pstn-voipms]
exten => _1NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _1NXXNXXXXXX,n,Dial(SIP/voipms/${EXTEN})
exten => _1NXXNXXXXXX,n,Hangup()
exten => _NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _NXXNXXXXXX,n,Dial(SIP/voipms/1${EXTEN})
exten => _NXXNXXXXXX,n,Hangup()
exten => _011X.,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _011X.,n,Authenticate(1234)
exten => _011X.,n,Dial(SIP/voipms/${EXTEN})
exten => _011X.,n,Hangup()
exten => _00X.,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _00X.,n,Authenticate(1234)
exten => _00X.,n,Dial(SIP/voipms/${EXTEN})
exten => _00X.,n,Hangup()
to:
[from-voipms]
exten => 5551231000,1,Goto(2000,1)
exten => 2000,1,Dial(PJSIP/2000&PJSIP/2001,20)
exten => 2000,n,Goto(in2000-${DIALSTATUS},1)
exten => 2000,n,Hangup
exten => in2000-BUSY,1,VoiceMail(10@internal,su)
exten => in2000-BUSY,n,Hangup
exten => in2000-CONGESTION,1,VoiceMail(10@internal,su)
exten => in2000-CONGESTION,n,Hangup
exten => in2000-CHANUNAVAIL,1,VoiceMail(10@internal,su)
exten => in2000-CHANUNAVAIL,n,Hangup
exten => in2000-NOANSWER,1,VoiceMail(10@internal,su)
exten => in2000-NOANSWER,n,Hangup
exten => _in2000-.,1,Hangup(16)
[pstn-voipms]
exten => _1NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _1NXXNXXXXXX,n,Dial(PJSIP/${EXTEN}@voipms)
exten => _1NXXNXXXXXX,n,Hangup()
exten => _NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _NXXNXXXXXX,n,Dial(PJSIP/1${EXTEN}@voipms)
exten => _NXXNXXXXXX,n,Hangup()
exten => _011X.,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _011X.,n,Authenticate(1234)
exten => _011X.,n,Dial(PJSIP/${EXTEN}@voipms)
exten => _011X.,n,Hangup()
exten => _00X.,1,Set(CALLERID(all)=Francois Marier <5551231000>)
exten => _00X.,n,Authenticate(1234)
exten => _00X.,n,Dial(PJSIP/${EXTEN}@voipms)
exten => _00X.,n,Hangup()
Note that it's not just a matter of replacing SIP/ with PJSIP/: the channel also has to be specified in a format that pjsip supports, since SIP/trunkname/extension isn't one of them.
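In other words, summarizing the blurbs above, the trunk dial syntax changes roughly like this:
; chan_sip:   Dial(SIP/trunkname/${EXTEN})
; res_pjsip:  Dial(PJSIP/${EXTEN}@trunkname)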

2 October 2021

François Marier: Setting up a JMP SIP account on Asterisk

JMP offers VoIP calling via XMPP, but it's also possible to use it over SIP. The underlying VoIP calling functionality in JMP is provided by Bandwidth, but their old Asterisk instructions didn't quite work for me. Here's how I set it up on my Asterisk server.

Get your SIP credentials After signing up for JMP and setting it up in your favourite XMPP client, send the following message to the cheogram.com gateway contact:
reset sip account
In response, you will receive a message containing:
  • a numerical username
  • a password (e.g. three lowercase words separated by spaces)

Add SIP account to your Asterisk config First of all, I added the following near the top of my /etc/asterisk/sip.conf:
[general]
register => username:three secret words@jmp.cbcbc7.auth.bandwidth.com:5008
The other non-default options I have set in [general] are:
context=public
allowoverlap=no
udpbindaddr=0.0.0.0
tcpenable=yes
tcpbindaddr=0.0.0.0
tlsenable=yes
transport=udp
srvlookup=no
vmexten=voicemail
relaxdtmf=yes
useragent=Asterisk PBX
tlscertfile=/etc/asterisk/asterisk.cert
tlsprivatekey=/etc/asterisk/asterisk.key
tlscapath=/etc/ssl/certs/
externhost=machinename.dyndns.org
localnet=192.168.0.0/255.255.0.0
Note that you can have more than one register line in your config if you use more than one SIP provider, but you must register with the server whether you want to receive incoming calls or not. Then I added a new blurb to the bottom of the same file:
[jmp]
type=peer
host=jmp.cbcbc7.auth.bandwidth.com
port=5008
secret=three secret words
defaultuser=username
context=from-jmp
disallow=all
allow=ulaw
allow=g729
insecure=port,invite
canreinvite=no
dtmfmode=rfc2833
and for reference, here's the blurb for my Snom 300 SIP phone:
[1001]
; Snom 300
type=friend
qualify=yes
secret=password
encryption=no
context=full
host=dynamic
nat=no
directmedia=no
mailbox=1000@internal
vmexten=707
dtmfmode=rfc2833
call-limit=2
disallow=all
allow=g722
allow=ulaw
I checked that the registration was successful by running asterisk -r and then typing:
sip set debug on
before reloading the configuration using:
reload
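Another quick way to confirm that the registration succeeded (a standard chan_sip console command) is:
sip show registry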

Create Asterisk extensions to send and receive calls Once I got registration to work, I hooked this up with my other extensions so that I could send and receive calls using my JMP number. In /etc/asterisk/extensions.conf, I added the following:
[from-jmp]
include => home
exten => s,1,Goto(1000,1)
where home is the context which includes my local SIP devices and 1000 is the extension I want to ring. Then I added the following to enable calls to any destination within the North American Numbering Plan:
[pstn-jmp]
exten => _1NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231234>)
exten => _1NXXNXXXXXX,n,Dial(SIP/jmp/${EXTEN})
exten => _1NXXNXXXXXX,n,Hangup()
exten => _NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551231234>)
exten => _NXXNXXXXXX,n,Dial(SIP/jmp/1${EXTEN})
exten => _NXXNXXXXXX,n,Hangup()
Here 5551231234 is my JMP phone number, not my bwsip numerical username. For reference, here's the rest of my dialplan in /etc/asterisk/extensions.conf:
[general]
static=yes
writeprotect=no
clearglobalvars=no
[public]
exten => _X.,1,Hangup(3)
[sipdefault]
exten => _X.,1,Hangup(3)
[default]
exten => _X.,1,Hangup(3)
[internal]
include => home
[full]
include => internal
include => pstn-jmp
exten => 707,1,VoiceMailMain(1000@internal)
[home]
; Internal extensions
exten => 1000,1,Dial(SIP/1001,20)
exten => 1000,n,Goto(in1000-${DIALSTATUS},1)
exten => 1000,n,Hangup
exten => in1000-BUSY,1,Hangup(17)
exten => in1000-CONGESTION,1,Hangup(3)
exten => in1000-CHANUNAVAIL,1,VoiceMail(1000@internal,su)
exten => in1000-CHANUNAVAIL,n,Hangup(3)
exten => in1000-NOANSWER,1,VoiceMail(1000@internal,su)
exten => in1000-NOANSWER,n,Hangup(16)
exten => _in1000-.,1,Hangup(16)

Firewall Finally, I opened a few ports in my firewall by putting the following in /etc/network/iptables.up.rules:
# SIP and RTP on UDP (jmp.cbcbc7.auth.bandwidth.com)
-A INPUT -s 67.231.2.13/32 -p udp --dport 5008 -j ACCEPT
-A INPUT -s 216.82.238.135/32 -p udp --dport 5008 -j ACCEPT
-A INPUT -s 67.231.2.13/32 -p udp --sport 5004:5005 --dport 10001:20000 -j ACCEPT
-A INPUT -s 216.82.238.135/32 -p udp --sport 5004:5005 --dport 10001:20000 -j ACCEPT

Outbound calls not working While the above setup works for me for inbound calls, it doesn't currently work for outbound calls. The hostname currently resolves to one of two IP addresses:
$ dig +short jmp.cbcbc7.auth.bandwidth.com
67.231.2.13
216.82.238.135
If I pin it to the first one by putting the following in my /etc/hosts file:
67.231.2.13 jmp.cbcbc7.auth.bandwidth.com
then I get a 486 error back from the server when I dial 1-555-456-4567:
<--- SIP read from UDP:67.231.2.13:5008 --->
SIP/2.0 486 Busy Here
Via: SIP/2.0/UDP 127.0.0.1:5060;branch=z9hG4bK03210a30
From: "Francois Marier" <sip:5551231234@127.0.0.1>
To: <sip:15554564567@jmp.cbcbc7.auth.bandwidth.com:5008>
Call-ID: 575f21f36f57951638c1a8062f3a5201@127.0.0.1:5060
CSeq: 103 INVITE
Content-Length: 0
On the other hand, if I pin it to 216.82.238.135, then I get a 600 error:
<--- SIP read from UDP:216.82.238.135:5008 --->
SIP/2.0 600 Busy Everywhere
Via: SIP/2.0/UDP 127.0.0.1:5060;branch=z9hG4bK7b7f7ed9
From: "Francois Marier" <sip:5551231234@127.0.0.1>
To: <sip:15554564567@jmp.cbcbc7.auth.bandwidth.com:5008>
Call-ID: 5bebb8d05902c1732c6b9f4776844c66@127.0.0.1:5060
CSeq: 103 INVITE
Content-Length: 0
If you have any idea what might be wrong here, or if you got outbound calls to work on Bandwidth.com, please leave a comment!

14 June 2021

François Marier: Self-hosting an Ikiwiki blog

8.5 years ago, I moved my blog to Ikiwiki and Branchable. It's now time for me to take the next step and host my blog on my own server. This is how I migrated from Branchable to my own Apache server.

Installing Ikiwiki dependencies Here are all of the extra Debian packages I had to install on my server:
apt install ikiwiki ikiwiki-hosting-common gcc libauthen-passphrase-perl libcgi-formbuilder-perl libcrypt-ssleay-perl libjson-xs-perl librpc-xml-perl python-docutils libxml-feed-perl libsearch-xapian-perl libmailtools-perl highlight-common libsearch-xapian-perl xapian-omega
apt install --no-install-recommends ikiwiki-hosting-web libgravatar-url-perl libmail-sendmail-perl libcgi-session-perl
apt purge libnet-openid-consumer-perl
Then I enabled the CGI module in Apache:
a2enmod cgi
and un-commented the following in /etc/apache2/mods-available/mime.conf:
AddHandler cgi-script .cgi

Creating a separate user account Since Ikiwiki needs to regenerate my blog whenever a new article is pushed to the git repo or a comment is accepted, I created a restricted user account for it:
adduser blog
adduser blog sshuser
chsh -s /usr/bin/git-shell blog
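Depending on the system, chsh may refuse or warn about shells that are not listed in /etc/shells; if that happens, git-shell can be added there first (a sketch, not a step from my original notes):
grep -qx /usr/bin/git-shell /etc/shells || echo /usr/bin/git-shell | sudo tee -a /etc/shells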

git setup Thanks to Branchable storing blogs in git repositories, I was able to import my blog using a simple git clone in /home/blog (the srcdir):
git clone --bare git://feedingthecloud.branchable.com/ source.git
Note that the name of the directory (source.git) is important for the ikiwikihosting plugin to work. Then I pulled the .setup file out of the setup branch in that repo and put it in /home/blog/.ikiwiki/FeedingTheCloud.setup. After that, I deleted the setup branch and the origin remote from that clone:
git branch -d setup
git remote rm origin
Following the recommended git configuration, I created a working directory (the repository) for the blog user to modify the blog as needed:
cd /home/blog/
git clone /home/blog/source.git FeedingTheCloud
I added my own ssh public key to /home/blog/.ssh/authorized_keys so that I could push to the srcdir from my laptop. Finally, I generated a new ssh key without a passphrase:
ssh-keygen -t ed25519
and added it as deploy key to the GitHub repo which acts as a read-only mirror of my blog.

Ikiwiki config While I started with the Branchable setup file, I changed the following things in it:
adminemail: webmaster@fmarier.org
srcdir: /home/blog/FeedingTheCloud
destdir: /var/www/blog
url: https://feeding.cloud.geek.nz
cgiurl: https://feeding.cloud.geek.nz/blog.cgi
cgi_wrapper: /var/www/blog/blog.cgi
cgi_wrappermode: 675
add_plugins:
- goodstuff
- lockedit
- comments
- blogspam
- sidebar
- attachment
- favicon
- format
- highlight
- search
- theme
- moderatedcomments
- flattr
- calendar
- headinganchors
- notifyemail
- anonok
- autoindex
- date
- relativedate
- htmlbalance
- pagestats
- sortnaturally
- ikiwikihosting
- gitpush
- emailauth
disable_plugins:
- brokenlinks
- fortune
- more
- openid
- orphans
- passwordauth
- progress
- recentchanges
- repolist
- toggle
- txt
sslcookie: 1
cookiejar:
  file: /home/blog/.ikiwiki/cookies
useragent: ikiwiki
git_wrapper: /home/blog/source.git/hooks/post-update
urlalias:
- http://feeds.cloud.geek.nz/
- http://www.feeding.cloud.geek.nz/
owner: francois@fmarier.org
hostname: feeding.cloud.geek.nz
emailauth_sender: login@fmarier.org
allowed_attachments: admin()
Then I created the destdir:
mkdir /var/www/blog
chown blog:blog /var/www/blog
and generated the initial copy of the blog as the blog user:
ikiwiki --setup .ikiwiki/FeedingTheCloud.setup --wrappers --rebuild
One thing that failed to generate properly was the tag cloud (from the pagestats plugin). I have not been able to figure out why it fails to generate any output when run this way, but if I push to the repo and let the git hook handle the rebuilding of the wiki, the tag cloud is generated correctly. Consequently, fixing this is not high on my list of priorities, but if you happen to know what the problem is, please reach out.

Apache config Here's the Apache config I put in /etc/apache2/sites-available/blog.conf:
<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz
    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem
    Header set Strict-Transport-Security: "max-age=63072000; includeSubDomains; preload"
    Include /etc/fmarier-org/blog-common
</VirtualHost>
<VirtualHost *:443>
    ServerName www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz
    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem
    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>
<VirtualHost *:80>
    ServerName feeding.cloud.geek.nz
    ServerAlias www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz
    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>
and the common config I put in /etc/fmarier-org/blog-common:
ServerAdmin webmaster@fmarier.org
DocumentRoot /var/www/blog
LogLevel core:info
CustomLog ${APACHE_LOG_DIR}/blog-access.log combined
ErrorLog ${APACHE_LOG_DIR}/blog-error.log
AddType application/rss+xml .rss
<Location /blog.cgi>
        Options +ExecCGI
</Location>
before enabling all of this using:
a2ensite blog
apache2ctl configtest
systemctl restart apache2.service
The feeds.cloud.geek.nz domain used to be pointing to Feedburner and so I need to maintain it in order to avoid breaking RSS feeds from folks who added my blog to their reader a long time ago.

Server-side improvements Since I'm now in control of the server configuration, I was able to make several improvements to how my blog is served. First of all, I enabled the HTTP/2 and Brotli modules:
a2enmod http2
a2enmod brotli
and enabled Brotli compression by putting the following in /etc/apache2/conf-available/compression.conf:
<IfModule mod_brotli.c>
  <IfDefine !TRANSFER_COMPRESSION>
    Define TRANSFER_COMPRESSION BROTLI_COMPRESS
  </IfDefine>
</IfModule>
<IfModule mod_deflate.c>
  <IfDefine !TRANSFER_COMPRESSION>
    Define TRANSFER_COMPRESSION DEFLATE
  </IfDefine>
</IfModule>
<IfDefine TRANSFER_COMPRESSION>
  <IfModule mod_filter.c>
    AddOutputFilterByType ${TRANSFER_COMPRESSION} text/html text/plain text/xml text/css text/javascript
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/x-javascript application/javascript application/ecmascript
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/rss+xml
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/xml
  </IfModule>
</IfDefine>
and replacing /etc/apache2/mods-available/deflate.conf with the following:
# Moved to /etc/apache2/conf-available/compression.conf as per https://bugs.debian.org/972632
before enabling this new config:
a2enconf compression
Next, I made my blog available as a Tor onion service by putting the following in /etc/apache2/sites-available/blog.conf:
<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz
    ServerAlias xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion
    Header set Onion-Location "http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion%{REQUEST_URI}s"
    Header set alt-svc 'h2="xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion:443"; ma=315360000; persist=1'
    ... 
<VirtualHost *:80>
    ServerName xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion
    Include /etc/fmarier-org/blog-common
</VirtualHost>
Then I followed the Mozilla Observatory recommendations and enabled the following security headers:
Header set Content-Security-Policy: "default-src 'none'; report-uri https://fmarier.report-uri.com/r/d/csp/enforce ; style-src 'self' 'unsafe-inline' ; img-src 'self' https://seccdn.libravatar.org/ ; script-src https://feeding.cloud.geek.nz/ikiwiki/ https://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ 'unsafe-inline' 'sha256-pA8FbKo4pYLWPDH2YMPqcPMBzbjH/RYj0HlNAHYoYT0=' 'sha256-Kn5E/7OLXYSq+EKMhEBGJMyU6bREA9E8Av9FjqbpGKk=' 'sha256-/BTNlczeBxXOoPvhwvE1ftmxwg9z+WIBJtpk3qe7Pqo=' ; base-uri 'self'; form-action 'self' ; frame-ancestors 'self'"
Header set X-Frame-Options: "SAMEORIGIN"
Header set Referrer-Policy: "same-origin"
Header set X-Content-Type-Options: "nosniff"
Note that the Mozilla Observatory is mistakenly identifying HTTP onion services as insecure, so you can ignore that failure. I also used the Mozilla TLS config generator to improve the TLS config for my server. Then I added security.txt and gpc.json to the root of my git repo and then added the following aliases to put these files in the right place:
Alias /.well-known/gpc.json /var/www/blog/gpc.json
Alias /.well-known/security.txt /var/www/blog/security.txt
I also followed these instructions to create a sitemap for my blog with the following alias:
Alias /sitemap.xml /var/www/blog/sitemap/index.rss
Finally, I simplified a few error pages to save bandwidth:
ErrorDocument 301 " "
ErrorDocument 302 " "
ErrorDocument 404 "Not Found"

Monitoring 404s Another advantage of running my own web server is that I can monitor the 404s easily using logcheck by putting the following in /etc/logcheck/logcheck.logfiles:
/var/log/apache2/blog-error.log 
Based on that, I added a few redirects to point bots and users to the location of my RSS feed:
Redirect permanent /atom /index.atom
Redirect permanent /comments.rss /comments/index.rss
Redirect permanent /comments.atom /comments/index.atom
Redirect permanent /FeedingTheCloud /index.rss
Redirect permanent /feed /index.rss
Redirect permanent /feed/ /index.rss
Redirect permanent /feeds/posts/default /index.rss
Redirect permanent /rss /index.rss
Redirect permanent /rss/ /index.rss
and to tell them to stop trying to fetch obsolete resources:
Redirect gone /~ff/FeedingTheCloud
Redirect gone /gittip_button.png
Redirect gone /ikiwiki.cgi
I also used these 404s to discover a few old Feedburner URLs that I could redirect to the right place using archive.org:
Redirect permanent /feeds/1572545745827565861/comments/default /posts/watch-all-of-your-logs-using-monkeytail/comments.atom
Redirect permanent /feeds/1582328597404141220/comments/default /posts/news-feeds-rssatom-for-mythtvorg-and/comments.atom
...
Redirect permanent /feeds/8490436852808833136/comments/default /posts/recovering-lost-git-commits/comments.atom
Redirect permanent /feeds/963415010433858516/comments/default /posts/debugging-openwrt-routers-by-shipping/comments.atom
I also put the following robots.txt in the git repo in order to stop a bunch of authentication errors coming from crawlers:
User-agent: *
Disallow: /blog.cgi
Disallow: /ikiwiki.cgi

Future improvements There are a few things I'd like to improve on my current setup. The first one is to remove the ikiwikihosting and gitpush plugins and replace them with a small script which would simply git push to the read-only GitHub mirror. Then I could uninstall the ikiwiki-hosting-common and ikiwiki-hosting-web packages since that's all I use them for. Next, I would like to have proper support for signed git pushes. At the moment, I have the following in /home/blog/source.git/config:
[receive]
    advertisePushOptions = true
    certNonceSeed = "(random string)"
but I'd like to also reject unsigned pushes. While my blog now has a CSP policy which doesn't rely on unsafe-inline for scripts, it does rely on unsafe-inline for stylesheets. I tried to remove this but the actual calls to allow seemed to be located deep within jQuery and so I gave up. Update: now fixed. Finally, I'd like to figure out a good way to deal with articles which don't currently have comments. At the moment, if you try to subscribe to their comment feed, it returns a 404. For example:
[Sun Jun 06 17:43:12.336350 2021] [core:info] [pid 30591:tid 140253834704640] [client 66.249.66.70:57381] AH00128: File does not exist: /var/www/blog/posts/using-iptables-with-network-manager/comments.atom
This is obviously not ideal since many feed readers will refuse to add a feed which is currently not found even though it could become real in the future. If you know of a way to fix this, please let me know.

14 February 2021

François Marier: Creating a Kodi media PC using a Raspberry Pi 4

Here's how I set up a media PC using Kodi (formerly XBMC) and a Raspberry Pi 4.

Hardware The hardware is fairly straightforward, but here's what I ended up getting: You'll probably want to add a remote control to that setup. I used an old Streamzap I had lying around.

Installing the OS on the SD card Plug the SD card into a computer using a USB adapter. Download the imager and use it to install Raspbian on the SD card. Then you can simply plug the SD card into the Pi and boot.
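If you'd rather not use the graphical imager, a rough equivalent is to write the image manually (the image filename and the /dev/sdX device name below are placeholders; double-check the device name, since dd will overwrite it):
dd if=raspbian.img of=/dev/sdX bs=4M status=progress conv=fsync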

System configuration Using sudo raspi-config, I changed the following:
  • Set hostname (System Options)
  • Wait for network at boot (System Options): needed for NFS
  • Disable screen blanking (Display Options)
  • Enable ssh (Interface Options)
  • Configure locale, timezone and keyboard (Localisation Options)
  • Set WiFi country (Localisation Options)
Then I enabled automatic updates:
apt install unattended-upgrades anacron
echo 'Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename},label=Debian";
        "origin=Debian,codename=${distro_codename},label=Debian-Security";
        "origin=Raspbian,codename=${distro_codename},label=Raspbian";
        "origin=Raspberry Pi Foundation,codename=${distro_codename},label=Raspberry Pi Foundation";
};' | sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-raspbian

Headless setup Should you need to do the setup without a monitor, you can enable ssh by inserting the SD card into a computer and then creating an empty file called ssh in the boot partition. Plug it into your router and boot it up. Check the IP that it received by looking at the active DHCP leases in your router's admin panel. Then login:
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no pi@192.168.1.xxx
using the default password of raspberry.
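For reference, creating that empty ssh file boils down to something like this, assuming the SD card's boot partition is mounted at /media/$USER/boot (the mount point is an assumption):
touch /media/$USER/boot/ssh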

Hardening In order to secure the Pi, I followed most of the steps I usually take when setting up a new Linux server. I created a new user account for admin and ssh access:
adduser francois
addgroup sshuser
adduser francois sshuser
adduser francois sudo
and changed the pi user password to a random one:
pwgen -sy 32
sudo passwd pi
before removing its admin permissions:
deluser pi adm
deluser pi sudo
deluser pi dialout
deluser pi cdrom
deluser pi lpadmin
Finally, I enabled the Uncomplicated Firewall by installing its package:
apt install ufw
and only allowing ssh connections. After starting ufw using systemctl start ufw.service, you can check that it's configured as expected using ufw status. It should display the following:
Status: active
To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
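For reference, the ssh-only policy shown above amounts to a single rule (a sketch; not necessarily the exact command I used):
ufw allow 22/tcp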

Installing Kodi Kodi is very straightforward to install since it's now part of the Raspbian repositories:
apt install kodi
To make it start at boot/login, while still being able to exit and use other apps if needed:
cp /etc/xdg/lxsession/LXDE-pi/autostart ~/.config/lxsession/LXDE-pi/
echo "@kodi" >> ~/.config/lxsession/LXDE-pi/autostart

Network File System In order to avoid having all media storage connected directly to the Pi via USB, I set up an NFS share over my local network. First, give static IP allocations to the server and the Pi in your DHCP server, then add it to the /etc/hosts file on your NFS server:
192.168.1.3    pi
Install the NFS server package:
apt install nfs-kernel-server
Setup the directories to share in /etc/exports:
/pub/movies    pi(ro,insecure,all_squash,subtree_check)
/pub/tv_shows  pi(ro,insecure,all_squash,subtree_check)
Open the right ports on your firewall by putting this in /etc/network/iptables.up.rules:
-A INPUT -s 192.168.1.3 -p udp -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp --dport 111 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 111 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 123 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp --dport 600:1124 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 600:1124 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p tcp --dport 2049 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -p udp --dport 2049 -j ACCEPT
Finally, apply all of these changes:
iptables-apply
systemctl restart nfs-kernel-server.service
On the Pi, put the server's static IP in /etc/hosts:
192.168.1.2    fileserver
and this in /etc/fstab:
fileserver:/data/movies  /kodi/movies  nfs  ro,bg,hard,noatime,async,nolock  0  0
fileserver:/data/tv      /kodi/tv      nfs  ro,bg,hard,noatime,async,nolock  0  0
Then create the mount points and mount everything:
mkdir -p /kodi/movies
mkdir /kodi/tv
mount /kodi/movies
mount /kodi/tv

22 November 2020

François Marier: Removing a corrupted data pack in a Restic backup

I recently ran into a corrupted data pack in a Restic backup on my GnuBee. It led to consistent failures during the prune operation:
incomplete pack file (will be removed): b45afb51749c0778de6a54942d62d361acf87b513c02c27fd2d32b730e174f2e
incomplete pack file (will be removed): c71452fa91413b49ea67e228c1afdc8d9343164d3c989ab48f3dd868641db113
incomplete pack file (will be removed): 10bf128be565a5dc4a46fc2fc5c18b12ed2e77899e7043b28ce6604e575d1463
incomplete pack file (will be removed): df282c9e64b225c2664dc6d89d1859af94f35936e87e5941cee99b8fbefd7620
incomplete pack file (will be removed): 1de20e74aac7ac239489e6767ec29822ffe52e1f2d7f61c3ec86e64e31984919
hash does not match id: want 8fac6efe99f2a103b0c9c57293a245f25aeac4146d0e07c2ab540d91f23d3bb5, got 2818331716e8a5dd64a610d1a4f85c970fd8ae92f891d64625beaaa6072e1b84
github.com/restic/restic/internal/repository.Repack
        github.com/restic/restic/internal/repository/repack.go:37
main.pruneRepository
        github.com/restic/restic/cmd/restic/cmd_prune.go:242
main.runPrune
        github.com/restic/restic/cmd/restic/cmd_prune.go:62
main.glob..func19
        github.com/restic/restic/cmd/restic/cmd_prune.go:27
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra/command.go:838
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra/command.go:943
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra/command.go:883
main.main
        github.com/restic/restic/cmd/restic/main.go:86
runtime.main
        runtime/proc.go:204
runtime.goexit
        runtime/asm_amd64.s:1374
Thanks to the excellent support forum, I was able to resolve this issue by dropping a single snapshot. First, I identified the snapshot which contained the offending pack:
$ restic -r sftp:hostname.local: find --pack 8fac6efe99f2a103b0c9c57293a245f25aeac4146d0e07c2ab540d91f23d3bb5
repository b0b0516c opened successfully, password is correct
Found blob 2beffa460d4e8ca4ee6bf56df279d1a858824f5cf6edc41a394499510aa5af9e
 ... in file /home/francois/.local/share/akregator/Archive/http___udd.debian.org_dmd_feed_
     (tree 602b373abedca01f0b007fea17aa5ad2c8f4d11f1786dd06574068bf41e32020)
 ... in snapshot 5535dc9d (2020-06-30 08:34:41)
Then, I could simply drop that snapshot:
$ restic -r sftp:hostname.local: forget 5535dc9d
repository b0b0516c opened successfully, password is correct
[0:00] 100.00%  1 / 1 files deleted
and run the prune command to remove the snapshot, as well as the incomplete packs that were also mentioned in the above output but could never be removed due to the other error:
$ restic -r sftp:hostname.local: prune
repository b0b0516c opened successfully, password is correct
counting files in repo
building new index for repo
[20:11] 100.00%  77439 / 77439 packs
incomplete pack file (will be removed): b45afb51749c0778de6a54942d62d361acf87b513c02c27fd2d32b730e174f2e
incomplete pack file (will be removed): c71452fa91413b49ea67e228c1afdc8d9343164d3c989ab48f3dd868641db113
incomplete pack file (will be removed): 10bf128be565a5dc4a46fc2fc5c18b12ed2e77899e7043b28ce6604e575d1463
incomplete pack file (will be removed): df282c9e64b225c2664dc6d89d1859af94f35936e87e5941cee99b8fbefd7620
incomplete pack file (will be removed): 1de20e74aac7ac239489e6767ec29822ffe52e1f2d7f61c3ec86e64e31984919
repository contains 77434 packs (2384522 blobs) with 367.648 GiB
processed 2384522 blobs: 1165510 duplicate blobs, 47.331 GiB duplicate
load all snapshots
find data that is still in use for 15 snapshots
[1:11] 100.00%  15 / 15 snapshots
found 1006062 of 2384522 data blobs still in use, removing 1378460 blobs
will remove 5 invalid files
will delete 13728 packs and rewrite 15140 packs, this frees 142.285 GiB
[4:58:20] 100.00%  15140 / 15140 packs rewritten
counting files in repo
[18:58] 100.00%  50164 / 50164 packs
finding old index files
saved new indexes as [340cb68f 91ff77ef ee21a086 3e5fa853 084b5d4b 3b8d5b7a d5c385b4 5eff0be3 2cebb212 5e0d9244 29a36849 8251dcee 85db6fa2 29ed23f6 fb306aba 6ee289eb 0a74829d]
remove 190 old index files
[0:00] 100.00%  190 / 190 files deleted
remove 28868 old packs
[1:23] 100.00%  28868 / 28868 files deleted
done

18 October 2020

François Marier: Using a Let's Encrypt TLS certificate with Asterisk 16.2

In order to fix the following error after setting up SIP TLS in Asterisk 16.2:
asterisk[8691]: ERROR[8691]: tcptls.c:966 in __ssl_setup: TLS/SSL error loading cert file. <asterisk.pem>
I created a Let's Encrypt certificate using certbot:
apt install certbot
certbot certonly --standalone -d hostname.example.com
To enable the asterisk user to load the certificate successfully (it doesn't have permission to access the certificates under /etc/letsencrypt/), I copied it to the right directory:
cp /etc/letsencrypt/live/hostname.example.com/privkey.pem /etc/asterisk/asterisk.key
cp /etc/letsencrypt/live/hostname.example.com/fullchain.pem /etc/asterisk/asterisk.cert
chown asterisk:asterisk /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key
chmod go-rwx /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key
Then I set the following variables in /etc/asterisk/sip.conf:
tlscertfile=/etc/asterisk/asterisk.cert
tlsprivatekey=/etc/asterisk/asterisk.key

Automatic renewal The machine on which I run asterisk has a tricky Apache setup:
  • a webserver is running on port 80
  • port 80 is restricted to the local network
This meant that the certbot domain ownership checks would get blocked by the firewall, and I couldn't open that port without exposing the private webserver to the Internet. So I ended up disabling the built-in certbot renewal mechanism:
systemctl disable certbot.timer certbot.service
systemctl stop certbot.timer certbot.service
and then writing my own script in /etc/cron.daily/certbot-francois:
#!/bin/bash
TEMPFILE=$(mktemp)
# Stop Apache and backup firewall.
/bin/systemctl stop apache2.service
/usr/sbin/iptables-save > $TEMPFILE
# Open up port 80 to the whole world.
/usr/sbin/iptables -D INPUT -j LOGDROP
/usr/sbin/iptables -A INPUT -p tcp --dport 80 -j ACCEPT
/usr/sbin/iptables -A INPUT -j LOGDROP
# Renew all certs.
/usr/bin/certbot renew --quiet
# Restore firewall and restart Apache.
/usr/sbin/iptables -D INPUT -p tcp --dport 80 -j ACCEPT
/usr/sbin/iptables-restore < $TEMPFILE
/bin/systemctl start apache2.service
# Copy certificate into asterisk.
cp /etc/letsencrypt/live/hostname.example.com/privkey.pem /etc/asterisk/asterisk.key
cp /etc/letsencrypt/live/hostname.example.com/fullchain.pem /etc/asterisk/asterisk.cert
chown asterisk:asterisk /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key
chmod go-rwx /etc/asterisk/asterisk.cert /etc/asterisk/asterisk.key
/bin/systemctl restart asterisk.service
# Commit changes to etckeeper.
pushd /etc/ > /dev/null
/usr/bin/git add letsencrypt asterisk
DIFFSTAT="$(/usr/bin/git diff --cached --stat)"
if [ -n "$DIFFSTAT" ] ; then
    /usr/bin/git commit --quiet -m "Renewed letsencrypt certs."
    echo "$DIFFSTAT"
fi
popd > /dev/null

14 October 2020

Sven Hoexter: Nice Helper to Sanitize File Names - sanity.pl

One of the most awesome helpers I carry around in my ~/bin since the early '00s is the sanity.pl script written by Andreas Gohr. It just recently came back into use when I started to archive some awesome Corona-enforced live session music with youtube-dl. Update: Francois Marier pointed out that Debian contains the detox package, which has similar functionality.
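For example, detox can be pointed at a single file or at a whole directory recursively (the filenames below are made up):
detox 'Awesome Live Session (Corona Edition) #3.webm'
detox -r ~/Downloads/live-sessions/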

15 November 2017

Daniel Pocock: Linking hackerspaces with OpenDHT and Ring

Francois and Nemen at the FIXME hackerspace (Lausanne) weekly meeting are experimenting with the Ring peer-to-peer softphone: Francois is using Raspberry Pi and PiCam to develop a telepresence network for hackerspaces (the big screens in the middle of the photo). The original version of the telepresence solution is using WebRTC. Ring's OpenDHT potentially offers more privacy and resilience.

8 November 2017

Dirk Eddelbuettel: RQuantLib 0.4.4: Several smaller updates

A shiny new (mostly-but-not-completely maintenance) release of RQuantLib, now at version 0.4.4, arrived on CRAN overnight, and will get to Debian shortly. This is the first release in over a year, and it contains (mostly) a small number of fixes throughout. It also includes the update to the new DateVector and DatetimeVector classes which become the default with the upcoming Rcpp 0.12.14 release (just like this week's RcppQuantuccia release). One piece of new code is due to François Cocquemas who added support for discrete dividends to both European and American options. See below for the complete set of changes reported in the NEWS file. As with release 0.4.3 a little over a year ago, we will not have new Windows binaries from CRAN as I apparently have insufficient powers of persuasion to get CRAN to update their QuantLib libraries. So we need a volunteer. If someone could please build a binary package for Windows from the 0.4.4 sources, I would be happy to once again host it on the GHRR drat repo. Please contact me directly if you can help. Changes are listed below:

Changes in RQuantLib version 0.4.4 (2017-11-07)
  • Changes in RQuantLib code:
    • Equity options can now be analyzed via discrete dividends through two vectors of dividend dates and values (Francois Cocquemas in #73 fixing #72)
    • Some package and dependency information was updated in files DESCRIPTION and NAMESPACE.
    • The new Date(time)Vector classes introduced with Rcpp 0.12.8 are now used when available.
    • Minor corrections were applied to BKTree, to vanilla options for the case of intraday time stamps, to the SabrSwaption documentation, and to bond utilities for the most recent QuantLib release.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list off the R-Forge page. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 July 2017

Francois Marier: Toggling Between Pulseaudio Outputs when Docking a Laptop

In addition to selecting the right monitor after docking my ThinkPad, I wanted to set the correct sound output since I have headphones connected to my Ultra Dock. This can be done fairly easily using Pulseaudio.

Switching to a different pulseaudio output To find the device name and the output name I need to provide to pacmd, I ran pacmd list-sinks:
2 sink(s) available.
...
  * index: 1
    name: <alsa_output.pci-0000_00_1b.0.analog-stereo>
    driver: <module-alsa-card.c>
...
    ports:
        analog-output: Analog Output (priority 9900, latency offset 0 usec, available: unknown)
            properties:
        analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown)
            properties:
                device.icon_name = "audio-speakers"
From there, I extracted the soundcard name (alsa_output.pci-0000_00_1b.0.analog-stereo) and the names of the two output ports (analog-output and analog-output-speaker). To switch between the headphones and the speakers, I can therefore run the following commands:
pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker

Listening for headphone events Then I looked for the ACPI event triggered when my headphones are detected by the laptop after docking. After looking at the output of acpi_listen, I found jack/headphone HEADPHONE plug. Combining this with the above pulseaudio names, I put the following in /etc/acpi/events/thinkpad-dock-headphones:
event=jack/headphone HEADPHONE plug
action=su francois -c "pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output"
to automatically switch to the headphones when I dock my laptop.

Finding out whether or not the laptop is docked While it is possible to hook into the docking and undocking ACPI events and run scripts, there doesn't seem to be an easy way from a shell script to tell whether or not the laptop is docked. In the end, I settled on detecting the presence of USB devices. I ran lsusb twice (once docked and once undocked) and then compared the output:
lsusb  > docked 
lsusb  > undocked 
colordiff -u docked undocked 
This gave me a number of differences since I have a bunch of peripherals attached to the dock:
--- docked  2017-07-07 19:10:51.875405241 -0700
+++ undocked    2017-07-07 19:11:00.511336071 -0700
@@ -1,15 +1,6 @@
 Bus 001 Device 002: ID 8087:8000 Intel Corp. 
 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
-Bus 003 Device 081: ID 0424:5534 Standard Microsystems Corp. Hub
-Bus 003 Device 080: ID 17ef:1010 Lenovo 
 Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
-Bus 002 Device 041: ID xxxx:xxxx ...
-Bus 002 Device 040: ID xxxx:xxxx ...
-Bus 002 Device 039: ID xxxx:xxxx ...
-Bus 002 Device 038: ID 17ef:100f Lenovo 
-Bus 002 Device 037: ID xxxx:xxxx ...
-Bus 002 Device 042: ID 0424:2134 Standard Microsystems Corp. Hub
-Bus 002 Device 036: ID 17ef:1010 Lenovo 
 Bus 002 Device 002: ID xxxx:xxxx ...
 Bus 002 Device 004: ID xxxx:xxxx ...
 Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
I picked 17ef:1010 as it appeared to be some internal bus on the Ultra Dock (none of my USB devices were connected to Bus 003) and then ended up with the following port toggling script:
#!/bin/bash
if /usr/bin/lsusb | grep 17ef:1010 > /dev/null ; then
    # docked
    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
else
    # undocked
    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker
fi

11 June 2017

Francois Marier: Mysterious 400 Bad Request in Django debug mode

While upgrading Libravatar to a more recent version of Django, I ran into a mysterious 400 error. In debug mode, my site was working fine, but with DEBUG = False, I would only see a page containing this error:
Bad Request (400)
with no extra details in the web server logs.

Turning on extra error logging To see the full error message, I configured logging to a file by adding this to settings.py:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': '/tmp/debug.log',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}
Then I got the following error message:
Invalid HTTP_HOST header: 'www.example.com'. You may need to add u'www.example.com' to ALLOWED_HOSTS.

Temporary hack Sure enough, putting this in settings.py would make it work outside of debug mode:
ALLOWED_HOSTS = ['*']
which means that there's a mismatch between the HTTP_HOST from Apache and the one that Django expects.

Root cause The underlying problem was that the Libravatar config file was missing the square brackets around the ALLOWED_HOSTS setting. I had this:
ALLOWED_HOSTS = 'www.example.com'
instead of:
ALLOWED_HOSTS = ['www.example.com']

16 May 2017

Francois Marier: Recovering from an unbootable Ubuntu encrypted LVM root partition

A laptop that was installed using the default Ubuntu 16.04 (xenial) full-disk encryption option stopped booting after receiving a kernel update somewhere on the way to Ubuntu 17.04 (zesty). After showing the boot screen for about 30 seconds, a busybox shell pops up:
BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)
Enter 'help' for list of built-in commands.
(initramfs)
Typing exit will display more information about the failure before bringing us back to the same busybox shell:
Gave up waiting for root device. Common problems:
  - Boot args (cat /proc/cmdline)
    - Check rootdelay= (did the system wait long enough?)
    - Check root= (did the system wait for the right device?)
  - Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/ubuntu--vg-root does not exist. Dropping to a shell! 
BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)   
Enter 'help' for list of built-in commands.  
(initramfs)
which now complains that the /dev/mapper/ubuntu--vg-root root partition (which uses LUKS and LVM) cannot be found. There is some comprehensive advice out there but it didn't quite work for me. This is how I ended up resolving the problem.

Boot using a USB installation disk First, create bootable USB disk using the latest Ubuntu installer:
  1. Download a desktop image.
  2. Copy the ISO directly onto the USB stick (overwriting it in the process):
     dd if=ubuntu.iso of=/dev/sdc1
    
and boot the system using that USB stick (hold the option key during boot on Apple hardware).

Mount the encrypted partition Assuming a drive which is partitioned this way:
  • /dev/sda1: EFI partition
  • /dev/sda2: unencrypted boot partition
  • /dev/sda3: encrypted LVM partition
Open a terminal and mount the required partitions:
cryptsetup luksOpen /dev/sda3 sda3_crypt
vgchange -ay
mount /dev/mapper/ubuntu--vg-root /mnt
mount /dev/sda2 /mnt/boot
mount -t proc proc /mnt/proc
mount -o bind /dev /mnt/dev
Note:
  • When running cryptsetup luksOpen, you must use the same name as the one that is in /etc/crypttab on the root partition (sda3_crypt in this example).
  • All of these partitions must be present (including /proc and /dev) for the initramfs scripts to do all of their work. If you see errors or warnings, you must resolve them.

Regenerate the initramfs on the boot partition Then "enter" the root partition using:
chroot /mnt
and make sure that the lvm2 package is installed:
apt install lvm2
before regenerating the initramfs for all of the installed kernels:
update-initramfs -c -k all

13 April 2017

Francois Marier: Automatically renewing Let's Encrypt TLS certificates on Debian using Certbot

I use Let's Encrypt TLS certificates on my Debian servers along with the Certbot tool. Since I use the "temporary webserver" method of proving domain ownership via the ACME protocol, I cannot use the cert renewal cronjob built into Certbot. Instead, this is the script I put in /etc/cron.daily/certbot-renew:
#!/bin/bash
/usr/bin/certbot renew --quiet --pre-hook "/bin/systemctl stop apache2.service" --post-hook "/bin/systemctl start apache2.service"
pushd /etc/ > /dev/null
/usr/bin/git add letsencrypt
DIFFSTAT="$(/usr/bin/git diff --cached --stat)"
if [ -n "$DIFFSTAT" ] ; then
    /usr/bin/git commit --quiet -m "Renewed letsencrypt certs"
    echo "$DIFFSTAT"
fi
popd > /dev/null
It temporarily disables my Apache webserver while it renews the certificates and then only outputs something to STDOUT (since my cronjob will email me any output) if certs have been renewed. Since I'm using etckeeper to keep track of config changes on my servers, my renewal script also commits to the repository if any certs have changed.

External Monitoring In order to catch mistakes or oversights, I use ssl-cert-check to monitor my domains once a day:
ssl-cert-check -s fmarier.org -p 443 -q -a -e francois@fmarier.org
I also signed up with Cert Spotter which watches the Certificate Transparency log and notifies me of any newly-issued certificates for my domains. In other words, I get notified:
  • if my cronjob fails and a cert is about to expire, or
  • as soon as a new cert is issued.
The whole thing seems to work well, but if there's anything I could be doing better, feel free to leave a comment!

1 April 2017

Francois Marier: Manually expanding a RAID1 array on Ubuntu

Here are the notes I took while manually expanding a non-LVM encrypted RAID1 array on an Ubuntu machine. My original setup consisted of a 1 TB drive along with a 2 TB drive, which meant that the RAID1 array was 1 TB in size and the second drive had 1 TB of unused capacity. This is how I replaced the old 1 TB drive with a new 3 TB drive and expanded the RAID1 array to 2 TB (leaving 1 TB unused on the new 3 TB drive).

Partition the new drive In order to partition the new 3 TB drive, I started by creating a temporary partition on the old 2 TB drive (/dev/sdc) to use up all of the capacity on that drive:
$ parted /dev/sdc
unit s
print
mkpart
print
Then I initialized the partition table and created the EFI partition on the new drive (/dev/sdd):
$ parted /dev/sdd
unit s
mktable gpt
mkpart
Since I want to have the RAID1 array be as large as the smaller of the two drives, I made sure that the second partition (/home) on the new 3 TB drive had:
  • the same start position as the second partition on the old drive
  • the end position of the third partition (the temporary one I just created) on the old drive
I created the partition and flagged it as a RAID one:
mkpart
toggle 2 raid
and then deleted the temporary partition on the old 2 TB drive:
$ parted /dev/sdc
print
rm 3
print

Create a temporary RAID1 array on the new drive With the new drive properly partitioned, I created a new RAID array for it:
mdadm /dev/md10 --create --level=1 --raid-devices=2 /dev/sdd1 missing
and added it to /etc/mdadm/mdadm.conf:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
which required manual editing of that file to remove duplicate entries.

Create the encrypted partition With the new RAID device in place, I created the encrypted LUKS partition:
cryptsetup -h sha256 -c aes-xts-plain64 -s 512 luksFormat /dev/md10
cryptsetup luksOpen /dev/md10 chome2
I took the UUID for the temporary RAID partition:
blkid /dev/md10
and put it in /etc/crypttab as chome2. Then, I formatted the new LUKS partition and mounted it:
mkfs.ext4 -m 0 /dev/mapper/chome2
mkdir /home2
mount /dev/mapper/chome2 /home2
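The corresponding /etc/crypttab entry looks roughly like this (the UUID is a placeholder for the blkid output above):
chome2 UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none luks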

Copy the data from the old drive With the home partitions of both drives mounted, I copied the files over to the new drive:
eatmydata nice ionice -c3 rsync -axHAX --progress /home/* /home2/
making use of wrappers that preserve system responsiveness during I/O-intensive operations.

Switch over to the new drive After the copy, I switched over to the new drive in a step-by-step way:
  1. Changed the UUID of chome in /etc/crypttab.
  2. Changed the UUID and name of /dev/md1 in /etc/mdadm/mdadm.conf.
  3. Rebooted with both drives.
  4. Checked that the new drive was the one used in the encrypted /home mount using: df -h.

Add the old drive to the new RAID array With all of this working, it was time to clear the mdadm superblock from the old drive:
mdadm --zero-superblock /dev/sdc2
and then change the second partition of the old drive to make it the same size as the one on the new drive:
$ parted /dev/sdc
rm 2
mkpart
toggle 2 raid
print
before adding it to the new array:
mdadm /dev/md1 -a /dev/sdc2
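Although not part of the original steps, it's worth watching the array rebuild onto the old drive before moving on:
watch cat /proc/mdstat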

Rename the new array To change the name of the new RAID array back to what it was on the old drive, I first had to stop both the old and the new RAID arrays:
umount /home
cryptsetup luksClose chome
mdadm --stop /dev/md10
mdadm --stop /dev/md1
before running this command:
mdadm --assemble /dev/md1 --name=mymachinename:1 --update=name /dev/sdd2
and updating the name in /etc/mdadm/mdadm.conf. The last step was to regenerate the initramfs:
update-initramfs -u
before rebooting into something that looks exactly like the original RAID1 array but with twice the size.

5 February 2017

Francois Marier: IPv6 and OpenVPN on Linode Debian/Ubuntu VPS

Here is how I managed to extend my OpenVPN setup on my Linode VPS to include IPv6 traffic. This ensures that clients can route all of their traffic through the VPN and avoid leaking IPv6 traffic, for example. It also enables clients on IPv4-only networks to receive a routable IPv6 address and connect to IPv6-only servers (in effect, running your own IPv6 tunnel broker).

Request an additional IPv6 block The first thing you need to do is get a new IPv6 address block (or "pool" as Linode calls it) from which you can allocate a single address to each VPN client that connects to the server. If you are using a Linode VPS, there are instructions on how to request a new IPv6 pool. Note that you need to get an address block between /64 and /112. A /116 like Linode offers won't work in OpenVPN. Thankfully, Linode is happy to allocate you an extra /64 for free.

Setup the new IPv6 address If your server only has a single IPv4 address and a single IPv6 address, then a simple DHCP-backed network configuration will work fine. To add the second IPv6 block on the other hand, I had to change my network configuration (/etc/network/interfaces) to this:
auto lo
iface lo inet loopback
allow-hotplug eth0
iface eth0 inet dhcp
    pre-up iptables-restore /etc/network/iptables.up.rules
iface eth0 inet6 static
    address 2600:3c01::xxxx:xxxx:xxxx:939f/64
    gateway fe80::1
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules
iface tun0 inet6 static
    address 2600:3c01:xxxx:xxxx::/64
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules
where 2600:3c01::xxxx:xxxx:xxxx:939f/64 (bound to eth0) is your main IPv6 address and 2600:3c01:xxxx:xxxx::/64 (bound to tun0) is the new block you requested. Once you've set up the new IPv6 block, test it from another IPv6-enabled host using:
ping6 2600:3c01:xxxx:xxxx::1

OpenVPN configuration The only thing I had to change in my OpenVPN configuration (/etc/openvpn/server.conf) was to replace:
proto udp
to:
proto udp6
in order to make the VPN server available over both IPv4 and IPv6, and to add the following lines:
server-ipv6 2600:3c01:xxxx:xxxx::/64
push "route-ipv6 2000::/3"
to bind to the right V6 address and to tell clients to tunnel all V6 Internet traffic through the VPN. In addition to updating the OpenVPN config, you will need to add the following line to /etc/sysctl.d/openvpn.conf:
net.ipv6.conf.all.forwarding=1
and the following to your firewall (e.g. /etc/network/ip6tables.up.rules):
# openvpn
-A INPUT -p udp --dport 1194 -j ACCEPT
-A FORWARD -m state --state NEW -i tun0 -o eth0 -s 2600:3c01:xxxx:xxxx::/64 -j ACCEPT
-A FORWARD -m state --state NEW -i eth0 -o tun0 -d 2600:3c01:xxxx:xxxx::/64 -j ACCEPT
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
in order to ensure that IPv6 packets are forwarded between the tun0 and eth0 network interfaces on the VPN server. With all of this done, apply the settings by running:
sysctl -p /etc/sysctl.d/openvpn.conf
ip6tables-apply
systemctl restart openvpn.service

Testing the connection Now connect to the VPN using your desktop client and check that the default IPv6 route is set correctly using ip -6 route. Then you can ping the server's new IP address:
ping6 2600:3c01:xxxx:xxxx::1
and from the server, you can ping the client's IP (which you can see in the network settings):
ping6 2600:3c01:xxxx:xxxx::1002
Once both ends of the tunnel can talk to each other, you can try pinging an IPv6-only server from your client:
ping6 ipv6.google.com
and then pinging your client from an IPv6-enabled host somewhere:
ping6 2600:3c01:xxxx:xxxx::1002
If that works, other online tests should also work.
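A quick way to confirm from the client that outgoing traffic really uses the new address is to ask an external service which address it sees (the service below is just one example):
curl -6 https://ipv6.icanhazip.com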

30 January 2017

Francois Marier: Creating a home music server using mpd

I recently set up a music server on my home server using the Music Player Daemon, a cross-platform free software project which has been around for a long time.

Basic setup Start by installing the server and the client package:
apt install mpd mpc
then open /etc/mpd.conf and set these:
music_directory    "/path/to/music/"
bind_to_address    "192.168.1.2"
bind_to_address    "/run/mpd/socket"
zeroconf_enabled   "yes"
password           "Password1"
before replacing the alsa output:
audio_output {
    type    "alsa"
    name    "My ALSA Device"
}
with a pulseaudio one:
audio_output {
    type    "pulse"
    name    "Pulseaudio Output"
}
In order for the automatic detection (zeroconf) of your music server to work, you need to prevent systemd from creating the network socket:
systemctl stop mpd.service
systemctl stop mpd.socket
systemctl disable mpd.socket
otherwise you'll see this in /var/log/mpd/mpd.log:
zeroconf: No global port, disabling zeroconf
Once all of that is in place, start the mpd daemon:
systemctl start mpd.service
and create an index of your music files:
MPD_HOST=Password1@/run/mpd/socket mpc update
while watching the logs to notice any files that the mpd user doesn't have access to:
tail -f /var/log/mpd/mpd.log
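Once the database has been built, standard mpc commands are enough to queue and play music; these are generic mpc usage, not specific to this setup:
export MPD_HOST=Password1@/run/mpd/socket
mpc clear                 # empty the current playlist
mpc listall | mpc add     # queue the entire library
mpc play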

Enhancements I also added the following in /etc/logcheck/ignore.server.d/local-mpd to silence unnecessary log messages in logcheck emails:
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Started Music Player Daemon.$
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Stopped Music Player Daemon.$
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Stopping Music Player Daemon...$
and created a cronjob in /etc/cron.d/mpd-francois to update the database daily and stop the music automatically in the evening:
# Refresh DB once a day
5 1 * * *  mpd  MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet update
# Think of the neighbours
0 22 * * 0-4  mpd  MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet stop
0 23 * * 5-6  mpd  MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet stop

Clients To let anybody on the local network connect, I opened port 6600 on the firewall (/etc/network/iptables.up.rules since I'm using Debian's iptables-apply):
-A INPUT -s 192.168.1.0/24 -p tcp --dport 6600 -j ACCEPT
Then I looked at the long list of clients on the mpd wiki.

Desktop The official website suggests two clients which are available in Debian and Ubuntu: gmpc and ario. Both of them work well, but haven't had a release since 2011, even though there is some activity in 2013 and 2015 in their respective source control repositories. Ario has a simpler user interface but gmpc has cover art download working out of the box, which is why I might stick with it. In both cases, it is possible to configure a polipo proxy so that any external resources are fetched via Tor.

Android On Android, I got a couple of clients to work and picked M.A.L.P. since it includes a nice widget for the homescreen.

iOS On iOS, I found a few promising alternatives since MPoD and MPaD don't appear to be available on the App Store anymore.

26 December 2016

Francois Marier: Using iptables with NetworkManager

I used to rely on ifupdown to bring up my iptables firewall automatically using a config like this in /etc/network/interfaces:
allow-hotplug eth0
iface eth0 inet dhcp
    pre-up /sbin/iptables-restore /etc/network/iptables.up.rules
    pre-up /sbin/ip6tables-restore /etc/network/ip6tables.up.rules
allow-hotplug wlan0
iface wlan0 inet dhcp
    pre-up /sbin/iptables-restore /etc/network/iptables.up.rules
    pre-up /sbin/ip6tables-restore /etc/network/ip6tables.up.rules
but that doesn't seem to work very well in the brave new NetworkManager world. What does work reliably is a "pre-up" NetworkManager script, something that gets run before a network interface is brought up. However, despite what the documentation says, a dispatcher script placed directly in /etc/NetworkManager/dispatcher.d/ won't work on my Debian and Ubuntu machines. Instead, I had to create a new iptables script in /etc/NetworkManager/dispatcher.d/pre-up.d/:
#!/bin/sh
LOGFILE=/var/log/iptables.log
if [ "$1" = lo ]; then
    echo "$0: ignoring $1 for \`$2'" >> $LOGFILE
    exit 0
fi
case "$2" in
    pre-up)
        echo "$0: restoring iptables rules for $1" >> $LOGFILE
        /sbin/iptables-restore /etc/network/iptables.up.rules >> $LOGFILE 2>&1
        /sbin/ip6tables-restore /etc/network/ip6tables.up.rules >> $LOGFILE 2>&1
        ;;
    *)
        echo "$0: nothing to do with $1 for \`$2'" >> $LOGFILE
        ;;
esac
exit 0
and then make that script executable:
chmod a+x /etc/NetworkManager/dispatcher.d/pre-up.d/iptables
With this in place, I can put my iptables rules in the usual place (/etc/network/iptables.up.rules and /etc/network/ip6tables.up.rules) and use the handy iptables-apply and ip6tables-apply commands to test any changes to my firewall rules.
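To confirm that the hook actually fires, you can bounce a connection and then check both the log file and the loaded rules (the connection name below is just an example):
nmcli connection down "Wired connection 1" && nmcli connection up "Wired connection 1"
tail /var/log/iptables.log
iptables -L -n | head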

24 October 2016

Francois Marier: Tweaking Referrers For Privacy in Firefox

The Referer header has been a part of the web for a long time. Websites rely on it for a few different purposes (e.g. analytics, ads, CSRF protection) but it can be quite problematic from a privacy perspective. Thankfully, there are now tools in Firefox to help users and developers mitigate some of these problems.

Description In a nutshell, the browser adds a Referer header to all outgoing HTTP requests, revealing to the server on the other end the URL of the page you were on when you placed the request. For example, it tells the server where you were when you followed a link to that site, or what page you were on when you requested an image or a script. There are, however, a few limitations to this simplified explanation. First of all, by default, browsers won't send a referrer if you place a request from an HTTPS page to an HTTP page. This would reveal potentially confidential information (such as the URL path and query string which could contain session tokens or other secret identifiers) from a secure page over an insecure HTTP channel. Firefox will, however, include a Referer header in HTTPS to HTTPS transitions unless network.http.sendSecureXSiteReferrer (removed in Firefox 52) is set to false in about:config. Secondly, using the new Referrer Policy specification web developers can override the default behaviour for their pages, including on a per-element basis. This can be used either to increase or to reduce the amount of information present in the referrer.

Legitimate Uses Because the Referer header has been around for so long, a number of techniques rely on it. Armed with the Referer information, analytics tools can figure out:
  • where website traffic comes from, and
  • how users are navigating the site.
Another place where the Referer is useful is as a mitigation against cross-site request forgeries. In that case, a website receiving a form submission can reject that form submission if the request originated from a different website. It's worth pointing out that this CSRF mitigation might be better implemented via a separate header that could be restricted to particularly dangerous requests (i.e. POST and DELETE requests) and only include the information required for that security check (i.e. the origin).

Problems with the Referrer Unfortunately, this header also creates significant privacy and security concerns. The most obvious one is that it leaks part of your browsing history to sites you visit as well as all of the resources they pull in (e.g. ads and third-party scripts). It can be quite complicated to fix these leaks in a cross-browser way. These leaks can also expose private personally-identifiable information when it is included in the query string. One of the most high-profile examples was the accidental leakage of user searches by healthcare.gov.

Solutions for Firefox Users While web developers can use the new mechanisms exposed through the Referrer Policy, Firefox users can also take steps to limit the amount of information they send to websites, advertisers and trackers. In addition to enabling Firefox's built-in tracking protection by setting privacy.trackingprotection.enabled to true in about:config, which will prevent all network connections to known trackers, users can control when the Referer header is sent by setting network.http.sendRefererHeader to:
  • 0 to never send the header
  • 1 to send the header only when clicking on links and similar elements
  • 2 (default) to send the header on all requests (e.g. images, links, etc.)
It's also possible to put a limit on the maximum amount of information that the header will contain by setting the network.http.referer.trimmingPolicy to:
  • 0 (default) to send the full URL
  • 1 to send the URL without its query string
  • 2 to only send the scheme, host and port
or using the network.http.referer.XOriginTrimmingPolicy option (added in Firefox 52) to only restrict the contents of referrers attached to cross-origin requests. Site owners can opt to share less information with other sites, but they can't share any more than what the user trimming policies allow. Another approach is to disable the Referer when doing cross-origin requests (from one site to another). The network.http.referer.XOriginPolicy preference can be set to:
  • 0 (default) to send the referrer in all cases
  • 1 to send a referrer only when the base domains are the same
  • 2 to send a referrer only when the full hostnames match

Breakage If you try to remove all referrers (i.e. network.http.sendRefererHeader = 0), you will most likely run into problems on a number of sites. Some of these can be worked around successfully by setting network.http.referer.spoofSource to true, an advanced setting which always sends the destination URL as the referrer, thereby not leaking anything about the original page. Unfortunately, others are examples of the kind of breakage that can only be fixed through a whitelist (an approach supported by the smart referer add-on) or by temporarily using a different browser profile.

My Recommended Settings As with my cookie recommendations, I recommend strengthening your referrer settings but not disabling (or spoofing) referrers entirely. While spoofing does solve many of the breakage problems mentioned above, it also effectively disables the anti-CSRF protections that some sites may rely on and that have tangible user benefits. A better approach is to limit the amount of information that leaks through cross-origin requests. If you are willing to live with some amount of breakage, you can simply restrict referrers to the same site by setting:
network.http.referer.XOriginPolicy = 2
or to sites which belong to the same organization (i.e. same ETLD/public suffix) using:
network.http.referer.XOriginPolicy = 1
This prevents leaks to third parties while giving websites all of the information that they can already see in their own server logs. On the other hand, if you prefer a weaker but more compatible solution, you can trim cross-origin referrers down to just the scheme, hostname and port:
network.http.referer.XOriginTrimmingPolicy = 2
I have not yet found user-visible breakage using this last configuration. Let me know if you find any!
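If you'd rather keep these preferences in a file than flip them in about:config, the same settings can go into a user.js file in your Firefox profile directory, for example:
// user.js in the Firefox profile directory
user_pref("privacy.trackingprotection.enabled", true);
user_pref("network.http.referer.XOriginTrimmingPolicy", 2);
// stricter alternative, with some breakage:
// user_pref("network.http.referer.XOriginPolicy", 2);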

25 August 2016

Francois Marier: Debugging gnome-session problems on Ubuntu 14.04

After upgrading an Ubuntu 14.04 ("trusty") machine to the latest 16.04 Hardware Enablement packages, I ran into login problems. I could log into my user account and see the GNOME desktop for a split second before getting thrown back into the LightDM login manager. The solution I found was to install this missing package:
apt install libwayland-egl1-mesa-lts-xenial

Looking for clues in the logs The first place I looked was the log file for the login manager (/var/log/lightdm/lightdm.log) where I found the following:
DEBUG: Session pid=12743: Running command /usr/sbin/lightdm-session gnome-session --session=gnome
DEBUG: Creating shared data directory /var/lib/lightdm-data/username
DEBUG: Session pid=12743: Logging to .xsession-errors
This told me that the login manager runs the gnome-session command and gets it to create a session of type gnome. That command line is defined in /usr/share/xsessions/gnome.desktop (look for Exec=):
[Desktop Entry]
Name=GNOME
Comment=This session logs you into GNOME
Exec=gnome-session --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME
I couldn't see anything unexpected there, but it did point to another log file (~/.xsession-errors) which contained the following:
Script for ibus started at run_im.
Script for auto started at run_im.
Script for default started at run_im.
init: Le processus gnome-session (GNOME) main (11946) s'est achevé avec l'état 1
init: Déconnecté du bus D-Bus notifié
init: Le processus logrotate main (11831) a été tué par le signal TERM
init: Le processus update-notifier-crash (/var/crash/_usr_bin_unattended-upgrade.0.crash) main (11908) a été tué par le signal TERM
Searching for French error messages isn't as useful as searching for English ones, so I took a look at /var/log/syslog and found this:
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' respawning too quickly
gnome-session[4134]: CRITICAL: We failed, but the fail whale is dead. Sorry....
It looks like gnome-session is executing gnome-shell and that this last command is terminating prematurely. This would explain why gnome-session exits immediately after login.

Increasing the amount of logging In order to get more verbose debugging information out of gnome-session, I created a new type of session (GNOME debug) by copying the regular GNOME session:
cp /usr/share/xsessions/gnome.desktop /usr/share/xsessions/gnome-debug.desktop
and then adding --debug to the command line inside gnome-debug.desktop:
[Desktop Entry]
Name=GNOME debug
Comment=This session logs you into GNOME debug
Exec=gnome-session --debug --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME debug
After restarting LightDM (service lightdm restart), I clicked the GNOME logo next to the password field and chose GNOME debug before trying to log in again. This time, I had a lot more information in ~/.xsession-errors:
gnome-session[12878]: DEBUG(+): GsmAutostartApp: starting gnome-shell.desktop: command=/usr/bin/gnome-shell startup-id=10d41f1f5c81914ec61471971137183000000128780000
gnome-session[12878]: DEBUG(+): GsmAutostartApp: started pid:13121
...
/usr/bin/gnome-shell: error while loading shared libraries: libwayland-egl.so.1: cannot open shared object file: No such file or directory
gnome-session[12878]: DEBUG(+): GsmAutostartApp: (pid:13121) done (status:127)
gnome-session[12878]: WARNING: App 'gnome-shell.desktop' exited with code 127
which suggests that gnome-shell won't start because of a missing library.
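Another quick way to confirm which shared libraries gnome-shell is missing (not something I needed here, but handy in general) is to run ldd on the binary:
ldd /usr/bin/gnome-shell | grep "not found"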

Finding the missing library To find the missing library, I used the apt-file command:
apt-file update
apt-file search libwayland-egl.so.1
and found that this file is provided by the following packages:
  • libhybris
  • libwayland-egl1-mesa
  • libwayland-egl1-mesa-dbg
  • libwayland-egl1-mesa-lts-utopic
  • libwayland-egl1-mesa-lts-vivid
  • libwayland-egl1-mesa-lts-wily
  • libwayland-egl1-mesa-lts-xenial
Since I installed the LTS Enablement stack, the package I needed to install to fix this was libwayland-egl1-mesa-lts-xenial. I filed a bug for this on Launchpad.
