Search Results: "Tim Retout"

1 January 2024

Tim Retout: Prevent DOM-XSS with Trusted Types - a smarter DevSecOps approach

It can be incredibly easy for a frontend developer to accidentally write a client-side cross-site-scripting (DOM-XSS) security issue, and yet these are hard for security teams to detect. Vulnerability scanners are slow, and suffer from false positives. Can smarter collaboration between development, operations and security teams provide a way to eliminate these problems altogether? Google claims that Trusted Types has all but eliminated DOM-XSS exploits on those of their sites which have implemented it. Let's find out how this can work!

DOM-XSS vulnerabilities are easy to write, but hard for security teams to catch

It is very easy to accidentally introduce a client-side XSS problem. As an example of what not to do, suppose you are setting an element's text to the current URL, on the client side:
// Don't do this
para.innerHTML = location.href;
Unfortunately, an attacker can now manipulate the URL (and e.g. send this link in a phishing email), and any HTML tags they add will be interpreted by the user's browser. This could potentially be used by the attacker to send private data to a different server.

Detecting DOM-XSS using vulnerability scanning tools is challenging: typically this requires crawling each page of the website and attempting to detect problems such as the one above, but there is a significant risk of false positives, especially as the complexity of the logic increases. There are already ways to avoid these exploits: developers should validate untrusted input before making use of it. There are libraries such as DOMPurify which can help with sanitization.1 However, if you are part of a security team with responsibility for preventing these issues, it can be complex to understand whether you are at risk. Different developer teams may be using different techniques and tools. It may be impossible for you to work closely with every developer, so how can you know that the frontend team have used these libraries correctly?
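For contrast, here is a minimal sketch of two safer approaches; it assumes the DOMPurify library has been loaded on the page, and untrustedHtml is a placeholder variable of my own rather than anything from the post:
// Safest: treat the URL as plain text rather than as markup
para.textContent = location.href;
// If untrusted HTML genuinely must be rendered, sanitize it first
// (assumes DOMPurify is available on the page)
para.innerHTML = DOMPurify.sanitize(untrustedHtml);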

Trusted Types closes the DevSecOps feedback loop for DOM-XSS, by allowing Ops and Security to verify good Developer practices

Trusted Types enforces sanitization in the browser2, by requiring the web developer to assign a particular kind of JavaScript object rather than a native string to .innerHTML and other dangerous properties. Provided these special types are created in an appropriate way, they can be trusted not to expose XSS problems. This approach will work with whichever tools the frontend developers have chosen to use, and detection of issues can be rolled out by infrastructure engineers without requiring frontend code changes.
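As a rough sketch of what this looks like in application code - the policy name, and the use of DOMPurify inside it, are my own illustrative choices, but trustedTypes.createPolicy is the standard browser API:
// Create a policy whose output is a TrustedHTML object rather than a string
// (assumes DOMPurify is available for the sanitization step)
const htmlPolicy = trustedTypes.createPolicy('app-sanitizer', {
  createHTML: (input) => DOMPurify.sanitize(input),
});
// Once the browser enforces Trusted Types, assigning a plain string to
// .innerHTML throws; a value produced by the policy is accepted
para.innerHTML = htmlPolicy.createHTML(location.href);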

Content Security Policy allows enforcement of security policies in the browser itself

Because enforcing this safer approach in the browser for all websites would break backwards compatibility, each website must opt in through Content Security Policy headers. Content Security Policy (CSP) is a mechanism that allows web pages to restrict what actions a browser should execute on their page, and a way for the site to receive reports if the policy is violated.

Figure 1: Content-Security-Policy browser communication. The diagram shows a browser communicating with a web server; Content-Security-Policy headers are returned by the URL “/”, and the browser reports any security violations to “/csp”.

This is revolutionary, because it allows servers to receive feedback in real time on errors that may be appearing in the browser's console.
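To make the round trip concrete, here is a minimal sketch of a server that sends the policy and collects the reports; Express, the /csp path and the exact policy string are assumptions for illustration rather than details from the post:
const express = require('express');
const app = express();
// Ask the browser to enforce Trusted Types and to report violations to /csp
app.use((req, res, next) => {
  res.setHeader('Content-Security-Policy',
    "require-trusted-types-for 'script'; report-uri /csp");
  next();
});
// Receive violation reports POSTed by the browser
app.post('/csp', express.json({ type: 'application/csp-report' }), (req, res) => {
  console.log('CSP violation:', JSON.stringify(req.body));
  res.sendStatus(204);
});
app.listen(3000);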

Trusted Types can be rolled out incrementally, with continuous feedback

Web.dev's article on Trusted Types explains how to safely roll out the feature using the features of CSP itself:
  • Deploy a CSP collector if you haven't already
  • Switch on CSP reports without enforcement (via Content-Security-Policy-Report-Only headers)
  • Iteratively review and fix the violations
  • Switch to enforcing mode when the rate of reports is low enough
Static analysis in a continuous integration pipeline is also sensible: you want to prevent regressions shipping in new releases before they trigger a flood of CSP reports. This will also give you a chance of finding any low-traffic vulnerable pages.
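As a very naive sketch of the kind of check a CI job could run (this is my own toy example, not a tool mentioned in the article; a real pipeline would more likely use a purpose-built linter), failing the build whenever a risky DOM sink is assigned directly:
// scan-sinks.js: walk a source tree and flag direct assignments to risky sinks
const fs = require('fs');
const path = require('path');
const risky = /\.(innerHTML|outerHTML)\s*=|document\.write\s*\(/;
let failures = 0;
function scan(dir) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) scan(full);
    else if (entry.name.endsWith('.js')) {
      fs.readFileSync(full, 'utf8').split('\n').forEach((line, i) => {
        if (risky.test(line)) {
          console.log(`${full}:${i + 1}: ${line.trim()}`);
          failures++;
        }
      });
    }
  }
}
scan(process.argv[2] || 'src');
process.exit(failures ? 1 : 0);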

Smart security teams will use techniques like Trusted Types to eliminate entire classes of bugs at a time

Rather than playing whack-a-mole with unreliable vulnerability scanning or bug bounties, techniques such as Trusted Types are truly in the spirit of Secure by Design: build high quality in from the start of the engineering process, and do this in a way which closes the DevSecOps feedback loop between your Developer, Operations and Security teams.

  1. Sanitization libraries are especially needed when the examples become more complex, e.g. if the application must manipulate the input. DOMPurify version 1.0.9 also added Trusted Types support, so can still be used to help developers adopt this feature.
  2. Trusted Types has existed in Chrome and Edge since 2020, and should soon be coming to Firefox as well. However, it's not necessary to wait for Firefox or Safari to add support, because the large market share of Chrome and Edge will let you identify and fix your site's DOM-XSS issues, even if you do not set enforcing mode, and users of all browsers will benefit. Even so, it is great that Mozilla is now on board.

6 December 2023

Tim Retout: Nostalgia for my attention span

This post was possibly inspired by my daughter's homework assignment to interview an old person about technology change. Guess who's old now? Sometimes I look back at how life used to be, and remember what it was like. There are two key nostalgia points for me: before the internet, and before smartphones.

Before the internet

Before the internet, there were computers. There were always computers in my life, they were just less fancy, with mainly text and fewer graphics at first. These days we adapt our user interfaces to look more like DOS and call it 'retro', although claiming it's more efficient that way. Sometimes it is. The written word is a fundamental mode of human communication; it resonates. Some of my earliest programming memories were typing lines of BASIC code into some sort of BBC Micro, usually not successfully. Most of my computer learning was largely theoretical, through Usborne books (now available online! Thanks, internet, for reducing the marginal cost of publishing to approximately zero) but without the benefit of a computer of my own.

I went for a walk this morning, with a slight bite to the air, and bought a newspaper, like the old person I am (I'm leaning in). Imagine that - a physical paper full of news. When I was an inquisitive sixth-former, The Guardian cost 50 pence; today it was £2.80. This increase far outstrips the rate of inflation, which would bring it to around 90 pence. No, this must surely reflect the drop in circulation of the physical edition, and the rise in advertisement-filled web content.

Before smartphones

As the internet became mainstream, I remember dial-up, and Freeserve, and even the tail end of USENET, and IRC. This was still a time when email was a thing you had to be at your desk to check. Feature phones were very useful, but it was still possible to get lost. My university halls had a shared land-line phone on each floor. And I wrote on the internet. I remember commenting on blogs, and I think my writing was better back then; more free, less inhibited. I do find myself giving less attention to news squeezed into the small form-factor of a mobile phone. Chat messages and notification pings. I'm not convinced this is great progress for humanity.

18 April 2023

Tim Retout: Data Diodes

At ArgoCon today, Thomas Fricke gave a nice talk on Cloud Native Deployments in Air Gapped Environments, describing container vulnerability scanning in the German energy sector. Since he didn't mention data diodes, and since some of my colleagues at Oakdoor/PA Consulting make data diodes for a living, I thought this might be interesting to write about!

It's one thing to have an air-gapped system, but eventually, in order to be useful, you're going to have to move data into it, and this is going to need something better than just plugging a USB stick into your critical system. Just ask Iran how well this goes. Eight years after Stuxnet, the UK National Cyber Security Centre published the NCSC Safely Importing Data Pattern - but I found this a bit cryptic on first reading, because it's not clear what type of systems the pattern applies to, and it deliberately uses technology-neutral language. Also, this was published around the same time GDPR was being implemented, mentions 'sensitive or personal data', and claims to be aimed at small to medium organisations - but I don't know how many small businesses implement a MILS security architecture. So without picking up on the mention of 'data diode', you can be left scratching your head about how to actually implement the pattern. One answer using Oakdoor components:

The Oakdoor diodes themselves are quite interesting - they're electrical rather than optical like most data diodes. The other thing I'd always wondered is how on earth you could even establish a TCP handshake across one - the answer is, you can't, so you use a UDP-based protocol like TFTP for file transfer. In this way, you build the transform/verify and protocol break that the NCSC pattern requires. Congratulations, you can now import your documents to your otherwise air-gapped system without also importing malicious code, and without risking data exfiltration.

Note carefully that the Safely Importing Data pattern makes no guarantees about the integrity of your documents - they could be severely modified going through this process. For the same reason, I anticipate challenges applying this pattern to software binaries.
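To illustrate why a one-way link forces this design, here is a toy sketch of pushing a file across as one-way UDP datagrams; this is purely my own illustration of the concept, not how the Oakdoor products or TFTP actually work, and the address, port and filename are made up:
const dgram = require('dgram');
const fs = require('fs');
// The sender never receives anything back: no handshake, no ACKs.
// Real deployments layer error-correction and verification on top.
const socket = dgram.createSocket('udp4');
const data = fs.readFileSync('document.pdf');
const CHUNK = 1024;
let offset = 0;
function sendNext() {
  if (offset >= data.length) {
    socket.close();
    return;
  }
  const chunk = data.subarray(offset, offset + CHUNK);
  offset += CHUNK;
  socket.send(chunk, 9999, '10.0.0.2', sendNext); // fire and forget, no reply possible
}
sendNext();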

5 March 2023

Reproducible Builds: Reproducible Builds in February 2023

Welcome to the February 2023 report from the Reproducible Builds project. As ever, if you are interested in contributing to our project, please visit the Contribute page on our website.
FOSDEM 2023 was held in Brussels on the 4th & 5th of February and featured a number of talks related to reproducibility. In particular, Akihiro Suda gave a talk titled Bit-for-bit reproducible builds with Dockerfile discussing deterministic timestamps and deterministic apt-get (original announcement). There was also an entire track of talks on Software Bill of Materials (SBOMs). SBOMs are an inventory for software with the intention of increasing the transparency of software components (the US National Telecommunications and Information Administration (NTIA) published a useful Myths vs. Facts document in 2021).
On our mailing list this month, Larry Doolittle was puzzled why the Debian verilator package was not reproducible, but Chris Lamb pointed out that this was due to the use of Python's datetime.fromtimestamp over datetime.utcfromtimestamp.
James Addison was also having issues with a Debian package: in this case, the alembic package. Chris Lamb was able to identify the Sphinx documentation generator as the cause of the problem, and provided a potential patch that might fix it. This was later filed upstream.
Anthony Harrison wrote to our list twice, first by introducing himself and their background and later to mention the increasing relevance of Software Bill of Materials (SBOMs):
As I am sure everyone is aware, there is a growing interest in [SBOMs] as a way of improving software security and resilience. In the last two years, the US through the Exec Order, the EU through the proposed Cyber Resilience Act (CRA) and this month the UK has issued a consultation paper looking at software security, and SBOMs appear very prominently in each publication.

Tim Retout wrote a blog post discussing AlmaLinux in the context of CentOS, RHEL and supply-chain security in general:
Alma are generating and publishing Software Bill of Material (SBOM) files for every package; these are becoming a requirement for all software sold to the US federal government. What's more, they are sending these SBOMs to a third party (CodeNotary) who store them in some sort of Merkle tree system to make it difficult for people to tamper with later. This should theoretically allow end users of the distribution to verify the supply chain of the packages they have installed?

Debian

F-Droid & Android

diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb released versions 235 and 236; Mattia Rizzolo later released version 237. Contributions include:
  • Chris Lamb:
    • Fix compatibility with PyPDF2 (re. issue #331).
    • Fix compatibility with ImageMagick version 7.1.
    • Require at least version 23.1.0 to run the Black source code tests.
    • Update debian/tests/control after merging changes from others.
    • Don't write test data during a test.
    • Update copyright years.
    • Merged a large number of changes from others.
  • Akihiro Suda edited the .gitlab-ci.yml configuration file to ensure that versioned tags are pushed to the container registry.
  • Daniel Kahn Gillmor provided a way to migrate from PyPDF2 to pypdf (#1029741).
  • Efraim Flashner updated the tool metadata for isoinfo on GNU Guix.
  • FC Stegerman added support for Android resources.arsc files, improved a number of file-matching regular expressions and added support for Android dexdump; they also fixed a test failure (#1031433) caused by Debian's black package having been updated to a newer version.
  • Mattia Rizzolo:
    • updated the release documentation,
    • fixed a number of Flake8 errors,
    • updated the autopkgtest configuration to only install aapt and dexdump on architectures where they are available, making sure that the latest diffoscope release is a good fit for the upcoming Debian bookworm freeze.

reprotest

Reprotest version 0.7.23 was uploaded to both PyPI and Debian unstable, including the following changes:
  • Holger Levsen improved a lot of documentation, tidied the documentation as well, and experimented with a new --random-locale flag.
  • Vagrant Cascadian adjusted reprotest to no longer randomise the build locale and use a UTF-8 supported locale instead (re. #925879, #1004950), and to also support passing --vary=locales.locale=LOCALE to specify the locale to vary.
Separate to this, Vagrant Cascadian started a thread on our mailing list questioning the future development and direction of reprotest.

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In February, the following changes were made by Holger Levsen:
  • Add three new OSUOSL nodes and decommission the osuosl174 node.
  • Change the order of listed Debian architectures to show the 64-bit ones first.
  • Reduce the frequency that the Debian package sets and dd-list HTML pages update.
  • Sort Tested suite consistently (and Debian unstable first).
  • Update the Jenkins shell monitor script to only query disk statistics every 230min and improve the documentation.

Other development work

disorderfs version 0.5.11-3 was uploaded by Holger Levsen, fixing a number of issues with the manual page.
Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.
If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can get in touch with us via:

4 February 2023

Tim Retout: AlmaLinux and SBOMs

At CentOS Connect yesterday, Jack Aboutboul and Javier Hernandez presented a talk about AlmaLinux and SBOMs [video], where they are exploring a novel supply-chain security effort in the RHEL ecosystem.

Now, I have unfortunately ignored the Red Hat ecosystem for a long time, so if you are in a similar position to me: CentOS used to produce debranded rebuilds of RHEL; but Red Hat changed the project round so that CentOS Stream now sits in between Fedora Rawhide and RHEL releases, allowing the wider community to try out/contribute to RHEL builds before their release. This is credited with making early RHEL point releases more stable, but left a gap in the market for debranded rebuilds of RHEL; AlmaLinux and Rocky Linux are two distributions that aim to fill that gap.

Alma are generating and publishing Software Bill of Material (SBOM) files for every package; these are becoming a requirement for all software sold to the US federal government. What's more, they are sending these SBOMs to a third party (CodeNotary) who store them in some sort of Merkle tree system to make it difficult for people to tamper with later. This should theoretically allow end users of the distribution to verify the supply chain of the packages they have installed?

I am currently unclear on the differences between CodeNotary/ImmuDB vs. Sigstore/Rekor, but there's an SBOM devroom at FOSDEM tomorrow, so maybe I'll soon be learning that. This also makes me wonder if a Sigstore-based approach would be more likely to be adopted by Fedora/CentOS/RHEL, and whether someone should start a CentOS Software Supply Chain Security SIG to figure this out, or whether such an effort would need to live with the build system team to be properly integrated. It would be nice to understand the supply-chain story for CentOS and RHEL.

As I write this, I'm also reflecting that perhaps it would be helpful to explain what happens next in the SBOM consumption process; i.e. can this effort demonstrate tangible end-user value, like enabling AlmaLinux to integrate with a vendor-neutral approach to vulnerability management? Aside from the value of being able to sell it to the US government!
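As a sketch of one possible consumption step, here is a toy script that lists the components recorded in a CycloneDX-format JSON SBOM, the sort of inventory a vulnerability-management tool would match against advisories; the filename, and the assumption that the published SBOMs use this exact format, are mine for illustration only:
const fs = require('fs');
// Print each component recorded in a CycloneDX JSON SBOM
const sbom = JSON.parse(fs.readFileSync('package.cdx.json', 'utf8'));
for (const component of sbom.components || []) {
  console.log(`${component.name} ${component.version || '(no version)'}`);
}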

29 June 2022

Tim Retout: Git internals and SHA-1

LWN reminds us that Git still uses SHA-1 by default. Commit or tag signing is not a mitigation, and to understand why you need to know a little about Git's internal structure.

Git internally looks rather like a content-addressable filesystem, with four object types: tags, commits, trees and blobs. Content-addressable means changing the content of an object changes the way you address or reference it, and this is achieved using a cryptographic hash function. Here is an illustration of the internal structure of an example repository I created, containing two files (./foo.txt and ./bar/bar.txt) committed separately, and then tagged: a graphic showing an example Git internal structure featuring tags, commits, trees and blobs, and how these relate to each other. You can see how trees represent directories, blobs represent files, and so on. Git can avoid internal duplication of files or directories which remain identical. The hash function allows very efficient lookup of each object within git's on-disk storage.

Tag and commit signatures do not directly sign the files in the repository; that is, the input to the signature function is the content of the tag/commit object, rather than the files themselves. This is analogous to the way that GPG signatures actually sign a cryptographic hash of your email, and there was a time when this too defaulted to SHA-1. An attacker who can break that hash function can bypass the guarantees of the signature function. A motivated attacker might be able to replace a blob, commit or tree in a git repository using a SHA-1 collision. Replacing a blob seems easier to me than a commit or tree, because there is no requirement that the content of the files must conform to any particular format.

There is one key technical mitigation to this in Git, which is the SHA-1DC algorithm; this aims to detect and prevent known collision attacks. However, I will have to leave the cryptanalysis of this to the cryptographers! So, is this in your threat model? Do we need to lobby GitHub for SHA-256 support? Either way, I look forward to the future operational challenge of migrating the entire world's git repositories across to SHA-256.
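A small sketch of the content-addressing idea: a blob's object ID is the SHA-1 of a short header plus the file content, so changing even one byte of the file changes the address of the object. The header format is Git's standard one; the script itself is just my illustration:
const crypto = require('crypto');
// Compute a Git blob object ID: SHA-1 over "blob <size>\0" followed by the content
function gitBlobId(content) {
  const body = Buffer.from(content, 'utf8');
  const header = Buffer.from(`blob ${body.length}\0`, 'utf8');
  return crypto.createHash('sha1').update(Buffer.concat([header, body])).digest('hex');
}
console.log(gitBlobId('hello\n')); // matches `git hash-object` for a file containing "hello\n"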

26 April 2022

Tim Retout: Exploring StackRox

At the end of March, the source code to StackRox was released, following the 2021 acquisition by Red Hat. StackRox is a Kubernetes security tool which is now badged as Red Hat Advanced Cluster Security (RHACS), offering features such as vulnerability management, validating cluster configurations against CIS benchmarks, and some runtime behaviour analysis. In fact, it's such a diverse range of features that I have trouble getting my head round it from the product page or even the documentation. Source code is available via the StackRox organisation on GitHub, and the most obviously interesting repositories seem to be:

My initial curiosity has been around the 'collector', to better understand what runtime behaviour the tool can actually pick up. I was intrigued to find that the actual kernel component is a patched version of Falco's kernel module/eBPF probes; a few features are disabled compared to Falco, e.g. page faults and signal events. There's a list of supported syscalls in driver/syscall_table.c, which seems to have drifted slightly or be slightly behind the upstream Falco version? In particular I note the absence of io_uring, but given RHACS is mainly deployed on Linux 4.18 at the moment (RHEL 8), this is probably a non-issue. (But relevant if anyone were to run it on newer kernels.)

That's as far as I've got for now. Red Hat are making great efforts to reach out to the community; there's a Slack channel, and office hours recordings, and a community hub to explore further. It's great to see new free software projects created through acquisition in this way - I'm not sure I remember seeing a comparable example.

14 August 2017

Tim Retout: Jenkins milestone steps do not work yet

Public Service Announcement for anyone relying on Jenkins for continuous deployment - the milestone step plugin as of version 1.3.1 will not function correctly if you could have more than two builds running at once - older builds could get deployed after newer builds. See JENKINS-46097. A possible workaround is to add an initial milestone at the start of the pipeline, which will then allow builds to be killed early. (Builds are only killed early once they have passed their first milestone.) Going by the source history, I reckon this bug has been present since the milestone-step plugin was created.

25 April 2017

Tim Retout: Packet.net arm64 servers

Packet.net offer an ARMv8 server with 96 cores for $0.50/hour. I signed up and tried building Libreoffice to see what would happen. Debian isn't officially supported there yet, but they offer Ubuntu, which suffices for testing the hardware. Screenshot of htop showing one core in use and 95 idle. Final build time: around 12 hours, compared to 2hr 55m on the official arm64 buildd. Most of the Libreoffice build appeared to consist of "touch /some/file" repeated endlessly - I have a suspicion that the I/O performance might be low on this server (although I have no further evidence to offer for this). I think the next thing to try is building on a tmpfs, because the server has 128GB RAM available, and it's a shame not to use it.

1 January 2017

Tim Retout: Happy New Year!

Happy New Year! Apparently I failed to write a blog entry in all of 2016, and almost all of 2015. Probably says something profound about the rise of social media, or perhaps I was just very busy. I bet my writing has suffered. I have spent the last few days tidying up and clearing out clothes, bits of paper, and wires. I think there's light at the end of the tunnel.

14 January 2016

Petter Reinholdtsen: Always download Debian packages using Tor - the simple recipe

During his DebConf15 keynote, Jacob Appelbaum observed that those listening on the Internet lines would have good reason to believe a computer has a given security hole if it downloads a security fix from a Debian mirror. This is a good reason to always use encrypted connections to the Debian mirror, to make sure those listening do not know which IP address to attack. In August, Richard Hartmann observed that encryption was not enough, when it was possible to correlate the size of a download with a given security patch, or to notice that a download took place shortly after a security fix was released, and proposed to always use Tor to download packages from the Debian mirror. He was not the first to propose this, as the apt-transport-tor package by Tim Retout already existed to make it easy to convince apt to use Tor, but I was not aware of that package when I read the blog post from Richard. Richard discussed the idea with Peter Palfrader, one of the Debian sysadmins, and he set up a Tor hidden service on one of the central Debian mirrors using the address vwakviie2ienjx6t.onion, thus making it possible to download packages directly between two Tor nodes, making sure the network traffic is always encrypted. Here is a short recipe for enabling this on your machine, by installing apt-transport-tor and replacing http and https URLs with tor+http and tor+https, and using the hidden service instead of the official Debian mirror site. I recommend installing etckeeper before you start, to have a history of the changes done in /etc/.
apt install apt-transport-tor
sed -i 's% http://ftp.debian.org/% tor+http://vwakviie2ienjx6t.onion/%' /etc/apt/sources.list
sed -i 's% http% tor+http%' /etc/apt/sources.list
If you have more sources listed in /etc/apt/sources.list.d/, run the sed commands for these too. The sed commands assume you are using the ftp.debian.org Debian mirror. Adjust the commands (or just edit the file manually) to match your mirror. This works in Debian Jessie and later. Note that tools like apt-file only recently started using the apt transport system, and do not work with these tor+http URLs. For apt-file you need the version currently in experimental, which needs a recent apt version currently only in unstable. So if you need a working apt-file, this is not for you. Another advantage of this change is that your machine will start using Tor regularly and at fairly random intervals (every time you update the package lists or upgrade or install a new package), thus masking other Tor traffic done from the same machine. Using Tor will become normal for the machine in question. On Freedombox, APT is set up by default to use apt-transport-tor when Tor is enabled. It would be great if it was the default on any Debian system.

17 January 2015

Tim Retout: CPAN PR Challenge - January - IO-Digest

I signed up to the CPAN Pull Request Challenge - apparently I'm entrant 170 of a few hundred. My assigned dist for January was IO-Digest - this seems a fairly stable module. To get the ball rolling, I fixed the README, but this was somehow unsatisfying. :) To follow-up, I added Travis-CI support, with a view to validating the other open pull request - but that one looks likely to be a platform-specific problem. Then I extended the Travis file to generate coverage reports, and separately realised the docs weren't quite fully complete, so fixed this and added a test. Two of these have already been merged by the author, who was very responsive. Part of me worries that Github is a centralized, proprietary platform that we now trust most of our software source code to. But activities such as this are surely a good thing - how much harder would it be to co-ordinate 300 volunteers to submit patches in a distributed fashion? I suppose you could do something similar with the list of Debian source packages and metadata about the upstream VCS, say...

15 January 2015

Tim Retout: Docker London Meetup - January 2015

Last week, I visited London for the January Docker meetup, which was the first time I'd attended this group. It was a talk-oriented format, with around 200 attendees packed into Shoreditch Village Hall; free pizza and beer was provided thanks to the sponsors, which was awesome (and makes logistics easier when you're travelling there from work). There were three talks. First, Andrew Martin from British Gas spoke about how they use Docker for testing and continuous deployment of their Node.js microservices - buzzword bingo! But it's helpful to see how companies approach these things. Second, Johan Euphrosine from Google gave a short demo of Google Cloud Platform for running Docker containers (mostly around Container Engine, but also briefly App Engine). This was relevant to my interests, but I'd already seen this sort of talk online. Third, Dan Williams presented his holiday photos featuring a journey on a container ship, which wins points from me for liberal interpretation of the meetup topic, and was genuinely very entertaining/interesting - I just regret having to leave to catch a train halfway through. In summary, this was worth attending, but as someone just getting started with containers I'd love some sort of smaller meetings with opportunities for interaction/activity. There's such a variety of people/use cases for Docker that I'm not sure how much everyone had in common with each other; it would be interesting to find out.

2 January 2015

Tim Retout: Decluttering

Kate's been reading some book or other by KonMari. Hence we've rehomed lots of clothes, books and DVDs to charity and various places. I am told the key is to ask, "Does this item bring me joy?" Then if it doesn't bring you enough joy, it goes. The nice thing was, it was actually exciting to reveal the gems among my bookshelves, which were previously hidden by a load of second-rate books. True story: I was sitting downstairs deciding whether to splash out £25 for a particular book. Was called upstairs to make some 'joy decisions', and saw the very same book on the shelf already. Fast delivery!

1 January 2015

Tim Retout: Looking back at 2014

I have a tendency to forget what I've been up to - so I made a list for 2014. I started the year having recently watched many 30c3 videos online - these were fantastic, and I really should get round to the ones from 31c3. January is traditionally the peak time for the recruitment industry, so at work we were kept busy dealing with all the traffic. We'd recently switched the main job search to use Solr rather than MySQL, which helped - but we did spend a lot of time during the early months of the year converting tables from MyISAM to InnoDB. At the start of February was FOSDEM, and Kate and I took Sophie (then aged 10 months) to her first software conference. I grabbed a spot in the Go devroom for the Sunday afternoon, which was awesome. Downside: we got horribly ill while in Brussels. At work I was sorting out configuration management - this led to some Perl module backporting for Debian, and I uploaded Zookeeper at some point during the year as well. We currently make use of vagrant, chef and a combination of Debian packages and cpanm for Perl modules, but I have big plans to improve on that this year. Over a break from work I hacked up apt-transport-tor, which lets you install Debian packages over the Tor network. (This was inspired by videos from 30c3 and/or LibrePlanet, I think?) Continuing the general theme of paranoia, I attended the Don't Spy On Us campaign's day of action in June. Over the summer at work I was experimenting with Statsd and Graphite for monitoring. I also wrote Toggle, a Perl module for feature flags. In July I attended a London.pm meeting for the first time, and heard Thomas Klausner talk about OX - this nudged me into various talks at LPW (see below). Pubs have a lot to answer for. At some point I got an IPv6 tunnel working at home (although my ISP-provided router's wireless doesn't forward it), and I had an XBMC install going on a Raspberry Pi as another fun hack. In August and September I worked on packaging pump.io for Debian, and attended IndieWebCamp Brighton, where I delivered a talk/workshop on setting up TLS. (This all ties in to the paranoia theme.) I stalled the work on pump.io, partly because of licensing issues at build-dependency time (if you want to run all the tests) - but I expect I'll pick this up in 2015 once jessie is released. November was the London Perl Workshop, where I presented my work from the summer on statsd/graphite and Toggle, and a Bread::Board lightning talk. LPW was more enjoyable for me this year than previous years, probably because of the interesting people discussing various aspects of how feature flags ought to work. Simultaneously was the Cambridge MiniDebConf (why do these always clash?) where I think I fixed at least one RC bug. This is not an exhaustive list of everything I've done this year - there are more changes now lined up for 2015 which I haven't shared yet. But looking back, I'm pleased that the many small experiments I get up to do add up to something over time, and I can see that I'm achieving something. Here's to another year!

31 August 2014

Tim Retout: Website revamp

This weekend I moved my blog to a different server. This meant I could: I've tested it, and it's working. I'm hoping that I can swap out the Node.js modules one-by-one for the Debian-packaged versions.

28 August 2014

Tim Retout: Pump.io update 1

[The story so far: I'm packaging pump.io for Debian.] 4 packages uploaded to NEW: 2 packages eliminated as not needed: 1 package in progress: Got my eye on:

Thoughts

Currently I'm averaging around one package upload a day, I think? Which would mean ~1 month to go? But there may be challenges around getting packages through the NEW queue in time to build-depend on them. Someone has asked my temporary Twitter account whether I have a pump.io account. Technically, yes, I do - but I don't post anything on it, because I want to run my own server in the long term. As part of running my own server, I always find that easier if I'm installing software from Debian packages. Hence this work. Sledgehammer, meet nut.

23 August 2014

Tim Retout: Packaging pump.io for Debian

I intend to package pump.io for Debian. It's going to take a long time, but I don't know whether that's weeks or years yet. The world needs decentralized social networking. I discovered the tools that let me create this wiki summary of the progress in pump.io packaging. There are at least 35 dependencies that need uploading, so this would go a lot faster if it weren't a solo effort - if anyone else has some time, please let me know! But meanwhile I'm hoping to build some momentum. I think it's important to keep the quality of the packaging as high as possible, even while working through so many. It would cost a lot of time later if I had to go back and fix bugs in everything. I really want to be running the test suites in these builds, but it's not always easy. One of the milestones along the way might be packaging nodeunit. Nodeunit is a Node.js unit testing framework (duh), used by node-bcrypt (and, unrelatedly, statsd, which would be pretty cool to have in Debian too). Last night I filed eight pull requests to try and fix up copyright/licensing issues in dependencies of nodeunit. Missing copyright statements are one of the few things I can't fix by myself as a packager. All I can do is wait, and package other dependencies in the meantime. Fortunately there are plenty of those. And I have not seen so many issues in direct dependencies of pump.io itself - or at least they've been fixed in git.
