Search Results: "Thomas Goirand"

28 December 2024

Thomas Goirand: Running a Lenovo Legion pro 7 laptop under Debian

As I was tired of long build times, I convinced my boss to buy me a Lenovo Legion Pro 7. The reason: this laptop has an AMD Ryzen 9 7945HX with 16 cores (32 threads). This greatly reduces the time I spend just waiting for my laptop to compile or run unit tests, especially for big packages like Ceph, OpenVSwitch, and so on. When buying it, I knew it would not be a good fit for Debian, as this type of laptop is aimed at gaming and its support under Linux is rather bad. I wish Lenovo had other policies, but that is the way it is: if you're a Linux user, you're apparently not supposed to need a big CPU. Anyway, I have slowly been able to fix all the issues over this year. In this blog post I'll explain how I fixed each problem, in the hope it can be useful to others, and I'll explain what the src:lenovolegionlinux package (that I now maintain in Debian) does.

Video: The laptop comes with an nVidia RTX 4080 and a Radeon. I quickly tried the Radeon, but couldn't make it work with an external monitor, so I gave up on it, disabled it, and now I'm using the proprietary nVidia driver from non-free. I don't like it: the nVidia card drains too much power, and I don't care at all about 3D acceleration. I would have preferred an Intel board, but there was no choice: all laptops with this kind of CPU come with a gamer's 3D card. Anyway, apart from the power issue, it works out well.

Fan control: This sounds like a non-issue, but it is a huge one. Indeed, without controlling the fan, it is impossible to get the full potential of the CPU, which otherwise throttles. One may end up using the laptop at a few hundred MHz instead of 5 GHz+. More on this later.

Sound: It took me a really long time to figure out what to do. While the sound card works out of the box, the issue was that my laptop came with a TI (Texas Instruments) speaker firmware that isn't enabled by default. I suppose the purpose is to save power when it isn't in use. Anyway, to have sound working in Debian, one needs to run at least kernel 6.10, which for me means running the Bookworm backport, so that there is a kernel module for the speakers. But that's not all: the speakers also need a proprietary firmware in /lib/firmware/TAS2XXX38*.bin. I was able to find it on the ti.com forum; as I tried so many packages, I wouldn't be able to tell which one was the correct one. Once that was done, the firmware needs to be initialized through the i2c interface. I found a script that does this, which I pushed into my lenovolegionlinux package (see below).

WiFi: WiFi worked out of the box for me, except that it wouldn't wake up after I closed the laptop lid. This fixed it for me in /etc/modprobe.d/rtw8852be.conf:

options rtw89_pci disable_aspm_l1=y disable_aspm_l1ss=y
options rtw89_core disable_ps_mode=y

lenovolegionlinux package: I came across https://github.com/johnfanv2/LenovoLegionLinux, which I packaged. The result is now 4 binary packages: lenovolegionlinux-dkms, which provides the kernel module for accessing the fan control; python3-legion-linux, which provides legion_cli and legion_gui, written in Python, that make it possible to control the kernel module (I often run sudo legion_gui, click on "Other options" and switch the power profile from quiet to balanced; many things in this GUI do not work for me, like the fancurve thingy, but they should work for other flavors of Legion laptops, so please feel free to contribute); legiond, which provides a daemon for setting up the fan curve on wake-up; and finally lenovolegionlinux-sound, a new Debian binary package containing my i2c speaker script, which I have just uploaded today in the hope it may be useful to others.

Conclusion: Finally, almost everything is working as expected. Only my webcam (lsusb says it's a Luxvisions Innotech Limited Integrated Camera) went dark at some point (it did work previously). It now behaves as if it is working, but transmits only a black picture. If anyone knows how to fix this, please tell me. Also, I only get 40 minutes of battery time if I'm lucky; I hope this can be fixed. But overall, I'm happy with the laptop. Thanks to Ding Shenghao for his support of many people on the ti.com forum, and thanks to the people maintaining LenovoLegionLinux, who helped me a lot with writing this Debian package. Please try lenovolegionlinux in Debian, report issues, and help me improve it. It is in Salsa's debian namespace in the hope that others may push contributions.
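For reference, here is a rough sketch of how one might pull all of this together on a bookworm system; the package names come from this post, but the exact commands (and the need for the backports kernel image) are assumptions, so adapt them to your setup:

  # Kernel >= 6.10 from bookworm-backports, needed for the speaker module
  sudo apt install -t bookworm-backports linux-image-amd64
  # Fan control module, CLI/GUI tools, wake-up daemon and the i2c speaker script
  sudo apt install lenovolegionlinux-dkms python3-legion-linux legiond lenovolegionlinux-sound
  # Switch the power profile from "quiet" to "balanced" (under "Other options")
  sudo legion_gui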

22 August 2024

Thomas Goirand: Packaging Home Assistant

During Debconf, Edward Betts and I started packaging Home Assistant for Debian. It consists of hundreds of Python packages: so far, we have counted at least 675. That's a lot, though most packages are just libraries to talk with some IoT devices and APIs. It's fairly easy to create a new package: it takes me about 15 to 20 minutes, and probably half that time for Edward. And it's a lot of fun. So far, in one month, we have managed to package about a third of the list (probably 200+ Python packages already). Once we've done all the dependencies, we may start to have fun with the core of the application! At the current speed, hopefully we'll be done before the end of the year. Edward and I have sworn to make at least one package a day, which I've been doing so far, and Edward has done way more. We also received contributions from Silton0506, Tianyu, piotr, EiPi Fun, sourabhtk37, and Count-Dracula, as per the very bottom of the TODO list in the wiki (see link below). If you have a bit of free time, we'd love to have more contributors. Here's where to get the needed information:
  • Our team in Salsa: https://salsa.debian.org/homeassistant-team/
  • Our TODO list: https://wiki.debian.org/Python/HomeAssistant
  • Our DDPO Q/A page: https://qa.debian.org/developer.php?login=team%2Bhomeassistant%40tracker.debian.org
  • IRC: feel free to join us on #debian-homeassistant
Discussing this with a lot of people, I realized that A LOT of DDs are actually using Home Assistant. Wouldn't you like it better if it was just an apt install away? Any DD can simply take a package from the wiki, open an ITP, upload its debianized source to Salsa, and upload to the Debian archive (a sketch of that workflow follows below). Most are very simple packages to make.
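For those wondering what the per-package workflow looks like in practice, here is a hedged sketch; the Salsa path is derived from the team URL above, and the repository name is purely illustrative:

  # 1. pick an unclaimed package from https://wiki.debian.org/Python/HomeAssistant
  # 2. file the ITP bug (reportbug will offer the ITP template)
  reportbug wnpp
  # 3. push the debianized source to the team namespace (hypothetical path)
  git clone git@salsa.debian.org:homeassistant-team/some-iot-library.git
  # 4. build, run lintian, then upload once you are happy
  dpkg-buildpackage -us -uc
  lintian ../some-iot-library_*.changes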

10 May 2024

Reproducible Builds: Reproducible Builds in April 2024

Welcome to the April 2024 report from the Reproducible Builds project! In our reports, we attempt to outline what we have been up to over the past month, as well as mentioning some of the important things happening more generally in software supply-chain security. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. New backseat-signed tool to validate distributions source inputs
  2. NixOS is not reproducible
  3. Certificate vulnerabilities in F-Droid s fdroidserver
  4. Website updates
  5. Reproducible Builds and Insights from an Independent Verifier for Arch Linux
  6. libntlm now releasing minimal source-only tarballs
  7. Distribution work
  8. Mailing list news
  9. diffoscope
  10. Upstream patches
  11. reprotest
  12. Reproducibility testing framework

New backseat-signed tool to validate distributions' source inputs kpcyrd announced a new tool called backseat-signed, after:
I figured out a somewhat straight-forward way to check if a given git archive output is cryptographically claimed to be the source input of a given binary package in either Arch Linux or Debian (or both).
Elaborating more in their announcement post, kpcyrd writes:
I believe this to be the reproducible source tarball thing some people have been asking about. As explained in the README, I believe reproducing autotools-generated tarballs isn't worth everybody's time and instead a distribution that claims to build from source should operate on VCS snapshots instead of tarballs with 25k lines of pre-generated shell-script.
Indeed, many distributions' packages already build from VCS snapshots, and this trend is likely to accelerate in response to the xz incident. The announcement led to a lengthy discussion on our mailing list, as well as a shorter followup thread from kpcyrd about bootstrapping Autotools projects.
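As a purely illustrative aside (this is not backseat-signed's own interface, which is not shown here), the kind of VCS snapshot kpcyrd refers to can be produced and fingerprinted with plain git:

  # Produce a tarball directly from a tag and hash it; no pre-generated
  # autotools output is involved. (Note: git archive output can vary slightly
  # between git versions.)
  git archive --format=tar --prefix=project-1.2.3/ v1.2.3 | sha256sum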

NixOS is not reproducible Morten Linderud posted a post on his blog this month, provocatively titled "NixOS is not reproducible". Although quickly admitting that his title is indeed clickbait, Morten goes on to clarify the precise guarantees and promises that NixOS provides its users. Later in the post, Morten mentions that he was motivated to write it because:
I have heavily invested my free-time on this topic since 2017, and met some of the accomplishments we have had with "Doesn't NixOS solve this?" for just as long, and I thought it would be of peoples' interest to clarify[.]

Certificate vulnerabilities in F-Droid's fdroidserver In early April, Fay Stegerman announced to the oss-security mailing list a certificate pinning bypass vulnerability and Proof of Concept (PoC) in fdroidserver, the F-Droid tool for managing builds, indexes, updates, and deployments for F-Droid repositories.
We observed that embedding a v1 (JAR) signature file in an APK with minSdk >= 24 will be ignored by Android/apksigner, which only checks v2/v3 in that case. However, since fdroidserver checks v1 first, regardless of minSdk, and does not verify the signature, it will accept a fake certificate and see an incorrect certificate fingerprint. [ ] We also realised that the above mentioned discrepancy between apksigner and androguard (which fdroidserver uses to extract the v2/v3 certificates) can be abused here as well. [ ]
Later on in the month, Fay followed up with a second post detailing a third vulnerability and a script that could be used to scan for potentially affected .apk files and mentioned that, whilst upstream had acknowledged the vulnerability, they had not yet applied any ameliorating fixes.
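To see which signature schemes an APK actually carries, and hence which certificate Android itself will trust, something along these lines can be used (apksigner ships with the Android build-tools; the exact output format varies between versions):

  apksigner verify --verbose --print-certs app.apk
  # Typical output includes lines such as:
  #   Verified using v1 scheme (JAR signing): false
  #   Verified using v2 scheme (APK Signature Scheme v2): true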

Website updates There were a number of improvements made to our website this month, including Chris Lamb updating the archive page to recommend -X and unzipping with TZ=UTC [ ] and adding Maven, Gradle, JDK and Groovy examples to the SOURCE_DATE_EPOCH page [ ]. In addition, Jan Zerebecki added a new /contribute/opensuse/ page [ ] and Sertonix fixed the automatic RSS feed detection [ ][ ].
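By way of illustration, the sort of recommendation mentioned above looks roughly like this (a sketch, not the exact wording of the archive page):

  # Create a zip without extra file attributes such as timestamps and UID/GID
  zip -X archive.zip file1 file2
  # Extract with a fixed timezone so the resulting mtimes do not depend on the host
  TZ=UTC unzip archive.zip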

Reproducible Builds and Insights from an Independent Verifier for Arch Linux Joshua Drexel, Esther Hänggi and Iyán Méndez Veiga of the School of Computer Science and Information Technology, Hochschule Luzern (HSLU) in Switzerland published a paper this month entitled Reproducible Builds and Insights from an Independent Verifier for Arch Linux. The paper establishes the context as follows:
Supply chain attacks have emerged as a prominent cybersecurity threat in recent years. Reproducible and bootstrappable builds have the potential to reduce such attacks significantly. In combination with independent, exhaustive and periodic source code audits, these measures can effectively eradicate compromises in the building process. In this paper we introduce both concepts, we analyze the achievements over the last ten years and explain the remaining challenges.
What is more, the paper aims to:
contribute to the reproducible builds effort by setting up a rebuilder and verifier instance to test the reproducibility of Arch Linux packages. Using the results from this instance, we uncover an unnoticed and security-relevant packaging issue affecting 16 packages related to Certbot [ ].
A PDF of the paper is available.

libntlm now releasing minimal source-only tarballs Simon Josefsson wrote on his blog this month that, going forward, the libntlm project will now be releasing what they call "minimal source-only tarballs":
The XZUtils incident illustrate that tarballs with files that are not included in the git archive offer an opportunity to disguise malicious backdoors. [The] risk of hiding malware is not the only motivation to publish signed minimal source-only tarballs. With pre-generated content in tarballs, there is a risk that GNU/Linux distributions [ship] generated files coming from the tarball into the binary *.deb or *.rpm package file. Typically the person packaging the upstream project never realized that some installed artifacts was not re-built[.]
Simon's post goes into further detail about how this was achieved, describes some potential caveats, and counters some expected responses as well. A shorter version can be found in the announcement for the 1.8 release of libntlm.

Distribution work In Debian this month, Helmut Grohne filed a bug suggesting the removal of dh-buildinfo, a tool to generate and distribute .buildinfo-like files within binary packages. Note that this is distinct from the .buildinfo generation performed by dpkg-genbuildinfo; by contrast, the entirely optional dh-buildinfo generated a debian/buildinfo file that would be shipped within binary packages as /usr/share/doc/package/buildinfo_$arch.gz. Adrian Bunk recently asked about including source hashes in Debian's .buildinfo files, which prompted Guillem Jover to refresh some old patches to dpkg to make this possible, and which revealed some quirks Vagrant Cascadian discovered when testing. In addition, 21 reviews of Debian packages were added, 22 were updated and 16 were removed this month, adding to our knowledge about identified issues. A number of issue types have been added, such as the new random_temporary_filenames_embedded_by_mesonpy and timestamps_added_by_librime toolchain issues.

In openSUSE, it was announced that their Factory distribution enabled bit-by-bit reproducible builds for almost all parts of the packages. Previously, more parts needed to be ignored when comparing package files, but now only the signature needs to be deleted. In addition, Bernhard M. Wiedemann published theunreproduciblepackage as a proper .rpm package, which allows better testing of tools intended to debug reproducibility. Furthermore, it was announced that Bernhard's work on a 100% reproducible openSUSE-based distribution will be funded by NLnet. He also posted another monthly report for his reproducibility work in openSUSE.

In GNU Guix, Janneke Nieuwenhuizen submitted a patch set for creating a reproducible source tarball for Guix, that is to say, ensuring that make dist is reproducible when run from Git. [ ]

Lastly, in Fedora, a new wiki page was created to propose a change to the distribution. Titled Changes/ReproduciblePackageBuilds, the page summarises itself as a proposal whereby a post-build cleanup is integrated into the RPM build process so that common causes of build irreproducibility in packages are removed, making most Fedora packages reproducible.

Mailing list news On our mailing list this month:
  • Continuing a thread started in March 2024 about the Arch Linux minimal container now being 100% reproducible, John Gilmore followed up with a post about the practical and philosophical distinctions of local vs. remote storage of the various artifacts needed to build packages.
  • Chris Lamb asked the list which conferences readers are attending these days: "After peak Covid and other industry-wide changes, conferences are no longer the must attend events they previously were, especially in the area of software supply-chain security. In rough, practical terms, it seems harder to justify conference travel today than it did in mid-2019." The thread generated a number of responses which would be of interest to anyone planning travel in Q3 and Q4 of 2024.
  • James Addison wrote to the list about a quirk in Git related to its core.autocrlf functionality, thus helpfully passing on a "slightly off-topic and perhaps not of direct relevance to anyone on the list today" note that "might still be the kind of issue that is useful to be aware of if-and-when puzzling over unexpected git content / checksum issues (situations that I do expect people on this list encounter from time-to-time)". A brief illustration of the setting follows below.
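For readers unfamiliar with the setting James mentions, these are the usual values (illustrative only):

  git config core.autocrlf false   # never translate line endings (the safest choice for reproducibility)
  git config core.autocrlf input   # convert CRLF to LF on commit, leave files untouched on checkout
  git config core.autocrlf true    # convert LF to CRLF on checkout, back to LF on commit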

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb uploaded versions 263, 264 and 265 to Debian and made the following additional changes:
  • Don't crash on invalid .zip files, even if we encounter their badness halfway through the file and not at the time of their initial opening. [ ]
  • Prevent odt2txt tests from always being skipped due to an (impossibly) new version requirement. [ ]
  • Avoid parens-in-parens in test skipping messages. [ ]
  • Ensure that tests with >=-style version constraints actually print the tool name. [ ]
In addition, Fay Stegerman fixed a crash when there are (invalid) duplicate entries in .zip files (originally reported as Debian bug #1068705) [ ]. Fay also added a user-visible note to a diff when there are duplicate entries in ZIP files [ ]. Lastly, Vagrant Cascadian added an external tool pointer for the zipdetails tool under GNU Guix [ ], proposed updates to diffoscope in Guix as well [ ] (which were merged as [264] [265]), fixed a regression in test coverage and increased the verbosity of the test suite [ ].
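A typical invocation, comparing two builds of the same package and writing an HTML report, looks like this (paths are placeholders):

  diffoscope --html report.html build1/hello_1.0-1_amd64.deb build2/hello_1.0-1_amd64.deb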

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

reprotest reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, reprotest version 0.7.27 was uploaded to Debian unstable by Vagrant Cascadian, who made the following additional changes:
  • Enable specific number of CPUs using --vary=num_cpus.cpus=X. [ ]
  • Consistently use 398 days for time variation, rather than choosing randomly each time. [ ]
  • Disable builds of arch:any packages. [ ]
  • Update the description for the build_path.path option in README.rst. [ ]
  • Update escape sequences for compatibility with Python 3.12. (#1068853). [ ]
  • Remove the generic upstream signing-key [ ] and update the packages signing key with the currently active team members [ ].
  • Update the packaging Standards-Version to 4.7.0. [ ]
In addition, Holger Levsen fixed some spelling errors detected by the spellintian tool [ ] and Vagrant Cascadian updated reprotest in GNU Guix to 0.7.27.
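As an example of the new variation mentioned above, a reprotest run pinning the build to 4 CPUs might look like the following (the schroot name is a placeholder; see reprotest(1) for the virtual server arguments):

  reprotest --vary=num_cpus.cpus=4 hello_1.0-1.dsc -- schroot unstable-amd64-sbuild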

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In April, an enormous number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Adjust for changed internal IP addresses at Codethink. [ ]
    • Automatically cleanup failed diffoscope user services if there are too many failures. [ ][ ]
    • Configure two new nodes at infomaniak.cloud. [ ][ ]
    • Schedule Debian experimental even less. [ ][ ]
  • Breakage detection:
    • Exclude currently building packages from breakage detection. [ ]
    • Be more noisy if diffoscope crashes. [ ]
    • Health check: provide clickable URLs in jenkins job log for failed pkg builds due to diffoscope crashes. [ ]
    • Limit graph to about the last 100 days of breakages only. [ ]
    • Fix all found files with bad permissions. [ ]
    • Prepare dealing with diffoscope timeouts. [ ]
    • Detect more cases of failure to debootstrap base system. [ ]
    • Include timestamps of failed job runs. [ ]
  • Documentation updates:
    • Document how to access arm64 nodes at Codethink. [ ]
    • Document how to use infomaniak.cloud. [ ]
    • Drop notes about long stalled LeMaker HiKey960 boards sponsored by HPE and hosted at ETH. [ ]
    • Mention osuosl4 and osuosl5 and explain their usage. [ ]
    • Mention that some packages are built differently. [ ][ ]
    • Improve language in a comment. [ ]
    • Add more notes how to query resource usage from infomaniak.cloud. [ ]
  • Node maintenance:
    • Add ionos4 and ionos14 to THANKS. [ ][ ][ ][ ][ ]
    • Deprecate Squid on ionos1 and ionos10. [ ]
    • Drop obsolete script to powercycle arm64 architecture nodes. [ ]
    • Update system_health_check for new proxy nodes. [ ]
  • Misc changes:
    • Make the update_jdn.sh script more robust. [ ][ ]
    • Update my SSH public key. [ ]
In addition, Mattia Rizzolo added some new host details. [ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

24 September 2023

Thomas Goirand: Searching for a Ryzen 9, 16 cores, small laptop

The new 7945HX CPU from AMD is currently the most powerful. I'd love to have one of them, to replace the now aging 6-core Xeon that I've been using for more than 5 years. So, I've been searching for a laptop with that CPU. Absolutely all of the laptops I found with this CPU also embed a very powerful RTX 40x0 series GPU, which I have no use for: I don't play games, and I don't do AI. I just want something that builds Debian packages fast (like Ceph, which takes more than 1h to build for me). The more cores I get, the faster all the OpenStack unit tests run too (stestr does a moderately good job at spreading the tests over all cores). It would be OK to pay more for a GPU that I don't need, and to deal with the annoyance of the NVidia driver, if only I could find something with a correct size. But I can only find 16-inch or bigger laptops that won't fit in my scooter's back case (most of the time, these laptops have a 17-inch screen: that's way too big). Currently, I found: If one of the readers of this post finds a smaller laptop with a 7945HX CPU, please let me know! Even better if I can get rid of the expensive NVidia GPU.

16 July 2022

Thomas Goirand: My work during debcamp

I arrived in Prizren late on Wednesday. Here's what I did during debcamp (so over 3 days). I hope this post just motivates others to contribute more to Debian. At least 2 DDs want to upload packages that need a new version of python3-jsonschema (ie: version > 4.x). Unfortunately, version 4 broke a few packages. I therefore uploaded it to Experimental a few months/weeks ago, so I could see the result of autopkgtest by reading the pseudo-excuse page. And it showed that a few packages broke. Here are the ones used by (or part of) OpenStack: Thanks to a reactive upstream, I was able to fix the first 4 above, but not Sahara yet. Vitrage popped up when I uploaded Debian release 2 of jsonschema, surprisingly. Also, the python3-jsonschema autopkgtest itself was broken because python3-pip was missing in depends, but that should be fixed as well.
I then filed bugs for packages not under my control. It looks like now there's also spyder, which wasn't in the list a few hours ago; maybe I should also file a bug against it. At this point, I don't think the python-jsonschema transition is finished, but it's on the right track.
Then I also uploaded a new package of Ceph, removing ceph-mgr-diskprediction-local because it depended on python3-sklearn, which the release team wanted to remove. I also prepared a point release update for it, but I'm currently waiting for the previous upload to migrate to testing before uploading the point release.

Last, I wrote the missing update command for extrepo, and pushed the merge request to Salsa. Now extrepo should be feature-complete (at least from my point of view). I also merged the patch for numberstation fixing debian/copyright, and uploaded it to the NEW queue. It's a new package that does 2-factor authentication, and is mobile friendly: it works perfectly on any Mobian-powered phone. Next, I intend to work with Arthur on the cloud image finder. I hope we can find the time to work on it so it does what I need (ie: support the kind of setup I want to do, with HA, puppet, etc.).

6 October 2021

Thomas Goirand: OpenStack Xena, the 24th OpenStack release, is out

It was out at 3pm, and I managed to finish uploading the last bits to Unstable at 9pm. Of course, that's because all of the packaging and testing work was done before the release date. All of it is, as usual, also available through a non-official Bullseye backports repository that can be added using extrepo (ie: extrepo enable openstack_xena); a quick example follows below.
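Concretely, adding that repository on Bullseye looks like this (a sketch; the repository name is the one given above):

  sudo apt install extrepo
  sudo extrepo enable openstack_xena
  sudo apt update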

Thomas Goirand: Infomaniak launches its public IaaS cloud with ground breaking prices

My employer, the biggest Swiss server hosting company, Infomaniak, has just opened registration for its new IaaS (Infrastructure as a Service) OpenStack-based public cloud. Well, in fact, it's been open for a week or so. Previously, it was only in beta (during that beta period, we hosted, for free, the whole Debconf 21 infrastructure). Nothing really new in the market, except that it is by far cheaper than most (if not all) of its competitors (OpenStack-based or not), including AWS, GCE or Azure. Also, everything is hosted in Switzerland, in our own data centers, where data protection is written into the law (and Infomaniak often advertises about data privacy: this is real here).

Not only is Infomaniak (by far) the cheapest offer in the market (including a 300 CHF free tier: enough for our smallest VM for a full year), but we also have very good technical support, and the hardware we use is top notch: Some of our customers didn't even believe how we could do such pricing. Well, the reason is simple: most of our competitors are simply overpriced and are making too much money. Since we're late in the market, and newer hardware (with many cores on a single server) makes it possible to increase density without too much over-commit, my bosses decided that since we could, we would be the cheapest! Hopefully, this will work as a good business strategy.

All of that public cloud infrastructure has been set up with OpenStack Cluster Installer, for which I'm the main author, and which is fully in Debian. All of this is running on a plain, unmodified Debian Bullseye (well, with a few OpenStack packages a little bit more up-to-date, but really not much, and all of that is publicly available). Last, choosing the cheapest and best offer is also a good action: it promotes OpenStack and cloud computing in Debian, which I believe is the least vendor-locked-in IaaS solution.

30 August 2021

Thomas Goirand: developers-reference needs love

During Debconf, Holger, who's one of the developers-reference maintainers, gave a quick presentation explaining that the developers-reference needs some love. Indeed, it has gathered dust, and a useful refresh would be very welcome. Holger pointed at the list of bugs:
https://bugs.debian.org/src:developers-reference After having a quick look at that list, after Holger's Debconf presentation, I wrote to him on IRC: <zigo> Many of the bugs you refered are indeed easily actionable, if all of us just try to help for one bug, that'd be a huge improvement of that doc. Then, as I was waiting for the closing ceremony of Debconf, I thought I shouldn't just say it, but actually do something about it. I decided to address https://bugs.debian.org/793633 as I thought it was easy. In just a few minutes, I was able to write a first patch, as seen here: https://salsa.debian.org/debian/developers-reference/-/merge_requests/27 I wrote about it on IRC, and a few people helped with rephrasing what was there (thanks to Fil for correcting my English mistakes, and to others for the content). Today, two days after the MR was opened, I decided it had been open long enough to gather comments and merged it. So we now have a brand new shiny chapter about backports and how to handle them. I'm sure that new part is perfectible, so do not hesitate to patch what I just wrote if you feel you can do better. If I'm writing this blog post, it is not to promote myself: the goal is to promote the developers-reference manual and push others in Debian to do the same. Please do what Holger suggested, and what I just did: contribute to the document by addressing just one of the currently open bugs. If all DDs do it, we'll get a much nicer document, and help others contribute to Debian. This will take less than 30 minutes of your time, and it is very much OK if you do this only once. It is really easy: just clone https://salsa.debian.org/debian/developers-reference/ and write a patch (a sketch of the workflow follows below). If you're a DD, you can even merge your patch yourself once you're satisfied with it.
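The workflow sketch mentioned above, with an illustrative branch name and bug number:

  git clone https://salsa.debian.org/debian/developers-reference.git
  cd developers-reference
  git checkout -b fix-NNNNNN
  # edit the relevant source file, then:
  git commit -a -m "Clarify section X (Closes: #NNNNNN)"
  git push -u origin fix-NNNNNN    # then open a merge request on Salsa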

23 April 2021

Thomas Goirand: Puppet and OS detection

As you may know, Puppet uses facter to get facts about the machine it is about to configure. That's fine, and a nice concept. One can later use variables in a Puppet manifest to do different things depending on what facter reports, for example the operating system name. Oh no! This thing is really stupid. Here's the code one has to write to be compatible with Puppet from version 3 up to 5:

if $::lsbdistcodename == undef {
  # This works around differences between facter versions
  if $facts['os']['lsb'] != undef {
    $distro_codename = $facts['os']['lsb']['distcodename']
  } else {
    $distro_codename = $facts['os']['distro']['codename']
  }
} else {
  $distro_codename = downcase($::lsbdistcodename)
}
Indeed, the global variable $::lsbdistcodename still existed up to Stretch (and is gone in Buster). The global $::facts wasn't an array before (but a hash), so on Jessie, accessing it with ['os'] breaks with the error message "facts is not a hash or array". So one needs the full code above to make this work. It's OK to improve things; it is NOT OK to break OS detection. To me this is very bad practice from the upstream Puppet authors. I'm publishing this in the hope of helping others avoid falling into the same trap as I did.
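For completeness, here is an illustrative use of the resulting variable later in a manifest (this example is not from the original post):

file { '/etc/apt/sources.list.d/backports.list':
  content => "deb http://deb.debian.org/debian ${distro_codename}-backports main\n",
}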

17 January 2021

Wouter Verhelst: Software available through Extrepo

Just over 7 months ago, I blogged about extrepo, my answer to the "how do you safely install software on Debian without downloading random scripts off the Internet and running them as root" question. I also held a talk during the recent "MiniDebConf Online" that was held, well, online. The most important part of extrepo is "what can you install through it". If the number of available repositories is too low, there's really no reason to use it. So, I thought, let's look what we have after 7 months... To cut to the chase, there's a bunch of interesting content there, although not all of it has a "main" policy. Each of these can be enabled by installing extrepo, and then running extrepo enable <reponame>, where <reponame> is the name of the repository. Note that the list is not exhaustive, but I intend to show that even though we're nowhere near complete, extrepo is already quite useful in its current state:

Free software
  • The debian_official, debian_backports, and debian_experimental repositories contain Debian's official, backports, and experimental repositories, respectively. These shouldn't have to be managed through extrepo, but then again it might be useful for someone, so I decided to just add them anyway. The config here uses the deb.debian.org alias for CDN-backed package mirrors.
  • The belgium_eid repository contains the Belgian eID software. Obviously this is added, since I'm upstream for eID, and as such it was a large motivating factor for me to actually write extrepo in the first place.
  • elastic: the elasticsearch software.
  • Some repositories, such as dovecot, winehq and bareos contain upstream versions of their respective software. These repositories contain software that is available in Debian, too; but their upstreams package their most recent release independently, and some people might prefer to run those instead.
  • The sury, fai, and postgresql repositories, as well as a number of repositories such as openstack_rocky, openstack_train, haproxy-1.5 and haproxy-2.0 (there are more) contain more recent versions of software packaged in Debian already by the same maintainer of that package repository. For the sury repository, that is PHP; for the others, the name should give it away. The difference between these repositories and the ones above is that it is the official Debian maintainer for the same software who maintains the repository, which is not the case for the others.
  • The vscodium repository contains the unencumbered version of Microsoft's Visual Studio Code; i.e., the codium version of Visual Studio Code is to code as the chromium browser is to chrome: it is a build of the same softare, but without the non-free bits that make code not entirely Free Software.
  • While Debian ships with at least two browsers (Firefox and Chromium), additional browsers are available through extrepo, too. The iridiumbrowser repository contains a Chromium-based browser that focuses on privacy.
  • Speaking of privacy, perhaps you might want to try out the torproject repository.
  • For those who want to do Cloud Computing on Debian in ways that aren't covered by Openstack, there is a kubernetes repository that contains the Kubernetes stack, as well as the google_cloud one containing the Google Cloud SDK.

Non-free software While these are available to be installed through extrepo, please note that non-free and contrib repositories are disabled by default. In order to use these repositories, you must first enable the corresponding policies; this can be accomplished through /etc/extrepo/config.yaml (see the sketch after this list).
  • In case you don't care about freedom and want the official build of Visual Studio Code, the vscode repository contains it.
  • While we're on the subject of Microsoft, there's also Microsoft Teams available in the msteams repository. And, hey, skype.
  • For those who are not satisfied with the free browsers in Debian or any of the free repositories, there's opera and google_chrome.
  • The docker-ce repository contains the official build of Docker CE. While this is the free "community edition" that should have free licenses, I could not find a licensing statement anywhere, and therefore I'm not 100% sure whether this repository is actually free software. For that reason, it is currently marked as a non-free one. Merge Requests for rectifying that from someone with more information on the actual licensing situation of Docker CE would be welcome...
  • For gamers, there's Valve's steam repository.
Again, the above lists are not meant to be exhaustive. Special thanks go out to Russ Allbery, Kim Alvefur, Vincent Bernat, Nick Black, Arnaud Ferraris, Thorsten Glaser, Thomas Goirand, Juri Grabowski, Paolo Greppi, and Josh Triplett, for helping me build the current list of repositories. Is your favourite repository not listed? Create a configuration based on template.yaml, and file a merge request!
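For completeness, here is a sketch of what enabling a non-free repository might look like; the enabled_policies key name is quoted from memory, so double-check it against the config.yaml shipped by your extrepo version:

  # /etc/extrepo/config.yaml (excerpt)
  enabled_policies:
  - main
  - contrib
  - non-free

  # then, as usual:
  sudo extrepo enable vscode
  sudo apt update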

9 January 2021

Louis-Philippe Véronneau: puppetserver 6: a Debian packaging post-mortem

I have been a Puppet user for a couple of years now, first at work, and eventually for my personal servers and computers. Although it can have a steep learning curve, I find Puppet both nimble and very powerful. I also prefer it to Ansible for its speed and the agent-server model it uses. Sadly, Puppet Labs hasn't been the most supportive upstream and tends to move pretty fast. Major versions rarely last for a whole Debian Stable release and the upstream .deb packages are full of vendored libraries.1 Since 2017, Apollon Oikonomopoulos has been the one doing most of the work on Puppet in Debian. Sadly, he's had less time for that lately and with Puppet 5 being deprecated in January 2021, Thomas Goirand, Utkarsh Gupta and I have been trying to package Puppet 6 in Debian for the last 6 months. With Puppet 6, the old ruby Puppet server using Passenger is not supported anymore and has been replaced by puppetserver, written in Clojure and running on the JVM. That's quite a large change and although puppetserver does reuse some of the Clojure libraries puppetdb (already in Debian) uses, packaging it meant quite a lot of work.

Work in the Clojure team: As part of my efforts to package puppetserver, I had the pleasure to join the Clojure team and learn a lot about the Clojure ecosystem. As I mentioned earlier, a lot of the Clojure dependencies needed for puppetserver were already in the archive. Unfortunately, when Apollon Oikonomopoulos packaged them, the leiningen build tool hadn't been packaged yet. This meant I had to rebuild a lot of packages, on top of packaging some new ones. Since then, thanks to the efforts of Elana Hashman, leiningen has been packaged and lets us run the upstream testsuites and create .jar artifacts closer to those upstream releases. During my work on puppetserver, I worked on the following packages:
List of packages
  • backport9
  • bidi-clojure
  • clj-digest-clojure
  • clj-helper
  • clj-time-clojure
  • clj-yaml-clojure
  • cljx-clojure
  • core-async-clojure
  • core-cache-clojure
  • core-match-clojure
  • cpath-clojure
  • crypto-equality-clojure
  • crypto-random-clojure
  • data-csv-clojure
  • data-json-clojure
  • data-priority-map-clojure
  • java-classpath-clojure
  • jnr-constants
  • jnr-enxio
  • jruby
  • jruby-utils-clojure
  • kitchensink-clojure
  • lazymap-clojure
  • liberator-clojure
  • ordered-clojure
  • pathetic-clojure
  • potemkin-clojure
  • prismatic-plumbing-clojure
  • prismatic-schema-clojure
  • puppetlabs-http-client-clojure
  • puppetlabs-i18n-clojure
  • puppetlabs-ring-middleware-clojure
  • puppetserver
  • raynes-fs-clojure
  • riddley-clojure
  • ring-basic-authentication-clojure
  • ring-clojure
  • ring-codec-clojure
  • shell-utils-clojure
  • ssl-utils-clojure
  • test-check-clojure
  • tools-analyzer-clojure
  • tools-analyzer-jvm-clojure
  • tools-cli-clojure
  • tools-reader-clojure
  • trapperkeeper-authorization-clojure
  • trapperkeeper-clojure
  • trapperkeeper-filesystem-watcher-clojure
  • trapperkeeper-metrics-clojure
  • trapperkeeper-scheduler-clojure
  • trapperkeeper-webserver-jetty9-clojure
  • url-clojure
  • useful-clojure
  • watchtower-clojure
If you want to learn more about packaging Clojure libraries and applications, I rewrote the Debian Clojure packaging tutorial and added a section about the quirks of using leiningen without a dedicated dh_lein tool.

Work left to get puppetserver 6 in the archive: Unfortunately, I was not able to finish the puppetserver 6 packaging work. It is thus unlikely it will make it into Debian Bullseye. If the issues described below are fixed, it would be possible to package puppetserver in bullseye-backports though. So what's left?

jruby: Although I tried my best (kudos to Utkarsh Gupta and Thomas Goirand for the help), jruby in Debian is still broken. It does build properly, but the testsuite fails with multiple errors: jruby testsuite failures aside, I have not been able to use the jruby.deb the package currently builds in jruby-utils-clojure (testsuite failure). I had the same exact failure with the (more broken) jruby version that is currently in the archive, which leads me to think this is a LOAD_PATH issue in jruby-utils-clojure. More on that below. To try to bypass these issues, I tried to vendor jruby into jruby-utils-clojure. At first I understood vendoring meant including upstream pre-built artifacts (jruby-complete.jar) and shipping them directly. After talking with people on the #debian-mentors and #debian-ftp IRC channels, I now understand why this isn't a good idea (and why it's not permitted in Debian). Many thanks to the people who were patient and kind enough to discuss this with me and give me alternatives. As far as I now understand it, vendoring in Debian means "to have an embedded copy of the source code in another package". Code shipped that way still needs to be built from source. This means we need to build jruby ourselves, one way or another. Vendoring jruby in another package thus isn't terribly helpful. If fixing jruby the proper way isn't possible, I would suggest trying to build the package using embedded code copies of the external libraries jruby needs to build, instead of trying to use the Debian libraries.2 This should make it easier to replicate what upstream does and to have a final .jar that can be used.

jruby-utils-clojure: This package is a first-level dependency for puppetserver and is the glue between jruby and puppetserver. It builds fine, but the testsuite fails when using the Debian jruby package. I think the problem is caused by a jruby LOAD_PATH issue. The Debian jruby package plays with the LOAD_PATH a little to try to use Debian packages instead of downloading gems from the web, as upstream jruby does. This seems to clash with the gem-home, gem-path, and jruby-load-path variables in the jruby-utils-clojure package. The testsuite plays around with these variables and some Ruby libraries can't be found. I tried to fix this, but failed. Using the upstream jruby-complete.jar instead of the Debian jruby package, the testsuite passes fine. This package could clearly be uploaded to NEW right now by ignoring the testsuite failures (we're just packaging static .clj source files in the proper location in a .jar).

puppetserver: jruby issues aside, packaging puppetserver itself is 80% done. Using the upstream jruby-complete.jar artifact, the testsuite fails with a weird Clojure error I'm not sure I understand, but I haven't debugged it for very long. Upstream uses git submodules to vendor puppet (agent), hiera (3), facter and puppet-resource-api for the testsuite to run properly.
I haven't touched that, but I believe we can either: Without the testsuite actually running, it's hard to know what files are needed in those packages.

What now: Puppet 5 is now deprecated. If you or your organisation cares about Puppet in Debian,3 puppetserver really isn't far away from making it into the archive. Very talented Debian Developers are always eager to work on these issues and can be contracted for very reasonable rates. If you're interested in contracting someone to help iron out the last issues, don't hesitate to reach out via one of the following: As for me, I'm happy to say I got a new contract and will go back to teaching Economics for the Winter 2021 session. I might help out with some general Debian packaging work from time to time, but it'll be as a hobby instead of a job.

Thanks: The work I did during the last 6 weeks would not have been possible without the support of the Wikimedia Foundation, who were gracious enough to contract me. My particular thanks to Faidon Liambotis, Moritz Mühlenhoff and John Bond. Many, many thanks to Rob Browning, Thomas Goirand, Elana Hashman, Utkarsh Gupta and Apollon Oikonomopoulos for their direct and indirect help, without which all of this wouldn't have been possible.

  1. For example, the upstream package for the Puppet Agent vendors OpenSSL.
  2. One of the problems of using Ruby libraries already packaged in Debian is that jruby currently only supports Ruby 2.5. Ruby libraries in Debian are currently expected to work with Ruby 2.7, with the transition to Ruby 3.0 planned after the Bullseye release.
  3. If you run Puppet, you clearly should care: the .deb packages upstream publishes really aren't great and I would not recommend using them.

14 October 2020

Thomas Goirand: The Gnocchi package in Debian

This is a follow-up to Russell's blog post, as seen here: https://etbe.coker.com.au/2020/10/13/first-try-gnocchi-statsd/. There's a bunch of things he wrote which I unfortunately must say are inaccurate, and sometimes even completely wrong. It is my point of view that none of the reported bugs are helpful for anyone who understands Gnocchi and how to set it up. It was however a terrible experience that Russell had, and I do understand why (and why it's not his fault). I'm very much open to discussing how to fix this at the packaging level, though some things aren't IMO fixable. Here are the details.

1/ The daemon startups: First of all, the most surprising thing is that Russell claimed there are no startup scripts for the Gnocchi daemons. In fact, they all come with both systemd and sysv-rc support:

# ls /lib/systemd/system/gnocchi-api.service
/lib/systemd/system/gnocchi-api.service
# /etc/init.d/gnocchi-api
/etc/init.d/gnocchi-api

Russell then tried to start gnocchi-api without the options that are set in the Debian scripts, and not surprisingly, this failed. Russell attempted to do what was in the upstream doc, which isn't adapted to what we have in Debian (the upstream doc is probably completely outdated, as Gnocchi is unfortunately not very well maintained upstream). Bug #972087 is therefore, IMO, not valid.

2/ The database setup: By default, for all things OpenStack in Debian, there are some debconf helpers using dbconfig-common to help users set up databases for their services. This is clearly for beginners, but that doesn't prevent one from trying to understand what one is doing. More specifically for Gnocchi, there are 2 databases: one for Gnocchi itself, and one for the indexer, which does not necessarily use the same backend. The Debian package already sets up one database, but one has to do it manually for the indexer. I'm sorry this isn't documented well enough. Now, while some packages support SQLite as a backend (since most things in OpenStack use SQLAlchemy), it looks like Gnocchi doesn't right now. This is IMO a bug upstream, rather than a bug in the package. However, I don't think the Debian packages are to blame here, as they simply offer a unified interface, and it's up to the users to know what they are doing. SQLite is not a production-ready backend anyway. I'm not sure if I should close #971996 without any action, or just try to disable the SQLite backend option of this package because it may be confusing.

3/ The metrics UUID: Russell thinks the UUID should be set by default. This is probably right in a single-server setup; however, it wouldn't work when setting up a cluster, which is probably what most Gnocchi users will do. In that type of environment, the metrics UUID must be the same on the 3 servers, so setting a random (and therefore different) UUID on each server wouldn't work. So I'm also tempted to just close #972092 without any action on my side.

4/ The coordination URL: Since Gnocchi is supposed to be set up with more than one server, as an HA setup is very common in OpenStack, a backend for the coordination (ie: sharing the workload) must be set. This is done by setting a URL that tooz understands. The best coordinator being Zookeeper, something like this should be set by hand:

coordination_url=zookeeper://192.168.101.2:2181/

Here again, I don't think the Debian package is to blame for not providing the automation. I would however accept contributions to fix this and provide the choice using debconf; users would still need to understand what's going on and set up something like Zookeeper (or redis, memcache, or any other backend supported by tooz) to act as coordinator.

5/ The Debconf interface cannot replace good documentation, and there's not much I can do at the package maintainer level for this.

Russell, I'm really sorry for the bad user experience you had with Gnocchi. Now that you know a little bit more about it, maybe you can have another go? Sure, the OpenStack telemetry system isn't an easy beast to understand, but it's IMO worth trying. And the recent versions can scale horizontally.
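To make the above a little more concrete, here is a hypothetical gnocchi.conf fragment combining the indexer database and the coordination URL; the section placement of coordination_url is an assumption from memory, so verify it against the gnocchi.conf shipped with the package:

  [DEFAULT]
  # tooz coordination backend shared by all Gnocchi servers in the cluster
  coordination_url = zookeeper://192.168.101.2:2181/

  [indexer]
  # the second, manually-created database mentioned above
  url = mysql+pymysql://gnocchi:password@dbhost/gnocchi_indexer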

16 July 2020

Louis-Philippe Véronneau: DebConf Videoteam Sprint Report -- DebConf20@Home

DebConf20 starts in about 5 weeks, and as always, the DebConf Videoteam is working hard to make sure it'll be a success. As such, we held a sprint from July 9th to 13th to work on our new infrastructure. A remote sprint certainly ain't as fun as an in-person one, but we nonetheless managed to enjoy ourselves. Many thanks to those who participated, namely: We also wish to extend our thanks to Thomas Goirand and Infomaniak for providing us with virtual machines to experiment on and host the video infrastructure for DebConf20. Advice for presenters For DebConf20, we strongly encourage presenters to record their talks in advance and send us the resulting video. We understand this is more work, but we think it'll make for a more agreeable conference for everyone. Video conferencing is still pretty wonky and there is nothing worse than a talk ruined by a flaky internet connection or hardware failures. As such, if you are giving a talk at DebConf this year, we are asking you to read and follow our guide on how to record your presentation. Fear not: we are not getting rid of the Q&A period at the end of talks. Attendees will ask their questions either on IRC or on a collaborative pad and the Talkmeister will relay them to the speaker once the pre-recorded video has finished playing. New infrastructure, who dis? Organising a virtual DebConf implies migrating from our battle-tested on-premise workflow to a completely new remote one. One of the major changes this means for us is the addition of Jitsi Meet to our infrastructure. We normally have 3 different video sources in a room: two cameras and a slides grabber. With the new online workflow, directors will be able to play pre-recorded videos as a source, will get a feed from a Jitsi room and will see the audience questions as a third source. This might seem simple at first, but is in fact a very major change to our workflow and required a lot of work to implement.
              == On-premise ==                               == Online ==

               Camera 1                                        Jitsi
                  |                                              |
                  v             +--> Frontend                    v              +--> Frontend
                  |             |                                |              |
    Slides --> Voctomix --> Backend --> Frontend   Questions --> Voctomix --> Backend --> Frontend
                  ^             |                                ^              |
                  |             +--> Frontend                    |              +--> Frontend
               Camera 2                                  Pre-recorded video
In our tests, playing back pre-recorded videos to voctomix worked well, but was sometimes unreliable due to inconsistent encoding settings. Presenters will thus upload their pre-recorded talks to SReview so we can make sure there aren't any obvious errors. Videos will then be re-encoded to ensure a consistent encoding and to normalise audio levels. This process will also let us stitch the Q&As at the end of the pre-recorded videos more easily prior to publication. Reducing the stream latency One of the pitfalls of the streaming infrastructure we have been using since 2016 is high video latency. In a worst case scenario, remote attendees could get up to 45 seconds of latency, making participation in events like BoFs arduous. In preparation for DebConf20, we added a new way to stream our talks: RTMP. Attendees will thus have the option of using either an HLS stream with higher latency or an RTMP stream with lower latency. Here is a comparative table that can help you decide between the two protocols:
HLS
  Pros:
  • Can be watched from a browser
  • Auto-selects a stream encoding
  • Single URL to remember
  Cons:
  • Higher latency (up to 45s)
RTMP
  Pros:
  • Lower latency (~5s)
  Cons:
  • Requires a dedicated video player (VLC, mpv)
  • Specific URLs for each encoding setting
Live mixing from home with VoctoWeb Since DebConf16, we have been using voctomix, a live video mixer developed by the CCC VOC. voctomix is conveniently divided in two: voctocore is the backend server, while voctogui is a GTK+ UI frontend directors can use to live-mix. Although voctogui can connect to a remote server, it was primarily designed to run either on the same machine as voctocore or on the same LAN. Trying to use voctogui from a machine at home to connect to a voctocore running in a datacenter proved unreliable, especially for high-latency and low-bandwidth connections. Inspired by the setup FOSDEM uses, we instead decided to go with a web frontend for voctocore. We initially used FOSDEM's code as a proof of concept, but quickly reimplemented it in Python, a language we are more familiar with as a team. Compared to the FOSDEM PHP implementation, voctoweb implements A / B source selection (akin to voctogui) as well as audio control, two very useful features. In the following screen captures, you can see the old PHP UI on the left and the new shiny Python one on the right. [Screen captures: the old PHP voctoweb / the new Python3 voctoweb] Voctoweb is still under development and is likely to change quite a bit until DebConf20. Still, the current version seems to work well enough to be used in production if you ever need to.

Python GeoIP redirector: We run multiple geographically-distributed streaming frontend servers to minimize the load on our streaming backend and to reduce overall latency. Although users can connect to the frontends directly, we typically point them to live.debconf.org and redirect connections to the nearest server. Sadly, 6 months ago MaxMind decided to change the licence on their GeoLite2 database and left us scrambling. To fix this annoying issue, Stefano Rivera wrote a Python program that uses the new database and reworked our ansible frontend server role. Since the new database cannot be redistributed freely, you'll have to get a (free) license key from MaxMind if you want to use this role.

Ansible & CI improvements: Infrastructure as code is a living process and needs constant care to fix bugs, follow changes in the DSL and implement new features. All that to say, a large part of the sprint was spent making our ansible roles and continuous integration setup more reliable, less buggy and more featureful. All in all, we merged 26 separate ansible-related merge requests during the sprint! As always, if you are good with ansible and wish to help, we accept merge requests on our ansible repository :)

29 May 2020

Thomas Goirand: A quick look into Storcli packaging horror

Megacli is to be replaced by Storcli, both being proprietary tools for configuring RAID cards from LSI. So I went to download what's provided by Lenovo, available here:
https://support.lenovo.com/fr/en/downloads/ds041827 It's very annoying, because they force users to download a .zip file containing a deb file, instead of providing a Debian repository. Well, OK, at least there's a deb file there. Let's have a look at it with my favorite tool before installing (ie: let's run Lintian).
Then it's a horror story. Not only is there obviously wrong packaging, like the package providing stuff in /opt, everything being statically linked with embedded copies of libm and ncurses, and the package being marked arch: all instead of arch: amd64 (in fact, the package contains files for both the i386 and amd64 architectures), but there are also some really wrong things going on:
E: storcli: arch-independent-package-contains-binary-or-object opt/MegaRAID/storcli/storcli
E: storcli: embedded-library opt/MegaRAID/storcli/storcli: libm
E: storcli: embedded-library opt/MegaRAID/storcli/storcli: ncurses
E: storcli: statically-linked-binary opt/MegaRAID/storcli/storcli
E: storcli: arch-independent-package-contains-binary-or-object opt/MegaRAID/storcli/storcli64
E: storcli: embedded-library opt/MegaRAID/storcli/storcli64: libm
E: storcli: embedded-library use no-tag-display-limit to see all (or pipe to a file/program)
E: storcli: statically-linked-binary opt/MegaRAID/storcli/storcli64
E: storcli: changelog-file-missing-in-native-package
E: storcli: control-file-has-bad-permissions postinst 0775 != 0755
E: storcli: control-file-has-bad-owner postinst asif/asif != root/root
E: storcli: control-file-has-bad-permissions preinst 0775 != 0755
E: storcli: control-file-has-bad-owner preinst asif/asif != root/root
E: storcli: no-copyright-file
E: storcli: extended-description-is-empty
W: storcli: essential-no-not-needed
W: storcli: unknown-section storcli
E: storcli: depends-on-essential-package-without-using-version depends: bash
E: storcli: wrong-file-owner-uid-or-gid opt/ 1000/1000
W: storcli: non-standard-dir-perm opt/ 0775 != 0755
E: storcli: wrong-file-owner-uid-or-gid opt/MegaRAID/ 1000/1000
E: storcli: dir-or-file-in-opt opt/MegaRAID/
W: storcli: non-standard-dir-perm opt/MegaRAID/ 0775 != 0755
E: storcli: wrong-file-owner-uid-or-gid opt/MegaRAID/storcli/ 1000/1000
E: storcli: dir-or-file-in-opt opt/MegaRAID/storcli/
W: storcli: non-standard-dir-perm opt/MegaRAID/storcli/ 0775 != 0755
E: storcli: wrong-file-owner-uid-or-gid use --no-tag-display-limit to see all (or pipe to a file/program)
E: storcli: dir-or-file-in-opt opt/MegaRAID/storcli/storcli
E: storcli: dir-or-file-in-opt use --no-tag-display-limit to see all (or pipe to a file/program)
Some of the above are grave security problems, like the wrong Unix modes on directories, and even the preinst script being owned by a non-root user.
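None of this is Debian tooling, but to illustrate the kind of pre-installation check that catches the ownership and permission problems listed above, here is a small, hypothetical Python sketch: it lists the contents of a .deb with dpkg-deb -c and flags anything not owned by root or writable by group/other (the example file name is made up):

import subprocess
import sys

def audit_deb(path):
    # dpkg-deb -c prints a tar-style listing: mode, owner/group, size, date, time, name.
    listing = subprocess.run(["dpkg-deb", "-c", path],
                             capture_output=True, text=True, check=True).stdout
    problems = []
    for line in listing.splitlines():
        fields = line.split()
        if len(fields) < 6:
            continue
        mode, owner, name = fields[0], fields[1], fields[5]
        if owner != "root/root":
            problems.append(f"{name}: owned by {owner}, not root/root")
        if mode[5] == "w" or mode[8] == "w":  # group- or world-writable
            problems.append(f"{name}: writable by group/other ({mode})")
    return problems

if __name__ == "__main__":
    # e.g.: python3 audit_deb.py storcli.deb
    for issue in audit_deb(sys.argv[1]):
        print(issue)

The storcli package above would trip both checks, on top of everything Lintian already reports.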
I always wonder why this type of tool needs to be proprietary. They clearly don't know how to get packaging right, so they'd better just provide the source code, and let us (the Debian community) do the work for them. I don't think there's any secret they are keeping by hiding how to configure the cards, so it's not in the vendor's interest to keep everything closed. Or maybe they are just hiding really bad code in there, that they are ashamed to share? In any case, they'd be better off providing no package at all rather than this pile of dirt (and I'm trying to stay polite here).

19 October 2016

Reproducible builds folks: Reproducible Builds: week 77 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday October 9 and Saturday October 15 2016: Media coverage Documentation update After discussions with HW42, Steven Chamberlain, Vagrant Cascadian, Daniel Shahaf, Christopher Berg, Daniel Kahn Gillmor and others, Ximin Luo has started writing up more concrete and detailed design plans for setting SOURCE_ROOT_DIR for reproducible debugging symbols, buildinfo security semantics and buildinfo security infrastructure. Toolchain development and fixes Dmitry Shachnev noted that our patch for #831779 has been temporarily rejected by docutils upstream; we are trying to persuade them again. Tony Mancill uploaded javatools/0.59 to unstable containing an original patch by Chris Lamb. This fixed an issue where documentation Recommends: substvars would not be reproducible. Ximin Luo filed bug 77985 against GCC as a prerequisite for future patches to make debugging symbols reproducible. Packages reviewed and fixed, and bugs filed The following updated packages have become reproducible - in our current test setup - after being fixed: The following updated packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.) Some uploads have addressed some reproducibility issues, but not all of them: Some uploads have addressed nearly all reproducibility issues, except for build path issues: Patches submitted that have not made their way to the archive yet: Reviews of unreproducible packages 101 package reviews have been added, 49 have been updated and 4 have been removed this week, adding to our knowledge about identified issues. 3 issue types have been updated: Weekly QA work During reproducibility testing, some FTBFS bugs have been detected and reported by: tests.reproducible-builds.org Debian: Openwrt/LEDE/NetBSD/coreboot/Fedora/archlinux: Misc. We are running a poll to find a good time for an IRC meeting. This week's edition was written by Ximin Luo, Holger Levsen & Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC.

16 October 2016

Thomas Goirand: Released OpenStack Newton, Moving OpenStack packages to upstream Gerrit CI/CD

OpenStack Newton is released, and uploaded to Sid OpenStack Newton was released on Thursday the 6th of October. I was able to upload nearly all of it before the week-end, though there were still a few hiccups: I forgot to upload python-fixtures 3.0.0 to unstable, and only realized it thanks to some bug reports. As this is a build-time dependency, it didn't disrupt Sid users too much, but 38 packages wouldn't build without it. Thanks to Santiago Vila for pointing at the issue here. As of writing, a lot of the Newton packages haven't migrated to Testing yet. It's been migrating in a very messy way. I'd love to improve this process, but I'm not sure how, short of filing RC bugs against 250 packages (which would be painful to do) so they would migrate at once. Suggestions welcome. Bye bye Jenkins For a few years, I was using Jenkins, together with a post-receive hook, to build Debian Stable backports of OpenStack packages. Then, nearly a year and a half ago, we started a project to build the packages within the OpenStack infrastructure, and to use CI/CD the way OpenStack upstream does. This is now done, and Jenkins is gone, as of OpenStack Newton. Current status As of August, almost all of the packaging Git repositories were uploaded to OpenStack Gerrit, and the builds now happen in the OpenStack infrastructure. We've been able to build all of the OpenStack Newton Debian packages using this system. The resulting non-official jessie backports repository has also been validated using Tempest. Goodies from Gerrit and upstream CI/CD It is very nice to have it built this way: we will be able to maintain a full CI/CD in upstream infrastructure using Newton for the life of Stretch, which means we will have the tools to test security patches virtually forever. Another thing is that now, anyone can propose packaging patches without the need for an Alioth account, by sending a patch for review through Gerrit. It is our hope that this will increase the likelihood of external contributions, for example from 3rd-party plugin vendors (networking driver vendors, for example), or from upstream contributors themselves. They are already used to Gerrit, and they all expected the packaging to work this way. They are all very much welcome. The upstream infra: nodepool, zuul and friends
The OpenStack infrastructure has already been described on planet.debian.org by Ian Wienand, so I won't describe it again; he did a better job than I ever would. How it works All source packages are stored in Gerrit with the deb- prefix. This is in order to avoid conflicts with upstream code, and to easily locate packaging repositories. For example, you'll find the Nova packaging under https://git.openstack.org/cgit/openstack/deb-nova. Two Debian repositories are stored in the infrastructure AFS (Andrew File System, which means a copy of these repositories exists on each cloud where we have compute resources): one for the actual deb-* builds, under jessie-newton, and one for the automatic backports, maintained in the deb-auto-backports Gerrit repository. We're using a git-tag-based workflow. Every Gerrit repository contains all of the upstream branches, plus a debian/newton branch, which contains the same content as a tag of upstream, plus the debian folder. The orig tarball is generated using git archive, then used by sbuild to produce binaries. To package a new upstream release, one simply needs to git merge -X theirs FOO (where FOO is the tag you want to merge), then edit debian/changelog so that the Debian package version matches the tag, then do git commit -a --amend, and simply git review. At this point, the OpenStack CI will build the package. If it builds correctly, then a core reviewer can approve the merge commit, the patch is merged, and the package is built and the binary package published on the OpenStack Debian package repository. Maintaining backports automatically The automatic backports are maintained through a Gerrit repository called deb-auto-backports containing a packages-list file that simply lists the source packages we need to backport. On each new CR (change request) in Gerrit, thanks to some madison-lite and dpkg --compare-versions magic, the packages-list is used to compare what's in the Debian archive with what we have in the jessie-newton-backports repository. If the version is lower in our repository, or if the package doesn't exist, then a build is triggered. There is the possibility to backport from any Debian release (using the -d flag in the packages-list file), and we can even use jessie-backports to just rebuild the package. I also had to write a hack to just download from jessie-backports without rebuilding, because rebuilding the webkit2gtk package (needed by sphinx) was taking too many resources (though we'll try to never use it, and rebuild packages when possible). The nice thing with this system is that we don't need to care much about keeping packages up to date: the script does that for us. Upstream Debian repositories are NOT for production The produced package repositories are there because we have interconnected build dependencies, needed to run unit tests at build time. It is the only reason why such Debian repositories exist. They are not for production use. If you wish to deploy OpenStack, we very much recommend using packages from distributions (like Debian or Ubuntu). Indeed, the infrastructure Debian repositories are updated multiple times daily. As a result, it is very likely that you will experience failures to download (hash or file size mismatches and such). Also, the functional tests aren't yet wired into the CI/CD in OpenStack infra, and therefore we cannot yet guarantee that the packages are usable. Improving the build infrastructure There's a bunch of things we could do to improve the build process. Let me give a list of things we want to do.
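This is not the team's actual tooling, only a minimal sketch of the comparison step described under "Maintaining backports automatically" above, assuming the archive version and our repository version have already been looked up (for example via madison-lite); dpkg --compare-versions does the Debian version ordering and exits 0 when the stated relation holds:

import subprocess

def deb_version_lt(a, b):
    # True if Debian version a sorts strictly lower than b.
    return subprocess.run(["dpkg", "--compare-versions", a, "lt", b]).returncode == 0

def needs_backport(ours, archive):
    # ours/archive are version strings, or None when the package is absent.
    if archive is None:
        return False   # nothing to backport from
    if ours is None:
        return True    # not in our repository yet, so build it
    return deb_version_lt(ours, archive)

# Example: our repository has 1:2.0.0-1 while the archive carries 1:2.0.1-1,
# so a rebuild would be triggered.
print(needs_backport("1:2.0.0-1", "1:2.0.1-1"))

The real script then hands the packages that need rebuilding over to the CI jobs.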
Generalizing to Debian During DebConf 16, I had very interesting talks with the DSA (Debian System Administrators) about deploying such a CI/CD for the whole of the Debian archive, interfacing Gerrit with something like dgit and a build CI. I was told that I should provide a proof of concept first, which I very much agreed with. Such a PoC is there now, within the OpenStack infra. I very much welcome any Debian contributor to try it, through a packaging patch. If you wish to do so, you should read how to contribute to OpenStack here: https://wiki.openstack.org/wiki/How_To_Contribute#If_you.27re_a_developer and then simply send your patch with git review. This system, however, currently only fits the git-tag-based packaging workflow. We'd have to do a little bit more work to make it possible to use pristine-tar (basically, allow pushing to the upstream and pristine-tar branches without any CI job connected to the push). Dear DSA team, now that we have a nice PoC that is working well, on which the OpenStack PKG team is maintaining hundreds of packages, shall we try to generalize it and provide such an infrastructure for every packaging team and DD?

15 June 2016

Reproducible builds folks: Reproducible builds: week 59 in Stretch cycle

What happened in the Reproducible Builds effort between June 5th and June 11th 2016: Media coverage Ed Maste gave a talk at BSDCan 2016 on reproducible builds (slides, video). GSoC and Outreachy updates Weekly reports by our participants: Documentation update - Ximin Luo proposed a modification to our SOURCE_DATE_EPOCH spec explaining FORCE_SOURCE_DATE. Some upstream build tools (e.g. TeX, see below) have expressed a desire to control which cases of embedded timestamps should obey SOURCE_DATE_EPOCH. They were not convinced by our arguments on why this is a bad idea, so we agreed on an environment variable FORCE_SOURCE_DATE for them to implement their desired behaviour - named generically, so that at least we can set it centrally. For more details, see the text just linked. However, we strongly urge most build tools not to use this, and instead obey SOURCE_DATE_EPOCH unconditionally in all cases. Toolchain fixes Packages fixed The following 16 packages have become reproducible due to changes in their build-dependencies: apertium-dan-nor apertium-swe-nor asterisk-prompt-fr-armelle blktrace canl-c code-saturne coinor-symphony dsc-statistics frobby libphp-jpgraph paje.app proxycheck pybit spip tircd xbs The following 5 packages are new in Debian and appear to be reproducible so far: golang-github-bowery-prompt golang-github-pkg-errors golang-gopkg-dancannon-gorethink.v2 libtask-kensho-perl sspace The following packages had older versions which were reproducible, and their latest versions are now reproducible again after being fixed: The following packages have become reproducible after being fixed: Some uploads have fixed some reproducibility issues, but not all of them: Patches submitted that have not made their way to the archive yet: Package reviews 68 reviews have been added, 19 have been updated and 28 have been removed this week. New and updated issues: 26 FTBFS bugs have been reported by Chris Lamb, 1 by Santiago Vila and 1 by Sascha Steinbiss. diffoscope development strip-nondeterminism development disorderfs development tests.reproducible-builds.org Misc. Steven Chamberlain submitted a patch to FreeBSD's makefs to allow reproducible builds of the kfreebsd installer. Ed Maste committed a patch to FreeBSD's binutils to enable deterministic archives by default in GNU ar. Helmut Grohne experimented with cross+native reproductions of dash with some success, using rebootstrap. This week's edition was written by Ximin Luo, Chris Lamb, Holger Levsen, Mattia Rizzolo and reviewed by a bunch of Reproducible builds folks on IRC.
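To make the SOURCE_DATE_EPOCH / FORCE_SOURCE_DATE discussion above concrete, here is a minimal, hypothetical sketch (not taken from any of the tools mentioned) of how a build tool can pick the timestamp it embeds:

import os
import time

def build_timestamp():
    # Honour SOURCE_DATE_EPOCH (seconds since the epoch, interpreted as UTC)
    # so that rebuilding the same source embeds the same timestamp.
    sde = os.environ.get("SOURCE_DATE_EPOCH")
    if sde is not None:
        return time.gmtime(int(sde))
    return time.gmtime()  # fallback: current time, not reproducible

def opt_in_timestamp():
    # The behaviour agreed on with the tools mentioned above: only honour
    # SOURCE_DATE_EPOCH when FORCE_SOURCE_DATE=1 is also set. Most tools
    # should not need this and should obey SOURCE_DATE_EPOCH unconditionally.
    if os.environ.get("FORCE_SOURCE_DATE") == "1":
        return build_timestamp()
    return time.gmtime()

print(time.strftime("%Y-%m-%d %H:%M:%S", build_timestamp()))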

18 April 2016

Reproducible builds folks: Reproducible builds: week 50 in Stretch cycle

What happened in the reproducible builds effort between April 3rd and April 9th 2016: Media coverage Emily Ratliff wrote an article for SecurityWeek called Establishing Correspondence Between an Application and its Source Code - How Combining Two Completely Separate Open Source Projects Can Make Us All More Secure. Tails have started work on a design for freezable APT repositories to make it easier and more practical to perform reproductions of an entire distribution at a given point in time, which will be needed to create reproducible installation- or live-media. Toolchain fixes Alexis Bienvenüe submitted patches adding support for SOURCE_DATE_EPOCH in several tools: transfig, imagemagick, rdtool, and asciidoctor. boyska submitted one for python-reportlab. Packages fixed The following packages have become reproducible due to changes in their build dependencies: atinject-jsr330 brailleutils cglib3 gnugo libcobra-java libgnumail-java libjchart2d-java libjcommon-java libjfreechart-java libjide-oss-java liblaf-widget-java liblastfm-java liboptions-java octave-control octave-mpi octave-nan octave-parallel octave-stk octave-struct octave-tsa oar The following packages became reproducible after getting fixed: Several uploads fixed some reproducibility issues, but not all of them: Patches submitted which have not made their way to the archive yet: Other upstream fixes Alexander Batischev made a commit to make newsbeuter reproducible. tests.reproducible-builds.org Package reviews 93 reviews have been removed, 66 added and 21 updated in the previous week. 12 new FTBFS bugs have been reported by Chris Lamb and Niko Tyni. Misc. This week's edition was written by Lunar, Holger Levsen, Reiner Herrmann, Mattia Rizzolo and Ximin Luo. With the departure of Lunar as a full-time contributor, Reproducible Builds Weekly News (this thing you're reading) has moved from his personal Debian blog on Debian People to the Reproducible Builds team web site on Debian Alioth. You may want to update your RSS or Atom feeds. Very many thanks to Lunar for writing and publishing this weekly news for so long, well & continuously!

11 April 2016

Thomas Goirand: Announcing validated Debian packages for Mitaka

Greetings! This is a copy (4 days late) of the announcement I made on openstack-dev@lists.openstack.org on the 8th of April 2016. I am overjoyed, thrilled and delighted to announce the release of the Debian packages for Mitaka. All of the DefCore packages were validated successfully this morning through our package-only-based Tempest CI. Content of this release
This release includes the following 23 services:
aodh 2.0.0
barbican 2.0.0
ceilometer 6.0.0
cinder 8.0.0
congress 3.0.0+dfsg1
designate 2.0.0
glance 12.0.0
gnocchi 2.0.2
heat 6.0.0
horizon 9.0.0
ironic 5.1.0
keystone 9.0.0
magnum 2.0.0
manila 2.0.0
mistral 2.0.0
murano 2.0.0
neutron 8.0.0
nova 13.0.0
trove 5.0.0
sahara 4.0.0
senlin 1.0.0
swift 2.7.0
zaqar 2.0.0 Where to find these packages
1/ Sid
All of Mitaka was uploaded to Debian Sid this week; you can install it directly from Sid. 2/ Official jessie-backports
As soon as everything migrates to Debian Testing (currently aka: Stretch), in 5 days if no RC bug is reported, it will be possible to upload all of Mitaka to the Debian official jessie-backports. 3/ Non-official Jessie and Trusty backports
In the meantime, the packages are available through the Mirantis Jenkins automatic Debian Jessie backport repository. The full sources.list is available here: http://mitaka-jessie.pkgs.mirantis.com/ You can use the Trusty backports as well: http://mitaka-trusty.pkgs.mirantis.com/ To use these repositories, simply add the described sources.list to (for example) /etc/apt/sources.list.d/openstack.list, and run apt-get update. If you want to install the GPG key of the repositories, you can either install the mitaka-jessie-archive-keyring or mitaka-trusty-archive-keyring package (depending on your distribution of choice), or alternatively apt-key add the public key available at /debian/dists/pukey.gpg in these repositories. As a reminder, the URLs above contain the word Mirantis only because the service is sponsored by my employer. These repositories are straight backports of what is available in Debian Sid, without any modification. Remember that the packages listed below are maintained separately in Debian and Ubuntu, and therefore the packages are different in these distributions:
aodh, barbican, ceilometer, cinder, designate, glance, heat, horizon, ironic, keystone, manila, neutron, nova, trove, swift. All other packages (including all OpenStack libraries like Oslo and the python-*client libraries) are maintained in Debian, with contributions from Canonical, and then synced to Ubuntu, so they are the exact same packages (or at least, with minimal differences). I hope we can further improve collaboration between Debian and Canonical during the Newton cycle. Bug reporting
As always, bug reports are welcome, and considered high-value contributions. Please follow the instructions available at https://www.debian.org/Bugs/Reporting to report bugs to the Debian BTS. Moving forward with higher QA and the Packaging-deb project in Newton
Currently, DefCore packages are tested through a package-only (ie: no puppet, chef, you-name-it system management involved) Tempest CI. Results can be seen at:
https://mitaka-jessie.pkgs.mirantis.com/job/openstack-tempest-ci/ Not all packages are included in this CI yet, though. It is my intention, during the Newton cycle, to also include services like Designate, Trove, Barbican and Congress in this CI. The individual upstream teams for these services are more than welcome to approach us to make this happen quicker. Also, as we're slowly starting to get the Packaging-Deb project going (ie: packaging using the upstream OpenStack Gerrit and gating), it is also in the pipeline to use the above-mentioned Tempest CI system as a gate for the packaging. Hopefully, this will lead us to a full CI/CD working from trunk. We also hope to be able to use these packages to help the Puppet team test packaged OpenStack from trunk. Greetings
On each release, I ask myself who I should thank. This time, I would like to thank everyone, because this release was overall very nice and went well. The whole OpenStack community is always very helpful and understands the requirements of downstream distributions. Guys, you're awesome, I love my work, and I love working with you all! Cheers,

3 February 2016

Thomas Goirand: Moby

Just a quick reply to Rhonda about Moby. You can't introduce him without mentioning Go, the track that made him famous very early in the age of electronic music (November 1990, according to Wikipedia). Many have attempted to remix this song (including Moby himself), but nothing is as good as the original version.
