Search Results: "Neil Williams"

2 May 2023

Neil Williams: Carrying Grief

This isn't a book review, although the reason I am typing this now is a book: You Are Not Alone by Cariad Lloyd, creator and host of Griefcast (ISBN 978-1526621870). I include a handful of quotes from Cariad where there is really no better way of describing things.

Many people experience death for the first time as a child, often in relation to a family pet. Death is universal, but every experience of death is unique. One of the myths of grief is the idea of the Five Stages, but this is a misinterpretation. Denial, Anger, Bargaining, Depression and Acceptance make up the five stage model of death and have nothing to do with grief. The five stages were developed from studying those who are terminally ill, the dying, not those who then grieve for the dead person and have to go on living without them. Grief is for those who loved the person who has died, and it varies between each of those people just as people vary in how they love someone. The Five Stages end at the moment of death; grief is what comes next, and most people do not grieve in stages, it can be more like a tangled knot. Death has a date and time, which is why the last stage of the model is Acceptance. Grief has no timetable; those who grieve will carry that grief for the rest of their lives. Death starts the process of grief in those who go on living just as it ends the life of the person who is loved. "Grief eases and changes and returns but it never disappears."

I suspect many will have already stopped reading by this point. People do not talk about death and grief enough, and this only adds to the burden of those who carry their grief. It can be of enormous comfort to those who have carried grief for some time to talk directly about the dead, not in vague pleasantries but with specific and strong memories. Find a safe place without distractions and talk with the person grieving face to face. Name the dead person. Go to places with strong memories and be there alongside them.
Talk about the times with that person before their death. Early on, everything about grief is painful and sad. It does ease, but it remains unpredictable. Closing it away in a box inside your head (as I did at one point) is like cutting off a damaged limb but keeping the pain in a box on the shelf. You still miss the limb and, eventually, the box starts leaking.

For me, there were family pets which died, but my first job out of university was in hospitals, helping the nurses manage medication regimens and providing specialist advice as a pharmacist. It does not take long in that environment before everyone on the ward gets direct experience of the death of a person. In some ways, this helped me to separate the process of death from the process of grief. I cared for these people as patients, but they were not my loved ones. Later, I worked in specialist terminal care units, including providing potential treatments as part of clinical trials. Here, no patient was expected to be discharged alive. The more aggressive chemotherapies had already been tried and had failed; this was about pain relief, symptom management and helping the loved ones. Palliative care is not just about the patient: it involves helping the loved ones to accept what is happening, as this provides comfort to the patient by closing the loop.

Grief is stressful. One of the most common causes of personal stress is bereavement. The death of your loved one is outside of your control; it has happened, and no amount of regret can change that. Then come all the other stresses, maybe about money, or having somewhere to live as a result of what else has changed after the death, or having to care for other loved ones.

In the early stages, the first two years, I found it helpful to imagine my life as a box containing a ball and a button. The button triggers new waves of pain and loss each time it is hit. The ball bounces around the box and hits the button at random.
Initially, the button is large and the ball is enormous, so the button is hit almost constantly. Over time, both the button and the ball change size. Starting off at maximum, there is initially only one direction in which they can change. There are two problems with this analogy. First, the grief ball has infinite energy, which does not happen in reality: the ball may get smaller and the button harder to hit, but the ball will continue bouncing. Second, the life box is not a predictable shape, so the pattern of movement of the ball is unpredictable.

A single stress is one thing, but what has happened since has just kept adding more stress for me. Shortly before my father died, five years ago now, I had moved house. Then I was made redundant on the first anniversary of my father's death. A year or so later, my long term relationship failed, and a few months after that COVID-19 appeared. As the country eased out of the pandemic in 2021, my mother died (unrelated to COVID itself). A year after that, I had to take early retirement. My brother and sister, of course, share a lot of those stressors. My brother, in particular, took the responsibility for organising both funerals and did most of the visits to my mother before her death. The grief is different for each of the surviving family.

Cariad's book helped me understand why I was getting frequent ideas about going back to visit places which my father and I both knew. My parents encouraged each of us to work hard to leave Port Talbot (or Pong Toilet, locally) behind, in no small part due to the unrestrained pollution and deprivation that is common to small industrial towns across Wales, the Midlands and the north of the UK. It wasn't that I wanted to move house back to our ancestral roots. It was my grief leaking out of the box. Yes, I long for mountains and the sea, because I'm now living in a remorselessly flat and landlocked region after moving here for employment.
However, it was my grief driving those longings: not for the physical surroundings themselves, but for the shared memories with my father. I can visit those memories without moving house; I just need to arrange things so that I can be undisturbed and undistracted. I am not alone with my grief, and I am grateful to my friends who have helped whilst carrying their own grief.

It is necessary for everyone to think and talk about death and grief. In respect of your own death, no matter how far ahead that may be, consider Advance Care Planning and Expressions of Wish as well as your Will. Talk to people, document what you want. Your loved ones will be grateful, and they deserve that much whilst they try to cope with the first onslaught of grief. Talk to your loved ones and get them to do the same for themselves. Normalise talking about death with your family, especially children. None of us are getting out of this alive, and we will all leave behind people who will grieve.

20 December 2022

Freexian Collaborators: Recent improvements to Tryton's Debian Packaging (by Mathias Behrle and Raphaël Hertzog)

Foreword Freexian has been using Tryton for a few years to handle its invoicing and accounting. We have thus also been using the Debian packages maintained by Mathias Behrle, and we have been funding some of his work because maintaining an ERP with more than 50 source packages was too much for him to handle alone in his free time. When Mathias discovered our Project Funding initiative, it was quite natural for him to consider applying to be able to bring some much needed improvements to Tryton's Debian packaging. He's running his own consulting company (MBSolutions), so it's easy for him to invoice Freexian to get the money for the funded projects. What follows is Mathias Behrle's description of the projects that he submitted and of the work that he achieved. If you want to contact him, you can reach out to mathiasb@m9s.biz or mbehrle@debian.org. You can also follow him on Mastodon.

Report In January 2022 I applied for two projects in the Freexian Project Funding Initiative.
  • Tryton Project 1
    • The starting point of this project was Debian Bug #998319: tryton-server should provide a ready-to-use production-grade server config. To address this problem, instead of only providing configuration snippets, the idea was to provide a full-featured guided setup of a Tryton production environment, thus eliminating the risks of trial and error for the system administrator.
  • Tryton Project 2
    • The goal of this project was to complete the available Tryton modules in Debian main with the latest set available from tryton.org and to automate the task of creating new Debian packages from Tryton modules as much as possible.

Accomplishments As the result of Task 1, several new packages emerged:
  • tryton-server-postgresql provides the guided setup of a PostgreSQL database backend.
  • tryton-server-uwsgi provides the installation and configuration of a robust WSGI server on top of tryton-server.
  • tryton-server-nginx provides the configuration of a scalable web frontend to the uwsgi server, including the optional setup of secure access via Let's Encrypt certificates.
  • tryton-server-all-in-one puts it all together to provide a fully functional Tryton production environment, including a database filled with basic static data. With the installation of this package, a robust and secure production grade setup is possible from scratch; all the configuration legwork is done in the background.
The work was thoroughly reviewed by Neil Williams. Thanks go to him for his detailed feedback, providing very valuable information from the point of view of a fresh Tryton user coming into first contact with the software. A cordial thank you as well goes to the translation teams providing initial reviews and translations for the configuration questions. The efforts of Task 1 were completed with Task 2:
  • A Tryton-specific version of pypi2deb was created to help in the preparation of new Debian packages for new Tryton modules.
  • All missing Tryton modules for the current series were packaged for Debian.
On top of those two planned projects, I completed an additional task: the packaging of the Tryton Web Client. The Web Client is an important feature for accessing a Tryton server with the browser, and even a crucial requirement for some companies. Unfortunately, the packaging of the Web Client for Debian was problematic from the beginning: tryton-sao requires exact versions of JavaScript libraries that are almost never guaranteed to be available in the different targeted Debian releases. Therefore a package with vendored libraries has been created and will hopefully soon hit the Debian main archive. The package is already available from the Tryton Backport Mirror for the usually supported Debian releases.

Summary I am very pleased that the Tryton suite in Debian has gained full coverage of Tryton modules and a user-friendly installation. The completion of the project represents a huge step forward in the state-of-the-art deployment of a production grade Tryton environment. Without the monetary support of Freexian's project funding, the realization of this project wouldn't have been possible in this way and to this extent.

19 April 2022

Steve McIntyre: Firmware - what are we going to do about it?

TL;DR: firmware support in Debian sucks, and we need to change this. See the "My preference, and rationale" section below.

In my opinion, the way we deal with (non-free) firmware in Debian is a mess, and this is hurting many of our users daily. For a long time we've been pretending that supporting and including (non-free) firmware on Debian systems is not necessary. We don't want to have to provide (non-free) firmware to our users, and in an ideal world we wouldn't need to. However, it's very clearly no longer a sensible path when trying to support lots of common current hardware.

Background - why has (non-free) firmware become an issue?

Firmware is the low-level software that's designed to make hardware devices work. Firmware is tightly coupled to the hardware, exposing its features and providing higher-level functionality and interfaces for other software to use. For a variety of reasons, it's typically not Free Software. For Debian's purposes, we typically separate firmware from software by considering where the code executes (does it run on a separate processor? Is it visible to the host OS?), but it can be difficult to define a single reliable dividing line here. Consider the Intel/AMD CPU microcode packages, or the U-Boot firmware packages, as examples.

In times past, all necessary firmware would normally be included directly in devices / expansion cards by their vendors. Over time, however, it has become more and more attractive (and therefore more common) for device manufacturers to not include complete firmware on all devices. Instead, some devices just embed a very simple set of firmware that allows for the upload of a more complete firmware "blob" into memory, and device drivers are then expected to provide that blob during device initialisation. There are a couple of key drivers for this change, and as a result more and more devices in a typical computer now need firmware to be uploaded at runtime for them to function correctly.
This need has grown over time. At the beginning, a typical Debian user would be able to use almost all of their computer's hardware without needing any firmware blobs. It might have been inconvenient to not be able to use the WiFi, but most laptops had wired ethernet anyway, and the WiFi could always be enabled and configured after installation. Today, a user with a new laptop from most vendors will struggle to use it at all with our firmware-free Debian installation media: modern laptops normally don't come with wired ethernet now, there won't be any usable graphics on the laptop's screen, and a visually-impaired user won't get any audio prompts. These experiences are not acceptable, by any measure. There are new computers still available for purchase today which don't need firmware to be uploaded, but they are growing less and less common.

Current state of firmware in Debian

For clarity: obviously not all devices need extra firmware uploading like this. There are many devices that depend on firmware for operation, but we never have to think about them in normal circumstances. The code is not likely to be Free Software, but it's not something that we in Debian must spend our time on, as we're not distributing that code ourselves. Our problems come when a user needs extra firmware to make their computer work, and they need/expect us to provide it.

We have a small set of Free firmware binaries included in Debian main, and these are included on our installation and live media. This is great - we all love Free Software and this works. However, there are many more firmware binaries that are not Free. If we are legally able to redistribute those binaries, we package them up and include them in the non-free section of the archive. As Free Software developers, we don't like providing or supporting non-free software for our users, but we acknowledge that it's sometimes a necessary thing for them. This tension is acknowledged in the Debian Free Software Guidelines.
This tension extends to our installation and live media. As non-free is officially not considered part of Debian, our official media cannot include anything from non-free. This has been a deliberate policy for many years. Instead, we have for some time been building a limited parallel set of "unofficial non-free" images which include non-free firmware. These non-free images are produced by the same software that we use for the official images, and by the same team. There are a number of issues here that make developers and users unhappy:
  1. Building, testing and publishing two sets of images takes more effort.
  2. We don't really want to be providing non-free images at all, from a philosophy point of view. So we mainly promote and advertise the preferred official free images. That can be a cause of confusion for users. We do link to the non-free images in various places, but they're not so easy to find.
  3. Using non-free installation media will cause more installations to use non-free software by default. That's not a great story for us, and we may end up with more of our users using non-free software and believing that it's all part of Debian.
  4. A number of users and developers complain that we're wasting their time by publishing official images that are just not useful for a lot (a majority?) of users.
We should do better than this.

Options

The status quo is a mess, and I believe we can and should do things differently. I see several possible options that the images team can choose from here. However, several of these options could undermine the principles of Debian. We don't want to make fundamental changes like that without the clear backing of the wider project. That's why I'm writing this...
  1. Keep the existing setup. It's horrible, but maybe it's the best we can do? (I hope not!)
  2. We could just stop providing the non-free unofficial images altogether. That's not really a promising route to follow - we'd be making it even harder for users to install our software. While ideologically pure, it's not going to advance the cause of Free Software.
  3. We could stop pretending that the non-free images are unofficial, and maybe move them alongside the normal free images so they're published together. This would make them easier to find for people that need them, but is likely to cause users to question why we still make any images without firmware if they're otherwise identical.
  4. The images team technically could simply include non-free into the official images, and add firmware packages to the input lists for those images. However, that would still leave us with problem 3 from above (non-free generally enabled on most installations).
  5. We could split out the non-free firmware packages into a new non-free-firmware component in the archive, and allow a specific exception only to allow inclusion of those packages on our official media. We would then generate only one set of official media, including those non-free firmware packages. (We've already seen various suggestions in recent years to split up the non-free component of the archive like this, for example into non-free-firmware, non-free-doc, non-free-drivers, etc. Disagreement (bike-shedding?) about the split caused us to not make any progress on this. I believe this project should be picked up and completed. We don't have to make a perfect solution here immediately, just something that works well enough for our needs today. We can always tweak and improve the setup incrementally if that's needed.)
These are the most likely options, in my opinion. If you have a better suggestion, please let us know!

I'd like to take this set of options to a GR, and do it soon. I want to get a clear decision from the wider Debian project as to how to organise firmware and installation images. If we do end up changing how we do things, I want a clear mandate from the project to do that.

My preference, and rationale

Mainly, I want to see how the project as a whole feels here - this is a big issue that we're overdue solving. What would I choose to do? My personal preference would be to go with option 5: split the non-free firmware into a special new component and include that on official media.

Does that make me a sellout? I don't think so. I've been passionately supporting and developing Free Software for more than half my life. My philosophy here has not changed. However, this is a complex and nuanced situation. I firmly believe that sharing software freedom with our users comes with a responsibility to also make our software useful. If users can't easily install and use Debian, that helps nobody. By splitting things out here, we would enable users to install and use Debian on their hardware, without promoting/pushing higher-level non-free software in general. I think that's a reasonable compromise. This is simply a change to recognise that hardware requirements have moved on over the years.

Further work

If we do go with the changes in option 5, there are other things we could do here for better control of and information about non-free firmware:
  1. Along with adding non-free firmware onto media, when the installer (or live image) runs, we should make it clear exactly which firmware packages have been used/installed to support detected hardware. We could link to docs about each, and maybe also to projects working on Free re-implementations.
  2. Add an option at boot to explicitly disable the use of the non-free firmware packages, so that users can choose to avoid them.
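To make option 5 concrete: its practical effect would show up in a user's apt configuration, where the official media would enable the new component alongside main. Purely as an illustration (the component name non-free-firmware is just one of the suggestions above, and bookworm stands in for whichever release this might land in), a sources.list line could look like:

```
deb http://deb.debian.org/debian bookworm main non-free-firmware
```

Unlike enabling all of non-free, a line like this would expose only the firmware packages needed to make hardware work, not the rest of the non-free archive.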
Acknowledgements Thanks to the people who reviewed earlier versions of this document and/or made suggestions for improvement.

8 February 2022

Neil Williams: Django Model Mommy moving to Model Bakery

Some Django applications use Model Mommy in unit tests: https://tracker.debian.org/pkg/python-model-mommy

Upstream, model mommy has been renamed model bakery: https://model-bakery.readthedocs.io/en/latest/
Model Bakery is a rename of the legacy model_mommy's project. This is because the project's creator and maintainers decided to not reinforce gender stereotypes for women in technology. You can read more about this subject here.
Hence: https://bugs.debian.org/1005114 and https://ftp-master.debian.org/new/python-model-bakery_1.4.0-1.html

So this is a heads-up to all those using Debian for their Django unit tests. Model Mommy will no longer get updates upstream, so model mommy will not be able to support Django 4. Updates will only be made, upstream, in the Model Bakery package, which already supports Django 4. Bakery is not a drop-in replacement, but Model Bakery includes a helper script to migrate: https://salsa.debian.org/python-team/packages/python-model-bakery/-/blob/master/utils/from_mommy_to_bakery.py This is being packaged in /usr/share/ in the upcoming python3-model-bakery package.

It is a tad confusing that model-mommy is at version 1.6.0 while model-bakery is at version 1.4.0, but that only reinforces that Django apps using Model Mommy will need editing to move to Model Bakery. I'll be using the migration script for a Freexian Django app which currently uses Model Mommy. Once Model Bakery reaches bookworm, I plan to do a backport to bullseye. Then I'll file a bug against Model Mommy. The severity of that bug will be increased when Django 4 enters unstable (the 4.0 dev release is currently in experimental, but there is time before the 4.2 LTS is uploaded to unstable). https://packages.debian.org/experimental/python3-django

Model Bakery is in NEW and on Salsa. Update: Django 4.2 LTS would be the first Django 4 release in unstable.
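The core of the migration is a mechanical rename of the mommy API to the baker API. The sketch below is NOT the upstream from_mommy_to_bakery.py helper, just a minimal illustration of the kind of textual substitution involved; the real script covers more cases (recipes, fixtures, file renames).

```python
import re

# Illustrative rename rules only; the upstream helper script handles
# many more patterns than these.
RENAMES = [
    (re.compile(r"\bfrom model_mommy import mommy\b"), "from model_bakery import baker"),
    (re.compile(r"\bimport model_mommy\b"), "import model_bakery"),
    (re.compile(r"\bmommy\.make\b"), "baker.make"),
    (re.compile(r"\bmommy\.prepare\b"), "baker.prepare"),
]

def migrate_source(text: str) -> str:
    """Apply the mommy -> bakery renames to one file's source text."""
    for pattern, replacement in RENAMES:
        text = pattern.sub(replacement, text)
    return text

before = "from model_mommy import mommy\nobj = mommy.make('auth.User')\n"
print(migrate_source(before))
```

Running this over a test file turns `mommy.make(...)` calls into `baker.make(...)` calls, which is exactly why the move is editing work rather than a drop-in package swap.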

16 December 2021

Raphaël Hertzog: Freexian's report about Debian Long Term Support, November 2021

Every month we review the work funded by Freexian's Debian LTS offering. Please find the report for November below.

Debian project funding

We continue to look forward to hearing about Debian project proposals from various Debian stakeholders. This month has seen work on a survey that will go out to Debian Developers to gather feedback on what they think should be the priorities for funding in the project. Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In November, 13 contributors were paid to work on Debian LTS; their reports are available below. If you're interested in participating in the LTS or ELTS teams, we welcome participation from the Debian community. Simply get in touch with Jeremiah if you are interested.

Evolution of the situation

In November we released 31 DLAs. The security tracker currently lists 23 packages with a known CVE and the dla-needed.txt file has 16 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.

11 December 2021

Neil Williams: Diversity and gender

As a follow-on to a previous blog entry of mine, Free and Open, I feel it worthwhile to do my bit to dismantle the pseudo-science and oversimplification in the idea that gender is binary at a biological level.
TL;DR: Science simply does not support binary sexes or binary genders. Truth is a bit more complicated.
There is certainty, and there are binary answers, in mathematics. Things get less definitive in physics, certainly as soon as quantum mechanics is broached. Processes become more of an equilibrium between states in chemistry, never wholly one or the other. Yes, there is the oddity of absolute zero, but no experiment has yet achieved that fully. It is accurate to describe physics as a development of applied mathematics and to view chemistry as applied physics. Biology, at the biochemical level, is applied chemistry. The sciences build on each other, "on the shoulders of giants", but at each level some certainty is lost, some amount of uncertainty is expanded, and measurements become probabilities, proportions and percentages.

Biology is dependent on biochemistry: chemistry is how a biological change results in a different organism. Physics is how that chemical change occurs; temperature, pressure and physical states are inherent to all chemical changes. Outside laboratory constraints, few chemical reactions, especially in organic chemistry, produce one and only one result from two or more known reagents. In biology, everyone is familiar with genetic mutations, but a genetic mutation only happens because a biochemical reaction (hydrogen bonding of nucleobases) does not always produce the expected result. Every cell division, every viral infection, there is a finite probability that a change will occur. It might be a small number, but it is never zero and can never be dismissed. This is obvious in the current Covid pandemic: genetic mutations result in new variants. Some variants are inviable, some variants produce no net change in the way that the viral particles infect adjacent cells. Sometimes, a mutation happens that changes everything. These mutations are not mistakes; they are simply changes with undetermined outcomes.
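The "small but never zero" point can be made concrete with a toy calculation. If each base copied has error probability p, the chance that copying n bases introduces no change at all is (1 - p)^n, which collapses towards zero as n grows. The numbers below are purely illustrative, not measured biological rates:

```python
# Toy model: probability that copying n bases, each with per-base error
# probability p, produces zero changes. Illustrative numbers only.
def p_no_change(p: float, n: int) -> float:
    return (1.0 - p) ** n

p = 1e-8           # hypothetical error probability per base copied
n = 3_000_000_000  # a genome-scale number of bases

print(p_no_change(p, n))  # about 9.4e-14: a perfect copy is essentially impossible
```

Even with a one-in-a-hundred-million error rate per base, a perfect genome-scale copy is a one-in-ten-trillion event, and each further replication multiplies the improbability again.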
Genetic changes are the foundation of biodiversity, and variety is what allows lifeforms of all kinds to survive changes in environmental factors and/or changes in prevalent diseases. It is precisely the same in humans, particularly in one of the principal spheres of human life that involves replicating genetic material: the creation of gametes for sexual reproduction. Every single time any DNA is copied, there is a finite chance that a different base will be put in place compared to the original. Copying genetic material is therefore non-binary. Given precisely the same initial conditions, the result is not always predictable, and the range of how the results vary from one to another increases with every iteration.

Let me stress that: at the molecular level, no genetic operation in any biological lifeform has a truly binary result. Repeat that operation sufficiently often and an unexpected result WILL inevitably occur. It is a mathematical certainty that genetic changes will arise by attempting precisely the same genetic operation enough times. Genetic changes are fundamental to how lifeforms survive changing conditions. Life would likely have died out a long time ago on this planet if every genetic operation was perfect. Diversity is life. Similarity leads to extinction.

Viral load is interesting at this point. Someone can be infected with a virus, including coronavirus, by encountering a small number of viral particles. For some viruses it may be a few hundred; some viruses may need a few thousand particles to infect a vulnerable host. But here's the thing: for that host to be at risk of infecting another host, the virus needs the host to produce billions upon billions of copies of the virus by taking over the genetic machinery within a huge number of cells in the host. This, as is accepted with Covid, is before the virus has been copied enough times to produce symptoms in the host. Before those symptoms become serious, billions more copies will be made.
The numbers become unimaginable - and that is within a single host, let alone the 265 million (and counting) hosts in the current Covid-19 pandemic. It's also no wonder that viral infections cause tiredness; the infection is diverting huge resources to propagating itself, before even considering the activity of the immune system. It is idiocy of the highest order to expect all those copies to be identical. The rise of variants is inevitable, indeed essential, in all spheres of biology.

A single viral particle is absolutely no threat of any kind; it must first get inside and then copy the genetic information in a host cell. This is where the complexity lies in the definition of life itself. A virus can be considered a lifeform, but it is only able to reproduce using another, more complex, lifeform. In truth, a viral particle does not and cannot mutate. The infected host mutates the virus. The longer it takes that host to clear the infection, the more mutations that host will create and then potentially spread to others.

Now apply this to the creation of gametes in humans. With seven billion humans, the amount of copying of genetic material is not as large as in the pandemic, but it is still easy for everyone to understand that children do not merely combine the DNA of both parents. Changes happen. Human sexual reproduction is not as simple as 1 + 1 = 2. Sometimes, the copying of the genetic material produces an unexpected result. Sexual reproduction itself is non-binary. Sexual reproduction is not easy or simple for lifeforms to adopt; the diversity which results from these non-binary operations is exactly why so many lifeforms invest so much energy in reproducing in this way.

Whilst many genetic changes in humans will be benign or beneficial, I'd like to take an example of a genetic disorder that results from the non-binary nature of sex. Humans can be born with an XY karyotype: i.e. at a genetic level, the individual has the same combination of chromosomes as another XY individual, but there are changes within the genes on those chromosomes. We accept this; some children of blonde parents do not have blonde hair, etc. There are also genetic changes where an XY karyotype does not produce a binary outcome. Some people, who at a genetic level would be almost identical to another person who is genetically male, have a genetic mutation which makes it impossible for the cells of that individual to respond to androgens (testosterone); see Androgen Insensitivity Syndrome. Genetically, that individual has an X and a Y chromosome, just like many other individuals. However, due to a change in how the genes on those chromosomes were copied, that individual is biologically incapable of developing the secondary sexual characteristics of a male. At a genetic level, the individual has the XY karyotype typical of a male. At the physical level, the individual has all the sexual characteristics of a female and none of the sexual characteristics of a male. The gender of that individual is not binary. Treatment is centred on supporting the individual and minimising some risks from the inactive genes on the Y chromosome.

Human sexual reproduction is non-binary. The results of any sexual reproduction in humans will not always produce the binary option of male or female. It is a lie to claim that human gender is binary. The science is in plain view and cannot be ignored. Identifying as non-binary is not a "cop out"; it can be a biological, genetic, scientific fact. Human sexuality and gender are malleable. Where genetic changes result in symptoms, these can be ameliorated by treatment with human sex hormones, like oestrogen and testosterone. There are valid medical uses for anabolic steroids and hormone replacement therapies to help individuals who, at a genetic level, have non-binary gender.
These treatments can help align the physical outer signs with the personality and identity of the individual, whether with or without surgery. It is unacceptable to abandon such people to suffer lifelong discrimination and harassment by imposing a binary definition that has no basis in science. When a human being has an XY phenotype, that human being is not necessarily male. That individual will be on a spectrum from female (left unaffected by sex hormones in the womb, the foetus will be female, even with an X and a Y chromosome) to various degrees of male. So, at a genetic, biological level, it is a scientific fact that human beings do not have binary gender. There is no evidence that this is new to the modern era, there is no scientific basis for thinking that copying of genetic material was somehow perfectly reliable in earlier history, or that such mutations are specific to homo sapiens. Changes in genetic material provide the diversity to fight infections and adapt to changing environmental factors. Species have gone, and will continue to go, extinct if this diversity is absent.

With that out of the way, it is no longer a stretch to encompass other aspects of human non-binary genders beyond the known genetic syndromes based on changes in the XY phenotype. Science has not uncovered all of the ways that genes affect personality, behaviour, or identity. How other, less studied, genetic changes affect the much more subtle human facets, especially anything to do with consciousness, identity, personality, sexuality and behaviour, is guesswork. All of these facets can be, and likely are being, affected by genetic factors as well as environmental factors in an endless range of permutations. Personality traits are a beautiful and largely unknowable blend of genes and environment. Genetic information has a finite probability of changes at each and every iteration. Environmental factors are more akin to chaos theory. 
The idea that the results will fit into binary constructs is laughable. Human society puts huge emphasis on societal norms. Individuals who do not fit into those norms suffer discrimination. The norms themselves have evolved over time as a response to various influences on human civilisation but most are not based on science. It is up to all humans in that society to call out discrimination, to call for changes in the accepted norms and to support those who are marginalised. It is a precarious balance, one that humans rarely get right, but it must be based on an acceptance that variation is the natural state. Artificial constraints, like binary genders, must be dismantled because human beings and human sexual reproduction are not binary.

To those who think, "well, it is for 99%", think again about Covid. 99% (or closer to 98%) of infected humans recover without notable after-effects. That has still crippled the nations of the globe and humbled all those who tried to deny it. Five million human beings are dead because "most infected people recover". Just because something only affects a proportion of human beings does not invalidate the suffering of those humans or the discrimination that those humans will face. Societal norms are not necessarily correct. Religious and other influences typically obscure and ignore scientific fact and undermine human kindness.

The scientific truth of life on this planet is that gender is not binary. The more complex the lifeform, the more factors will affect where on the spectrum any one individual will appear. Just because we do not yet fully understand how genes affect human personality and sexuality does not invalidate the science that variation is the natural order. My previous blog about diversity is not just about male vs female, one nationality vs another, or one ethnicity compared to another. Diversity is diverse. Diversity requires accepting that every facet of humanity is subject to variation.

That leads to tension at times; it is inevitable. Tension against societal norms, tension against discrimination, tension around those individuals who would abuse the tolerance of others for their own gratification or from their own ignorance. None of us are perfect, none of us have any of this fully sorted and all of us will make mistakes. Personally, I try to respect those around me. I will use whatever pronouns and other conventions the person requests, from their perspective and not mine. To do otherwise is to deny the natural order and to deny the science. Celebrate all diversity, it is the very stuff of life.

The discussions around (typically female) bathroom facilities often miss the point. The concern is not about individuals who describe themselves as non-binary. The concern is about individuals who are fully certain of their own sexuality and who act as sexual predators for their own gratification. These people are acting out a lie for their own ends. The problem people are the predators, so stop blaming the victims, who are just as at risk as anyone else who identifies as female. Maybe the best people to spot such predators are those who are non-binary, who have had to pretend to fit into societal norms. Just as travel can be a good antidote to racism, openness and discussion can be a tool to undermine the lies of sexual predators and reassure those who are justifiably fearful.

There can never be a biological binary test of gender, and there can never be any scientific justification for a binary division of facilities. Humanity itself is not binary; even life itself has blurry borders around comas, suspended animation and locked-in syndrome. Legal definitions of human death vary around the world. The only common thread I have ever found is: be kind to each other. If you find anything above objectionable, then I can only suggest that you reconsider the science and learn to be kind to your fellow humans. None of us are getting out of this alive. 
I Think You'll Find It's a Bit More Complicated Than That - Ben Goldacre ISBN 978-0-00-750514-2 https://www.amazon.co.uk/dp/B00HATQA8K/ https://en.wikipedia.org/wiki/Androgen_insensitivity_syndrome https://www.bbc.co.uk/news/world-51235105 https://en.wikipedia.org/wiki/Nucleobase

My degree is in pharmaceutical sciences and I practised community and hospital pharmacy for 20 years before moving into programming. I have direct experience of supporting people who were prescribed hormones to transition their physical characteristics to match their personal identity. I had a Christian upbringing, but my work showed me that those religious norms were incompatible with being kind to others, so I rejected religion and I now consider myself a secular humanist.

19 November 2021

Neil Williams: git worktrees

A few scenarios have been problematic with git and I've now discovered git worktrees, which help with each. You could go to the trouble of making a new directory and re-cloning the same tree. However, a local commit in one tree is then not accessible to the other tree. You could commit everything every time, but with a dirty tree that involves sorting out the .gitignore rules as well - which could well be pointless for an experimental change.

Git worktrees allow multiple working directories to be checked out from a single git repository. Commits on any branch are visible from every worktree, even when the commit was made in a different worktree. This makes things like cherry-picking easy, without needing to push pointless changes or branches. Branches in a worktree can be rebased as normal, with the benefit that commit hashes from other local changes are available for reference and cherry-picks.

I'm sure git worktrees are not new. However, I've only started using them recently and others have asked about how the worktree operates. Creating a new tree can be done with a new or existing branch. To make it easier, set the new directory at the same time, usually in ../

New branch (branched from the current branch):
git worktree add -b branch_name ../branch_name
Existing branch - note, slightly different syntax here, specify the commit-ish last (branch name, tag or hash):
git worktree add ../branch_name branch_name
git worktree list
/home/neil/Documents/testing/testrepo        0612677 [master]
/home/neil/Documents/testing/testtree        d38f5a3 [testtree]
Use git worktree remove <name> to drop the entire directory for that tree and the git tracking.

I'm using this for work on the Debian Security Tracker. I have two local branches and having two worktrees allows me to have three terminals open, using the same files and the same git repository: one to run make serve and update the local SQLite database; one to access master to run git pull; and one to make local changes without risking collisions on master.
git add data/CVE/list
git commit
# pre commit hook runs here
git log -n 1
# copy the hash
# switch to master terminal
git pull
git cherry-pick <HASH>
git push
# switch to server terminal
git rebase master
# no git pull or fetch, it's all local
make
# switch back to changes terminal
git rebase master
Sadly, one area where this isn't as easy is importing a new DSC into Salsa with git-buildpackage, as that uses several branches at the same time. It would be possible, but you'll need to have separate upstream and possibly pristine-tar branches checked out and supply the relevant options. Possibly something for git-buildpackage to adopt - it is common to need to make changes to the packaging with a new upstream release, and a lot of those changes are currently done outside git. For the rest of the support, see git-worktree(1).
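One workable layout is to check out each of those packaging branches as a sibling worktree, so that git-buildpackage can see them all locally. A minimal sketch - the scratch repository below stands in for a real packaging repository, and the branch names are only the git-buildpackage defaults:

```shell
#!/bin/sh
# Sketch only: create a scratch repo with the extra branches
# git-buildpackage expects, then expose each branch as a worktree.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/pkg"
cd "$tmp/pkg"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial packaging commit"
# the branches gbp looks for by default
git branch upstream
git branch pristine-tar
# one worktree per packaging branch, as siblings of the main checkout
git worktree add ../upstream upstream
git worktree add ../pristine-tar pristine-tar
git worktree list
```

With that layout in place, the gbp import options can point at the usual branch names while each stays checked out in its own directory.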

10 November 2021

Neil Williams: LetsEncrypt with Apache, Gunicorn and Debian Bullseye

This took me far too long to identify and debug, so I'm going to write it up here for my own reference and to possibly help others.
Background

Upgrading an old codebase from Python2 on Buster to Python3 ready for Bullseye, and from Django1 to Django2 (prepared for Django3). Everything is fine at this stage - the Django test server is happy with HTTP and it gives enough support to do the actual code changes to get to Python3. All well and good so far. The main purpose of this particular code was to support payments, so a chunk of the testing cannot be done without HTTPS, which is where things got awkward. This particular service needs HTTPS using LetsEncrypt and Apache2. To support Django, I typically use Gunicorn. All of this works with HTTP. Moving to HTTPS was easy to test using the default-ssl virtual host that comes with Apache2 in Debian. It's a static page and it worked well with https. The problems all start when trying to use this known-working HTTPS config with the other Apache virtual host to add support for the gunicorn proxy.
Apache reverse proxy

AH00898 Error during SSL Handshake with remote server
Investigating

Now that I know why this happened, it's easier to see what was happening. At the time, I was swamped in a plethora of options and permutations between the Django HTTPS options and the Apache VirtualHost SSL and proxy commands. Going through all of those took up huge amounts of time, all in the wrong area. In previous configurations using packages in Buster, gunicorn could simply run on http://localhost:8000 and Apache would proxy that as https. In the versions in Bullseye, this no longer works, and it is the handover from HTTPS in Apache to HTTP in the proxy that fails. Apache is using HTTPS because the LetsEncrypt certificates, created using dehydrated, are specified in the VirtualHost configuration. To fix the handshake error, the proxy server needs to know about the certificates created by dehydrated as well.
Gunicorn needs the certificates

The clue is in the gunicorn help:
--keyfile FILE        SSL key file [None]
--certfile FILE       SSL certificate file [None]
The final part of the puzzle is that the certificates created by dehydrated are in a private location:
drwx------ 2 root root /var/lib/dehydrated/certs/
To test gunicorn, this will mean using sudo but that's just a step towards running gunicorn as a systemd service (when access to the certs will not be a problem). Starting gunicorn using these options shows the proxy now being available at https://localhost:8000 which is a subtle but very important change.
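Tested by hand first, that looks something like the following - "site" here is only a placeholder for the real project name:

```shell
# One-off test from the project directory; sudo is only needed because
# of the permissions on /var/lib/dehydrated/certs/ shown above.
sudo gunicorn3 site.wsgi \
    --certfile /var/lib/dehydrated/certs/site/cert.pem \
    --keyfile /var/lib/dehydrated/certs/site/privkey.pem

# In another terminal, confirm the proxy target now speaks TLS
# (-k because curl is talking to localhost, not the certificate's name):
curl -k https://localhost:8000/
```

The same options then move into the service definition: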
Environment=LOGLEVEL=DEBUG WORKERS=4 LOGFILE=/var/log/gunicorn/site.log
ExecStart=/usr/bin/gunicorn3 site.wsgi --log-level $LOGLEVEL --log-file $LOGFILE --workers $WORKERS \
--certfile /var/lib/dehydrated/certs/site/cert.pem \
--keyfile /var/lib/dehydrated/certs/site/privkey.pem
The specified locations are symbolic links created by dehydrated to cope with regular updates of the certificates using cron.
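For reference, those ExecStart lines sit inside an ordinary systemd service unit. A minimal sketch - the unit name, WorkingDirectory and "site" paths are placeholders, not taken from the real deployment:

```ini
[Unit]
Description=gunicorn daemon for site
After=network.target

[Service]
Environment=LOGLEVEL=DEBUG WORKERS=4 LOGFILE=/var/log/gunicorn/site.log
WorkingDirectory=/srv/site
ExecStart=/usr/bin/gunicorn3 site.wsgi --log-level $LOGLEVEL --log-file $LOGFILE --workers $WORKERS \
    --certfile /var/lib/dehydrated/certs/site/cert.pem \
    --keyfile /var/lib/dehydrated/certs/site/privkey.pem

[Install]
WantedBy=multi-user.target
```

Running as a system service also sidesteps the certificate permissions problem, since the service starts as root before gunicorn drops privileges for its workers.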

19 October 2021

Raphaël Hertzog: Freexian's report about Debian Long Term Support, September 2021

A Debian LTS logo
Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian project funding

Folks from the LTS team, along with members of the Debian Android Tools team and Phil Morrel, have proposed work on the Java build tool, gradle, which is currently blocked due to the need to build with a plugin not available in Debian. The LTS team reviewed the project submission and it has been approved. After approval we've created a Request for Bids which is active now. You'll hear more about this through official Debian channels, but in the meantime, if you feel you can help with this project, please submit a bid. Thanks! This September, Freexian set aside 2550 EUR to fund Debian projects. We're looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In September, 15 contributors have been paid to work on Debian LTS, their reports are available:

Evolution of the situation

In September we released 30 DLAs. September was also the second month of Jeremiah coordinating LTS contributors. Also, we would like to say that we are always looking for new contributors to LTS. Please contact Jeremiah if you are interested! The security tracker currently lists 33 packages with a known CVE and the dla-needed.txt file has 26 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.

8 October 2021

Neil Williams: Using Salsa with contrib and non-free

OK, I know contrib and non-free aren't popular topics for many, but I've had to sort out some simple CI for such contributions and I thought it best to document how to get it working. You will need access to the GitLab settings for the project in Salsa - or ask someone to add some CI/CD variables on your behalf. (If CI isn't running at all, the settings will need to be modified to enable debian/salsa-ci.yml first, in the same way as for packages in main.) The default Salsa config (debian/salsa-ci.yml) won't get a passing build for packages in contrib or non-free:
# For more information on what jobs are run see:
# https://salsa.debian.org/salsa-ci-team/pipeline
#
---
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml
Variables need to be added. piuparts can use the extra contrib and non-free components directly from these variables.
variables:
   RELEASE: 'unstable'
   SALSA_CI_COMPONENTS: 'main contrib non-free'
Many packages in contrib and non-free only support amd64 - so the i386 build job needs to be removed from the pipeline by extending the variables dictionary:
variables:
   RELEASE: 'unstable'
   SALSA_CI_COMPONENTS: 'main contrib non-free'
   SALSA_CI_DISABLE_BUILD_PACKAGE_I386: 1
The extra step is to add the apt source file variable to the CI/CD settings for the project. The CI/CD settings are at a URL like:
https://salsa.debian.org/<team>/<project>/-/settings/ci_cd
Expand the section on Variables and add a File type variable:
Key: SALSA_CI_EXTRA_REPOSITORY Value: deb https://deb.debian.org/debian/ sid contrib non-free
The pipeline will run at the next push - alternatively, the CI/CD pipelines page has a "Run Pipeline" button. The settings added to the main CI/CD settings will be applied, so there is no need to add a variable at this stage. (This can be used to test the variables themselves but only with manually triggered CI pipelines.) For more information and additional settings (for example disabling or allowing certain jobs to fail), check https://salsa.debian.org/salsa-ci-team/pipeline
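The same File variable can also be added from the command line via the GitLab API rather than the web UI. A sketch, assuming a personal access token with api scope in SALSA_TOKEN and the project's numeric ID in PROJECT_ID (both placeholders here):

```shell
# Hypothetical token/ID; the endpoint and the variable_type=file field
# are part of the GitLab project-level CI/CD variables API.
curl --request POST \
     --header "PRIVATE-TOKEN: ${SALSA_TOKEN}" \
     "https://salsa.debian.org/api/v4/projects/${PROJECT_ID}/variables" \
     --form "key=SALSA_CI_EXTRA_REPOSITORY" \
     --form "variable_type=file" \
     --form "value=deb https://deb.debian.org/debian/ sid contrib non-free"
```

This does exactly what the Settings page does, so the next pipeline run picks the variable up in the same way.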

4 October 2021

Raphaël Hertzog: Freexian's report about Debian Long Term Support, August 2021

A Debian LTS logo
Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian project funding

In August, we put aside 2460 EUR to fund Debian projects. We received a new project proposal that got approved and there's an associated bid request if you feel like proposing yourself to implement this project. We're looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In August, 14 contributors have been paid to work on Debian LTS, their reports are available:

Evolution of the situation

In August we released 30 DLAs.

This is the first month of Jeremiah coordinating LTS contributors. We would like to thank Holger Levsen for his work on this role up to now.

Also, we would like to remark once again that we are constantly looking for new contributors. Please contact Jeremiah if you are interested! The security tracker currently lists 73 packages with a known CVE and the dla-needed.txt file has 29 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.

25 August 2021

Raphaël Hertzog: Freexian's report about Debian Long Term Support, July 2021

A Debian LTS logo
Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian project funding

In July, we put aside 2400 EUR to fund Debian projects. We haven't received proposals of projects to fund in the last months, so we have scheduled a discussion during Debconf to try to figure out why that is and how we can fix that. Join us on August 26th at 16:00 UTC on this link. We are pleased to announce that Jeremiah Foster will help out to make this initiative a success: he can help Debian members to come up with solid proposals, he can look for people willing to do the work once the project has been formalized and approved, and he will make sure that the project implementation keeps on track when the actual work has begun. We're looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In July, 12 contributors have been paid to work on Debian LTS, their reports are available:

Evolution of the situation

In July we released 30 DLAs. Also, we were glad to welcome Neil Williams and Lee Garrett, who became active contributors. The security tracker currently lists 63 packages with a known CVE and the dla-needed.txt file has 17 packages needing an update. We would like to thank Holger Levsen for the years of work where he managed/coordinated the paid LTS contributors. Jeremiah Foster will take over his duties.

Thanks to our sponsors

Sponsors that joined recently are in bold.

27 March 2021

Neil Williams: Free and Open

A long time, no blog entries. What's prompted a new missive? Two things: All the time I've worked with software and computers, the lack of diversity in the contributors has irked me. I started my career in pharmaceutical sciences where the mix of people, at university, would appear to be genuinely diverse. It was in the workplace that the problems started, especially in the retail portion of community pharmacy with a particular gender imbalance between management and counter staff. Then I started to pick up programming. I gravitated to free software for the simple reason that I could tinker with the source code. To do a career change and learn how to program with a background in a completely alien branch of science, the only realistic route into the field is to have access to the raw material - source code. Without that, I would have been seeking a sponsor, in much the same way as other branches of science need grants or sponsors to purchase the equipment and facilities to get a foot in the door. All it took from me was some time, a willingness to learn and a means to work alongside those already involved. That's it. No complex titration equipment, flasks, centrifuges, pipettes, spectrographs, or petri dishes. It's hard to do pharmaceutical science from home. Software is so much more accessible - but only if the software itself is free. Software freedom removes the main barrier to entry. Software freedom enables the removal of other barriers too. Once the software is free, it becomes obvious that the compiler (indeed the entire toolchain) needs to be free, to turn the software into something other people can use. The same applies to interpreters, editors, kernels and the entire stack. I was in a different branch of science whilst all that was being created and I am very glad that Debian Woody was available as a free cover disc on a software magazine just at the time I was looking to pick up software programming. That should be that. 
The only next step to be good enough to write free software was the "means to work alongside those already involved". That, it turns out, is much more than just a machine running Debian. It's more than just having an ISP account and working email (not commonplace in 2003). It's working alongside the people - all the people. It was my first real exposure to the toxicity of some parts of many scientific and technical arenas. Where was the diversity? OK, maybe it was just early days and the other barriers (like an account with an ISP) were prohibitive for many parts of the world outside Europe & USA in 2003, so there were few people from other countries but the software world was massively dominated by the white Caucasian male. I'd been insulated by my degree course, and to a large extent by my university which also had courses which already had much more diverse intakes - optics and pharmacy, business and human resources. Relocating from the insular world of a little town in Wales to Birmingham was also key. Maybe things would improve as the technical barriers to internet connectivity were lowered. Sadly, no. The echo chamber of white Caucasian input has become more and more diluted as other countries have built the infrastructure to get the populace online. Debian has helped in this area, principally via DebConf. Yet only the ethnicity seemed to change, not the diversity. Recently, more is being done to at least make Debian more welcoming to those who are brave enough to increase the mix. Progress is much slower than the gains seen in the ethnicity mix, probably because that was a benefit of a technological change, not a societal change. The attitudes so prevalent in the late 20th century are becoming less prevalent amongst, and increasingly abhorrent to, the next generation of potential community members. Diversity must come or the pool of contributors will shrink to nil. 
Community members who cling to these attitudes are already dinosaurs and increasingly unwelcome. This is a necessary step to retain access to new contributors as existing contributors age. To be able to increase the number of contributors, the community cannot afford to be dragged backwards by anyone, no matter how important or (previously) respected. Debian, or even free software overall, cannot change all the problems with diversity in STEM but we must not perpetuate the problems either. Those people involved in free software need to be open to change, to input from all portions of society and welcoming. Puerile jokes and disrespectful attitudes must be a thing of the past. The technical contributions do not excuse behaviours that act to prevent new people joining the community. Debian is getting older, the community and the people. The presence of spouses at Debian social events does not fix the problem of the lack of diversity at Debian technical events. As contributors age, Debian must welcome new, younger, people to continue the work. All the technical contributions from the people during the 20th century will not sustain Debian in the 21st century. Bit rot affects us all. If the people who provided those contributions are not encouraging a more diverse mix to sustain free software into the future then all their contributions will be for nought and free software will die. So it comes to the FSF and RMS. I did hear Richard speak at an event in Bristol many years ago. I haven't personally witnessed the behavioural patterns that have been described by others but my memories of that event only add to the reality of those accounts. No attempts to be inclusive, jokes that focused on division and perpetuating the white male echo chamber. I'm not perfect, I struggle with some of this at times. Personally, I find the FSFE and Red Hat statements much more in line with my feelings on the matter than the open letter which is the basis of the GR. 
I deplore the preliminary FSF statement on governance as closed-minded, opaque and archaic. It only adds to my concerns that the FSF is not fit for the 21st century. The open letter has the advantage that it is a common text which has the backing of many in the community, individuals and groups, who are fit for purpose and whom I have respected for a long time. Free software must be open to contributions from all who are technically capable of doing the technical work required. Free software must equally require respect for all contributors, and technical excellence is not an excuse for disrespect. Diversity is the lifeblood of all social groups and without social cohesion, technical contributions alone do not support a community. People can change and apologies are welcome when accompanied by modified behaviour. The issues causing a lack of diversity in Debian are complex and mostly reflective of a wider problem with STEM. Debian can only survive by working to change those parts of the wider problem which come within our influence. Influence is key here; this is a soft issue, replete with unreliable emotions, unhelpful polemic and complexity. Reputations are hard won and easily blemished. The problems are all about how Debian looks to those who are thinking about where to focus in the years to come. It looks like the FSFE should be the ones to take the baton from the FSF - unless the FSF can adopt a truly inclusive and open governance.

My problem is not with RMS himself, what he has or has not done, which apologies are deemed sincere and whether behaviour has changed. He is one man and his contributions can be respected even as his behaviour is criticised. My problem is with the FSF for behaving in a closed, opaque and divisive manner and then making a governance statement that makes things worse, whilst purporting to be transparent. Institutions lay the foundations for the future of the community and must be expected to hold individuals to account. 
Free and open have been contentious concepts for the FSF, with all the arguments about Open Source which is not Free Software. It is clear that the FSF do understand the implications of freedom and openness. It is absurd to then adopt a closed and archaic governance. A valid governance model for the FSF would never have allowed RMS back onto the board, instead the FSF should be the primary institution to hold him, and others like him, to account for their actions. The FSF needs to be front and centre in promoting diversity and openness. The FSF could learn from the FSFE. The calls for the FSF to adopt a more diverse, inclusive, board membership are not new. I can only echo the Red Hat statement:
in order to regain the confidence of the broader free software community, the FSF should make fundamental and lasting changes to its governance.
...
[There is] no reason to believe that the most recent FSF board statement signals any meaningful commitment to positive change.
And the FSFE statement:
The goal of the software freedom movement is to empower all people to control technology and thereby create a better society for everyone. Free Software is meant to serve everyone regardless of their age, ability or disability, gender identity, sex, ethnicity, nationality, religion or sexual orientation. This requires an inclusive and diverse environment that welcomes all contributors equally.
The FSF has not demonstrated the behaviour I expect from a free software institution and I cannot respect an institution which proclaims a mantra of freedom and openness that is not reflected in the governance of that institution. The preliminary statement by the FSF board is abhorrent. The FSF must now take up the offers from those institutions within the community who retain the respect of that community. I'm only one voice, but I would implore the FSF to make substantive, positive and permanent change to the governance, practices and future of the FSF or face irrelevance.

4 June 2017

Lars Wirzenius: Vmdb2 first alpha release: Debian disk image creation tool

tl;dr: Get the vmdebootstrap replacement from http://git.liw.fi/vmdb2 and run it from the source tree. Tell me if something doesn't work. Send patches.

Many years ago I wrote vmdebootstrap, a tool for installing Debian on a disk image for virtual machines. I had a clear personal need: I was setting up a CI system and it needed six workers: one each for Debian oldstable, stable, and unstable, on two architectures (i386, amd64). Installing Debian six times in the same way is a lot of work, so I figured how difficult can it be to automate it. Turns out not difficult at all, except for installing a bootloader. (Don't ask me why I didn't use any of the other tools for this. It was long ago, and while some of the tools that now exist probably did exist then, I like writing code and learning things while doing it.)

After a while I was happy with what the program did, but didn't want to upload it to Debian, and didn't want to add the kinds of things other people wanted, so I turned vmdebootstrap over to Neil Williams, who added a ton of new features. Unfortunately, it turned out that my initial architecture was not scalable, and also the code I wrote wasn't very good, and there weren't any tests. Neil did heroic work forcing my crappy software into doing things I never envisioned. Last year he needed a break and asked me to take vmdebootstrap back. I did, and have been hiding from the public eye ever since, since I was so ashamed of the code. (I created a new identity and pretended to be an international assassin and backup specialist, travelling the world forcing people to have at least one tested backup of their system. If you've noticed reports in the press about people reporting near-death experiences while holding a shiny new USB drive, that would've been my fault.)

Pop quiz: if you have a program with ten boolean options ("do this, except if that option is given, do the other thing"), how many black box tests do you need to test all the functionality? 
If one run of the program takes half an hour, how long will a full test suite run?

I did some hard thinking about vmdebootstrap, and came to the sad conclusion that it had reached the end of its useful life as a living software project. There was no reasonable way to add most of the additional functionality people were asking for, and even maintaining the current code was too tedious a task to consider seriously. It was time to make a clean break with the past and start over, without caring about backwards compatibility. After all, the old code wasn't going anywhere, so anyone who needed it could still use it. There was no need to burden a new program with my past mistakes. All new mistakes were called for.

At the Cambridge mini-Debconf of November 2016, I gave a short presentation of what I was going to do. I also posted about my plans to the debian-cloud list. In short, I would write a new, more flexible and cleaner replacement to be called vmdb2. For various personal reasons, I've not been able to spend as much time on vmdb2 as I'd like to, but I've now reached the point where I'd like to announce the first alpha version publicly. The source code is hosted here: http://git.liw.fi/vmdb2 . There are .deb packages at my personal public APT repo (http://liw.fi/code/), but vmdb2 is easy enough to run directly from a git checkout:
sudo ./vmdb2 foo.vmdb --output foo.img
There's no need to install it to try it. What works: What doesn't work: I'm not opposed to adding support for those, but they're not directly interesting to me. For example, I only have amd64 machines. The best way to get support for additional features is to tell me how, preferably in the form of patches. (If I have to read tons of docs, or other people's code, and then write code and iterate while other people tell me it doesn't work, it's probably not happening.)

Why would you be interested in vmdb2? There are a lot of other tools that do what it does, so perhaps you shouldn't care. That's fine. I like writing tools for myself. But if this kind of tool is of interest to you, please do have a look.

A short tutorial: vmdb2 wants you to give it a "specification file" (conventionally suffixed .vmdb, because someone stole the .spec suffix, but vmdb2 doesn't care about the name). Below is an example. vmdb2 image specification files are in YAML, since I like YAML, and specify a sequence of steps to take to build the image. Each step is a tiny piece of self-contained functionality provided by a plugin.
steps:
  - mkimg: "{{ output }}"
    size: 4G
  - mklabel: msdos
    device: "{{ output }}"
  - mkpart: primary
    device: "{{ output }}"
    start: 0%
    end: 100%
    part-tag: root-part
The above creates an image (the name is given with the --output option), four gigabytes in size, and creates a partition table and a single partition that fills the whole disk. The "tag" is given so that later steps can easily refer to the partition. If you prefer another way to partition the disk, you can achieve that by adding more "mkpart" steps. For example, for UEFI you'll want to have an EFI partition.
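As an illustration of that, a UEFI layout might start with a small EFI system partition before the root partition. This is a hedged sketch, not from the original post; the sizes and the use of a gpt label are my assumptions:

```yaml
steps:
  - mkimg: "{{ output }}"
    size: 4G
  - mklabel: gpt
    device: "{{ output }}"
  - mkpart: primary
    device: "{{ output }}"
    start: 0%
    end: 256M
    part-tag: efi-part
  - mkpart: primary
    device: "{{ output }}"
    start: 256M
    end: 100%
    part-tag: root-part
```

The efi-part tag could then be formatted as vfat and mounted in later steps, just as root-part is below.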
  - mkfs: ext4
    partition: root-part
  - mount: root-part
    fs-tag: root-fs
The above formats the partition with the ext4 filesystem, and then mounts it. The mount point will be a temporary directory created by vmdb2, and a tag is again given to the mount point so it can be referred to.
  - unpack-rootfs: root-fs
The above unpacks a tar archive to put content into the filesystem, if the tar archive exists. The tar archive is specified with the --rootfs-tarball command line option.
  - debootstrap: stretch
    mirror: http://http.debian.net/debian
    target: root-fs
    unless: rootfs_unpacked
  - apt: linux-image-amd64
    fs-tag: root-fs
    unless: rootfs_unpacked
  - cache-rootfs: root-fs
    unless: rootfs_unpacked
The above will run debootstrap and install a kernel into the filesystem, but skip doing that if the rootfs tarball was used. Also, the tarball is created if it didn't exist. This way the tarball is used by all but the first run, which saves a bit of time. On my laptop and with a local mirror, debootstrap and kernel installation takes on the order of nine minutes (500 to 600 seconds), whereas unpacking the tar archive is a bit faster (takes around 30 seconds). When iterating over things other than debootstrap, this speeds things up something wonderful, and seems worth the complexity. The "unless:" mechanism is generic. All the steps share some state, and the unpack-rootfs step sets the "rootfs_unpacked" flag in the shared state. The "unless:" field tells vmdb2 to check for the flag and if it is not set, or if it is set to false ("unless it is set to true"), vmdb2 will execute the step. vmdb2 may get more such flags in the future, if there's need.
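In practice the caching shows up on the command line; something like the following, with file names of my own invention:

```
# First run: debootstrap and kernel installation run,
# then cache-rootfs creates the tarball.
sudo ./vmdb2 foo.vmdb --output foo.img --rootfs-tarball foo.tar.gz

# Later runs: the tarball exists, so unpack-rootfs restores it and the
# debootstrap, apt and cache-rootfs steps are skipped via "unless:".
sudo ./vmdb2 foo.vmdb --output foo.img --rootfs-tarball foo.tar.gz
```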
  - chroot: root-fs
    shell: |
      sed -i '/^root:[^:]*:/s//root::/' /etc/passwd
      echo pc-vmdb2 > /etc/hostname
The above executes a couple of shell commands in a chroot of the root filesystem we've just created. In this case they remove the login password from root, and set the hostname. This is a replacement for the vmdebootstrap "customize" script, but it can be inserted anywhere into the sequence of steps. There are both chroot and non-chroot variants of the step. This is a good point to mention that writing customize scripts gets quite repetitive and tedious after a while, so vmdb2 has a plugin to run Ansible instead. You can customize your image with that while the image is being built, and not have to wait until you boot the image and run Ansible over ssh.
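As a sketch, such an Ansible step might look like this; the plugin name and fields are from memory and may not match the current vmdb2 exactly:

```yaml
  - ansible: root-fs
    playbook: setup.yml
```

The playbook then runs against the mounted root filesystem during the build, instead of over ssh after first boot.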
  - grub: bios
    root-fs: root-fs
    root-part: root-part
    device: "{{ output }}"
    console: serial
Finally, install a boot loader, grub. This shows the BIOS variant; UEFI is also supported. This also configures grub and the kernel to use a serial console.

There's a "yarn" (test suite) to build and smoke test an image with vmdb2 to make sure at least the basic functionality works. The smoke test boots the image under Qemu, logs in as root, and tells the VM to power off. Very, very basic, but it has already found actual bugs in vmdb2. The smoke test needs the serial console to work.

As with vmdebootstrap originally, I don't particularly want to maintain the package in Debian. I've added Debian packaging (so that I can install it on my own machines), but I already have enough packages to maintain, so I'm hoping someone else will volunteer to take on the Debian maintainership and bug handling duties.

If you would like vmdb2 to do more things to suit you better, I'm happy to explain how to write plugins to provide more types of steps. If you are currently using vmdebootstrap, either directly or as part of another tool, I encourage you to have a look at vmdb2. In the long term, I would like to retire vmdebootstrap entirely, once vmdb2 can do everything vmdebootstrap can do and few people use vmdebootstrap. This may take a while. In any case, whether you want a new image building tool or not, happy hacking.
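To smoke test an image by hand in roughly the same way, an invocation like the following works; this is an illustrative sketch, not the actual yarn:

```
# Boot the freshly built image under Qemu with the serial console on
# stdio (-nographic); log in as root (no password) and run poweroff.
qemu-system-x86_64 -m 1024 \
    -drive file=foo.img,format=raw \
    -nographic
```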

1 June 2017

Raphaël Hertzog: My Free Software Activities in May 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it's one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

I was allocated 12 hours to work on security updates for Debian 7 Wheezy. During this time I did the following:

Misc Debian work

Debian Handbook. I started to work on the update of the Debian Administrator's Handbook for Debian 9 Stretch. As part of this, I noticed a regression in dblatex and filed this issue both in the upstream tracker and in Debian, and got that issue fixed in sid and stretch (sponsored the actual upload, filed the unblock request). I also stumbled on a regression in dia which was due to an incorrect Debian-specific patch that I reverted with a QA upload, since the package is currently orphaned.

Django. On request of Scott Kitterman, I uploaded a new security release of Django 1.8 to jessie-backports, but that upload got rejected because stretch no longer has Django 1.8 and I'm not allowed to maintain that branch in that repository. A long and heated discussion ensued that has no clear resolution yet. It seems likely that some solution will be found for Django (the 1.8.18 that was rejected was accepted as a one-time update already, and our plans for the future make it clear that we would have liked to have an LTS version in stretch in the first place) but the backports maintainers are not willing to change the policy to accommodate other similar needs in the future. The discussion has been complicated by the intervention of Neil Williams, who brought up an upgrade problem of lava-server (#847277). Instead of fixing the root problem in Django (#863267), or adding a work-around in lava-server's code, he asserted that upgrading first to Django 1.8 from jessie-backports was the only upgrade path for lava-server.
Thanks

See you next month for a new summary of my activities.


20 May 2017

Neil Williams: Software, service, data and freedom

Free software, free services but what about your data? I care a lot about free software, not only as a Debian Developer. The use of software as a service matters as well because my principal free software development is on just such a project, licensed under the GNU Affero General Public License version 3. The AGPL helps by allowing anyone who is suitably skilled to install their own copy of the software and run their own service on their own hardware. As a project, we are seeing increasing numbers of groups doing exactly this and these groups are actively contributing back to the project.

So what is the problem? We've got an active project, an active community and everything is under a free software licence and regularly uploaded to Debian main. We have open code review with anonymous access to our own source code CI and anonymous access to project planning, open mailing list archives as well as an open bug tracker and a very active IRC channel (#linaro-lava on OFTC). We develop in the open, we respond in the open and we publish frequently (monthly, approximately). The code we write defaults to public visibility at runtime, with restrictions available for certain use cases. What else can we be doing? Well, it was a simple question which started me thinking.
"The lava documentation has various example test scripts, e.g. https://validation.linaro.org/static/docs/v2/examples/test-jobs/qemu-kernel-standard-sid.yaml - these have no licence information. We've adapted them for a Linux Foundation project; what licence should apply to these files?" (Robert Marshall)
Those are our own examples, contributed as part of the documentation and covered by the AGPL like the rest of the documentation and the software which it documents, so I replied with the same. However, what about all the other submissions received by the service?
Data Freedom

LAVA acts by providing a service to authenticated users. The software runs your test code on hardware which might not be available to the user or which is simply inconvenient for the test writer to set up themselves. The AGPL covers this nicely. What about the data contributed by the users? We make this available to other users who will, naturally, copy and paste for their own tests. In most cases, because the software defaults to public access, anonymous users also get to learn from the contributions of other test writers. This is a good thing and to be encouraged. (One reason why we moved to YAML for all submissions was to allow comments to help other users understand why the submission does specific things.) Writing a test job submission or a test shell definition from scratch is a non-trivial amount of work. We've written dozens of pages of documentation covering how and how not to do it, but making a test job run exactly what the test writer requires can involve substantial effort. (Our documentation recommends using version control for each of these works for exactly these reasons.) At what point do these works become software? At what point do these need licensing? How could that be declared?
Perils of the Javascript Trap approach

When reading up on the AGPL, I also read about Service as a Software Substitute (SaaSS) and this led to The Javascript Trap. I don't consider LAVA to be SaaSS although it is Software as a Service (SaaS). (Distinguishing between those is best left to the GNU document as it is an almighty tangle at times.) I did look at the GNU ideas for licensing Javascript but it seems cumbersome and unnecessary - a protocol designed for the specific purposes of their own service rather than as a solution which could be readily adopted by all such services. The same problems affect trying to untangle sharing the test job data within LAVA.
Adding Licence text

The traditional way, of course, is simply to add twenty lines or so of comments at the top of every file. This works nicely for source code because the comments are hidden from the final UI (unless an explicit reference is made in the --help output or similar). It is less nice for human readable submissions where the first thing someone has to do is scroll past the comments to get to what they want to see. At that point, it starts to look like a popup or a nagging banner - blocking the requested content on a website to try and get the viewer to subscribe to a newsletter or pay for the rest of the content. Let's not actively annoy visitors who are trying to get things done.
Adding Licence files

This can be done in the remote version control repository - then a single line in the submitted file can point at the licence. This is how I'm seeking to solve the problem of our own repositories. If the reference URL is included in the metadata of the test job submission, it can even be linked into the test job metadata and made available to everyone through the results API.
metadata:
  licence.text: http://mysite/lava/git/COPYING
  licence.name: BSD 3 clause
Metadata in LAVA test job submissions is free-form but if the example was adopted as a convention for LAVA submissions, it would make it easy for someone to query LAVA for the licences of a range of test submissions. Currently, LAVA does not store metadata from the test shell definitions except the URL of the git repo for the test shell definition but that may be enough in most cases for someone to find the relevant COPYING or LICENCE file.
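To illustrate why a convention helps: once submissions carry such metadata, collecting the declared licences across a set of jobs becomes trivial. A hypothetical post-processing sketch; the sample data and the helper function are invented for illustration and are not part of LAVA's API:

```python
# Metadata dictionaries as they might come back from a results query;
# invented sample data following the licence.* convention above.
jobs = [
    {"licence.name": "BSD 3 clause",
     "licence.text": "http://mysite/lava/git/COPYING"},
    {"licence.name": "MIT"},
    {},  # a submission that declares no licence
]

def declared_licences(metadata_list):
    """Return the distinct licence names declared in the metadata."""
    names = {m.get("licence.name") for m in metadata_list}
    names.discard(None)
    return sorted(names)

print(declared_licences(jobs))  # ['BSD 3 clause', 'MIT']
```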
Which licence?

This could be a problem too. If users contribute data under unfriendly licences, what is LAVA to do? I've used the BSD 3 clause licence in the above example as I expect it to be the most commonly used licence for these contributions. A copyleft licence could be used, although doing so would require additional metadata in the submission to declare how to contribute back to the original author (because that author is usually not a member of the LAVA project).
Why not Creative Commons?

Although I'm referring to these contributions as data, these are not pieces of prose or images or audio. These are instructions (with comments) for a specific piece of software to execute on behalf of the user. As such, these objects must comply with the schema and syntax of the receiving service, so a code-based licence would seem correct.
Results

Finally, a word about what comes back from your data submission - the results. This data cannot be restricted by any licence affecting either the submission or the software; it can be restricted using the API or left as the default of public access. If the results and the submission data really are private, then the solution is to take advantage of the AGPL, take the source code of LAVA and run it internally where the entire service can be placed within a firewall.
What happens next?
  1. Please consider editing your own LAVA test job submissions to add licence metadata.
  2. Please use comments in your own LAVA test job submissions, especially if you are using some form of template engine to generate the submission. This data will be used by others, it is easier for everyone if those users do not have to ask us or you about why your test job does what it does.
  3. Add a file to your own repositories containing LAVA test shell definitions to declare how these files can be shared freely.
  4. Think about other services to which you submit data which is either only partially machine generated or which is entirely human created. Is that data free-form or are you essentially asking the service to do a precise task on your behalf as if you were programming that server directly? (Jenkins is a classic example, closely related to LAVA.)
    • Think about how much developer time was required to create that submission and how the service publishes that submission in ways that allow others to copy and paste it into their own submissions.
    • Some of those submissions can easily end up in documentation or other published sources which will need to know about how to licence and distribute that data in a new format (i.e. modification.) Do you intend for that useful purpose to be defeated by releasing your data under All Rights Reserved?
Contact

I don't enable comments on this blog but there are enough ways to contact me and the LAVA project in the body of this post; it really shouldn't be a problem for anyone to comment.

17 July 2016

Neil Williams: Deprecating dpkg-cross

Deprecating the dpkg-cross binary

After a discussion in the cross-toolchain BoF at DebConf16, the gross hack which is packaged as the dpkg-cross binary package and its supporting perl module has finally been deprecated, long after multiarch was actually delivered. Various reasons have complicated the final steps for dpkg-cross and there remains one use for some of the files within the package, although not for the dpkg-cross binary itself. 2.6.14 has now been uploaded to unstable and introduces a new binary package, cross-config, so it will spend some time in NEW. The changes are summarised in the NEWS entry for 2.6.14.
The cross architecture configuration files have moved to the new cross-config package and the older dpkg-cross binary with supporting perl module are now deprecated. Future uploads will only include the cross-config package. Use cross-config to retain support for autotools and CMake cross-building configuration. If you use the deprecated dpkg-cross binary, now is the time to migrate away from these path changes. The dpkg-cross binary and the supporting perl module should NOT be expected to be part of Debian by the time of the Stretch release.
2.6.14 also marks the end of my involvement with dpkg-cross. The Uploaders list has been shortened but I'm still listed to be able to get 2.6.14 into NEW. A future release will drop the perl module and the dpkg-cross binary, retaining just the new cross-config package.

3 May 2016

Neil Williams: Moving to Pelican

Prompted by Tollef moving to Hugo, I investigated a replacement blog engine. The former site used Wordpress, which is just overhead - my blog doesn't need to be generated on every view, and it doesn't need the security implications of yet another website login and admin interface either. The blog is static, so I've been looking at static generators. I didn't like the look of Hugo and wanted something where the syntax was familiar - so either Jinja2 or ReST. So I've chosen Pelican, with the code living in a private git repo, naturally. I wanted a generator that was supported in Jessie. I first tried nikola but it turns out that nikola in jessie has syntax changes. I looked at creating backports but then there is a new upstream release which adds a python module not yet in Debian, so that would be an extra amount of work. Hopefully this won't flood planet - I've gone through the RSS content to update timestamps but the URLs have changed.

6 February 2016

Neil Williams: lava.debian.net

With thanks to Iain Learmonth for the hardware, there is now a Debian instance of LAVA available for use and the Debian wiki page has been updated. LAVA is a continuous integration system for deploying operating systems onto physical and virtual hardware for running tests. Tests can be simple boot testing, bootloader testing and system level testing. Extra hardware may be required for some system tests. Results are tracked over time and data can be exported for further analysis. LAVA has a long history of supporting continuous integration of the Linux kernel on ARM devices (ARMv7 & ARMv8). So if you are testing a Linux kernel image on armhf or arm64 devices, you will find a lot of similar tests already running on the other LAVA instances. The Debian LAVA instance seeks to widen that testing in a couple of ways: This instance relies on the latest changes in lava-server and lava-dispatcher. The 2016.2 release has now deprecated the old, complex dispatcher and a whole new pipeline design is available. The Debian LAVA instance is running 2015.12 at the moment; I'll upgrade to 2016.2 once the packages migrate into testing in a few days and I can do a backport to jessie.

What can LAVA do for Debian?

ARMMP kernel testing: unreleased builds, experimental initramfs testing. This is the core of what LAVA is already doing behind the scenes of sites like http://kernelci.org/.

U-Boot ARM testing: this is what fully automated LAVA labs have not been able to deliver in the past, at least without a usable SD Mux.

What's next? LOTS. This post actually got published early (distracted by the rugby); I'll update things more in a later post. Contact me if you want to get involved; I'll provide more information on how to use the instance and how to contribute to the testing in due course.

2 January 2016

Lunar: Reproducible builds: week 33 in Stretch cycle

What happened in the reproducible builds effort between December 6th and December 12th:

Toolchain fixes

Reiner Herrmann rebased our experimental version of doxygen on version 1.8.9.1-6. Chris Lamb submitted a patch to make the manpages generated by ruby-ronn reproducible by using the locale-agnostic %Y-%m-%d format for the dates. Daniel Kahn Gillmor took another shot at the issue of source paths captured in DWARF symbols. A patch has been sent for review by GCC upstream to add the ability to read an environment variable with -fdebug-prefix-map.

Packages fixed

The following 24 packages have become reproducible due to changes in their build dependencies: gkeyfile-sharp, gprbuild, graphmonkey, gthumb, haskell-yi-language, ion, jackson-databind, jackson-dataformat-smile, jackson-dataformat-xml, jnr-ffi, libcommons-net-java, libproxy, maven-shared-utils, monodevelop-database, mydumper, ndesk-dbus, nini, notify-sharp, pixz, protozero, python-rtslib-fb, slurm-llnl, taglib-sharp, tomboy-latex. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues, but not all of them: These uploads might have fixed reproducibility issues but could not be tested yet: Patches submitted which have not made their way to the archive yet:

reproducible.debian.net

Files created with diffoscope now have diffoscope in their name instead of debbindiff. (h01ger) Hostnames of the first and second build nodes are now recorded and shown in the build history. (Mattia Rizzolo) Exchanges have started with F-Droid developers to better understand what would be required to test F-Droid applications. (h01ger) A first small set of Fedora 23 packages is now also being tested, while development on a new framework for testing RPMs in general has begun. A new Jenkins job has been added to set up mock, the build system used by Fedora. Another new job takes care of testing RPMs from Fedora 23 on x86_64.
So far only 151 packages from the buildsys-build group are tested (currently all unreproducible), but the plan is to build all 17,000 source packages in Fedora 23 and rawhide. The page presenting the results should also soon be improved. (h01ger, Dhiru Kholia) For Arch Linux, all 2223 packages from the extra repository will also be tested from now on. Packages in "extra" are tested every four weeks, while those from "core" every week. Statistics are now displayed alongside the results. (h01ger) jenkins.debian.net has been updated to jenkins-job-builder version 1.3.0. Many job configurations have been simplified and refactored using features of the new version. This was another milestone for the jenkins.debian.org migration. (Phil Hands, h01ger)

diffoscope development

Chris Lamb announced try.diffoscope.org: an online service that runs diffoscope on user-provided files. (Screenshot: try.diffoscope.org.) Improvements are welcome. The application is licensed under the AGPLv3. On diffoscope itself, most pending patches have now been merged. Expect a release soon! Most of the code implementing parallel processing has been polished. Sadly, unpacking archives is CPU-bound in most cases, so the current thread-only implementation does not offer much gain on big packages. More work is still required to also add concurrent processes.

Documentation update

Ximin Luo has started to write a specification for buildinfo files that could become a larger platform than the limited set of features thought of so far for Debian's .buildinfo.

Package reviews

113 reviews have been removed, 111 added and 56 updated in the previous week. 42 new FTBFS bugs were opened by Chris Lamb and Niko Tyni. New issues identified this week: timestamps_in_documentation_generated_by_docbook_dbtimestamp, timestamps_in_sym_l_files_generated_by_malaga, timestamps_in_edj_files_generated_by_edje_cc.

Misc.

Chris Lamb presented reproducible builds at skroutz.gr.
