
8 February 2022

Neil Williams: Django Model Mommy moving to Model Bakery

Some Django applications use Model Mommy in unit tests: https://tracker.debian.org/pkg/python-model-mommy

Upstream, Model Mommy has been renamed Model Bakery: https://model-bakery.readthedocs.io/en/latest/
Model Bakery is a rename of the legacy model_mommy project. This is because the project's creator and maintainers decided to not reinforce gender stereotypes for women in technology. You can read more about this subject here.
Hence: https://bugs.debian.org/1005114 and https://ftp-master.debian.org/new/python-model-bakery_1.4.0-1.html

So this is a heads-up to all those using Debian for their Django unit tests. Model Mommy will no longer get updates upstream, so it will not be able to support Django 4. Updates will only be done, upstream, in the Model Bakery package, which already supports Django 4.

Model Bakery is not a drop-in replacement, but it includes a helper script to migrate: https://salsa.debian.org/python-team/packages/python-model-bakery/-/blob/master/utils/from_mommy_to_bakery.py This is being packaged in /usr/share/ in the upcoming python3-model-bakery package.

It is a tad confusing that model-mommy is at version 1.6.0 while model-bakery is at version 1.4.0, but that only reinforces that Django apps using Model Mommy will need editing to move to Model Bakery. I'll be using the migration script for a Freexian Django app which currently uses Model Mommy.

Once Model Bakery reaches bookworm, I plan to do a backport to bullseye. Then I'll file a bug against Model Mommy. The severity of that bug will be increased when Django 4 enters unstable (the 4.0 dev release is currently in experimental but there is time before the 4.2 LTS is uploaded to unstable). https://packages.debian.org/experimental/python3-django Model Bakery is in NEW and on Salsa.

Update: Django 4.2 LTS would be the first Django 4 release in unstable.
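To give a feel for what the migration involves, here is a simplified, hypothetical illustration of the kind of rename that from_mommy_to_bakery.py automates (the real script handles more cases, such as renaming recipe files; the renames shown are just the best-known API changes):

```python
# Simplified illustration of the mommy -> bakery renames; the real
# from_mommy_to_bakery.py script covers more cases than this.
RENAMES = [
    ("from model_mommy import mommy", "from model_bakery import baker"),
    ("mommy.make", "baker.make"),
    ("mommy.prepare", "baker.prepare"),
]

def mommy_to_bakery(source: str) -> str:
    """Apply the mommy -> bakery renames to one file's source text."""
    for old, new in RENAMES:
        source = source.replace(old, new)
    return source

before = "from model_mommy import mommy\nobj = mommy.make('app.Profile')\n"
print(mommy_to_bakery(before))
```

Running the real script over a test suite, then running the tests, is the sensible way to check nothing else needs hand-editing.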

11 December 2021

Neil Williams: Diversity and gender

As a follow-on to a previous blog entry of mine, Free and Open, I feel it worthwhile to do my bit to dismantle the pseudo-science and over-simplification in the idea that gender is binary at a biological level.
TL;DR: Science simply does not support binary sexes or binary genders. Truth is a bit more complicated.
There is certainty and there are binary answers in mathematics. Things get less definitive in physics, certainly as soon as quantum effects are broached. Processes become more of an equilibrium between states in chemistry, never wholly one or the other. Yes, there is the oddity of absolute zero but no experiment has yet achieved that fully.

It is accurate to describe physics as a development of applied mathematics and to view chemistry as applied physics. Biology, at the biochemical level, is applied chemistry. The sciences build on each other, "on the shoulders of giants", but at each level some certainty is lost, some amount of uncertainty is expanded and measurements become probabilities, proportions and percentages. Biology is dependent on biochemistry - chemistry is how a biological change results in a different organism. Physics is how that chemical change occurs - temperature, pressure and physical states are inherent to all chemical changes.

Outside laboratory constraints, few chemical reactions, especially in organic chemistry, produce one and only one result from two or more known reagents. In biology, everyone is familiar with genetic mutations, but a genetic mutation only happens because a biochemical reaction (hydrogen bonding of nucleobases) does not always produce the expected result. Every cell division, every viral infection, there is a finite probability that a change will occur. It might be a small number but it is never zero and can never be dismissed.

This is obvious in the current Covid pandemic - genetic mutations result in new variants. Some variants are inviable, some variants produce no net change in the way that the viral particles infect adjacent cells. Sometimes, a mutation happens that changes everything. These mutations are not mistakes - these are simply changes with undetermined outcomes.
Genetic changes are the foundation of biodiversity and variety is what allows lifeforms of all kinds to survive changes in environmental factors and/or changes in prevalent diseases. It is precisely the same in humans, particularly in one of the principal spheres of human life that involves replicating genetic material - the creation of gametes for sexual reproduction.

Every single time any DNA is copied, there is a finite chance that a different base will be put in place compared to the original. Copying genetic material is therefore non-binary. Given precisely the same initial conditions, the result is not always predictable and the range of how the results vary from one to another increases with every iteration. Let me stress that - at the molecular level, no genetic operation in any biological lifeform has a truly binary result. Repeat that operation sufficiently often and an unexpected result WILL inevitably occur. It is a mathematical certainty that genetic changes will arise by attempting precisely the same genetic operation enough times.

Genetic changes are fundamental to how lifeforms survive changing conditions. Life would likely have died out a long time ago on this planet if every genetic operation was perfect. Diversity is life. Similarity leads to extinction.

Viral load is interesting at this point. Someone can be infected with a virus, including coronavirus, by encountering a small number of viral particles. For some viruses, it may be a few hundred; some viruses may need a few thousand particles to infect a vulnerable host. But here's the thing: for that host to be at risk of infecting another host, the virus needs the host to produce billions upon billions of copies of the virus by taking over the genetic machinery within a huge number of cells in the host. This, as is accepted with Covid, is before the virus has been copied enough times to produce symptoms in the host. Before those symptoms become serious, billions more copies will be made.
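The "mathematical certainty" of eventual change can be made concrete with a little arithmetic: if each copy operation independently has a change probability p, the chance of at least one change across n operations is 1 - (1 - p)^n, which climbs towards 1 as n grows. A quick sketch (the probabilities used are illustrative, not measured mutation rates):

```python
# Probability that at least one copy, out of n independent copy operations,
# differs from the original, given a per-copy change probability p.
# The numbers below are illustrative, not measured mutation rates.
def p_any_change(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(p_any_change(1e-9, 1_000))           # still tiny for a thousand copies
print(p_any_change(1e-9, 10_000_000_000))  # near-certain for ten billion
```

Even a one-in-a-billion chance per copy becomes effectively certain at the scale of billions of viral replications within a single host.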
The numbers become unimaginable - and that is within a single host, let alone the 265 million (and counting) hosts in the current Covid19 pandemic. It's also no wonder that viral infections cause tiredness; the infection is diverting huge resources to propagating itself - before even considering the activity of the immune system. It is idiocy of the highest order to expect all those copies to be identical. The rise of variants is inevitable - indeed essential - in all spheres of biology.

A single viral particle is absolutely no threat of any kind - it must first get inside and then copy the genetic information in a host cell. This is where the complexity lies in the definition of life itself. A virus can be considered a lifeform but it is only able to reproduce using another, more complex, lifeform. In truth, a viral particle does not and cannot mutate. The infected host mutates the virus. The longer it takes that host to clear the infection, the more mutations that host will create and then potentially spread to others.

Now apply this to the creation of gametes in humans. With seven billion humans, the amount of copying of genetic material is not as large as in the pandemic, but it is still easy for everyone to understand that children do not merely combine the DNA of both parents. Changes happen. Human sexual reproduction is not as simple as 1 + 1 = 2. Sometimes, the copying of the genetic material produces an unexpected result. Sexual reproduction itself is non-binary. Sexual reproduction is not easy or simple for lifeforms to adopt - the diversity which results from the non-binary operations is exactly why so many lifeforms invest so much energy in reproducing in this way.

Whilst many genetic changes in humans will be benign or beneficial, I'd like to take an example of a genetic disorder that results from the non-binary nature of sex. Humans can be born with the XY phenotype - i.e.
at a genetic level, the individual has the same combination of chromosomes as another XY individual but there are changes within the genes in those chromosomes. We accept this; some children of blonde parents do not have blonde hair, etc. There are also genetic changes where an XY phenotype is not binary.

Some people, who at a genetic level would be almost identical to another person who is genetically male, have a genetic mutation which makes it impossible for the cells of that individual to respond to androgens (testosterone). (See Androgen insensitivity syndrome). Genetically, that individual has an X and a Y chromosome, just like many other individuals. However, due to a change in how the genes on those chromosomes were copied, that individual is biologically incapable of developing the secondary sexual characteristics of a male. At a genetic level, the individual has the XY phenotype of a male. At the physical level, the individual has all the sexual characteristics of a female and none of the sexual characteristics of a male. The gender of that individual is not binary. Treatment is centred on supporting the individual and minimising some risks from the inactive genes on the Y chromosome.

Human sexual reproduction is non-binary. The results of any sexual reproduction in humans will not always produce the binary option of male or female. It is a lie to claim that human gender is binary. The science is in plain view and cannot be ignored. Identifying as non-binary is not a "cop out" - it can be a biological, genetic, scientific fact.

Human sexuality and gender are malleable. Where genetic changes result in symptoms, these can be ameliorated by treatment with human sex hormones, like oestrogen and testosterone. There are valid medical uses for anabolic steroids and hormone replacement therapies to help individuals who, at a genetic level, have non-binary gender.
These treatments can help align the physical outer signs with the personality and identity of the individual, whether with or without surgery. It is unacceptable to abandon such people to suffer lifelong discrimination and harassment by imposing a binary definition that has no basis in science.

When a human being has an XY phenotype, that human being is not necessarily male. That individual will be on a spectrum from female (left unaffected by sex hormones in the womb, the foetus will be female, even with an X and a Y chromosome) to various degrees of male. So, at a genetic, biological level, it is a scientific fact that human beings do not have binary gender.

There is no evidence that this is new to the modern era; there is no scientific basis for thinking that copying of genetic material was somehow perfectly reliable in earlier history, or that such mutations are specific to homo sapiens. Changes in genetic material provide the diversity to fight infections and adapt to changing environmental factors. Species have gone, and will continue to go, extinct if this diversity is absent.

With that out of the way, it is no longer a stretch to encompass other aspects of human non-binary genders beyond the known genetic syndromes based on changes in the XY phenotype. Science has not uncovered all of the ways that genes affect personality, behaviour, or identity. How other, less studied, genetic changes affect the much more subtle human facets, especially anything to do with consciousness, identity, personality, sexuality and behaviour, is guesswork. All of these facets can be, and likely are being, affected by genetic factors as well as environmental factors in an endless range of permutations. Personality traits are a beautiful and largely unknowable blend of genes and environment. Genetic information has a finite probability of changes at each and every iteration. Environmental factors are more akin to chaos theory.
The idea that the results will fit into binary constructs is laughable.

Human society puts huge emphasis on societal norms. Individuals who do not fit into those norms suffer discrimination. The norms themselves have evolved over time as a response to various influences on human civilisation but most are not based on science. It is up to all humans in that society to call out discrimination, to call for changes in the accepted norms and to support those who are marginalised. It is a precarious balance, one that humans rarely get right, but it must be based on an acceptance that variation is the natural state. Artificial constraints, like binary genders, must be dismantled because human beings and human sexual reproduction are not binary.

To those who think, "well it is for 99%", think again about Covid. 99% (or closer to 98%) of infected humans recover without notable after-effects. That has still crippled the nations of the globe and humbled all those who tried to deny it. Five million human beings are dead because "most infected people recover". Just because something only affects a proportion of human beings does not invalidate the suffering of those humans and the discrimination that those humans will face. Societal norms are not necessarily correct. Religious and other influences typically obscure and ignore scientific fact and undermine human kindness.

The scientific truth of life on this planet is that gender is not binary. The more complex the lifeform, the more factors will affect where on the spectrum any one individual will appear. Just because we do not yet fully understand how genes affect human personality and sexuality does not invalidate the science that variation is the natural order. My previous blog about diversity is not just about male vs female, one nationality vs another, one ethnicity compared to another. Diversity is diverse. Diversity requires accepting that every facet of humanity is subject to variation.
That leads to tension at times; it is inevitable. Tension against societal norms, tension against discrimination, tension around those individuals who would abuse the tolerance of others for their own gratification or from their own ignorance. None of us are perfect, none of us have any of this fully sorted and all of us will make mistakes.

Personally, I try to respect those around me. I will use whatever pronouns and other conventions the person requests, from their perspective and not mine. To do otherwise is to deny the natural order and to deny the science. Celebrate all diversity, it is the very stuff of life.

The discussions around (typically female) bathroom facilities often miss the point. The concern is not about individuals who describe themselves as non-binary. The concern is about individuals who are fully certain of their own sexuality and who act as sexual predators for their own gratification. These people are acting out a lie for their own ends. The problem people are the predators, so stop blaming the victims, who are just as at risk as anyone else who identifies as female. Maybe the best people to spot such predators are those who are non-binary, who have had to pretend to fit into societal norms. Just as travel can be a good antidote to racism, openness and discussion can be a tool to undermine the lies of sexual predators and reassure those who are justifiably fearful.

There can never be a biological binary test of gender; there can never be any scientific justification for binary division of facilities. Humanity itself is not binary - even life itself has blurry borders around comas, suspended animation and locked-in syndrome. Legal definitions of human death vary around the world. The only common thread I have ever found is: be kind to each other.

If you find anything above objectionable, then I can only suggest that you reconsider the science and learn to be kind to your fellow humans. None of us are getting out of this alive.
References:

- I Think You'll Find It's a Bit More Complicated Than That - Ben Goldacre, ISBN 978-0-00-750514-2, https://www.amazon.co.uk/dp/B00HATQA8K/
- https://en.wikipedia.org/wiki/Androgen_insensitivity_syndrome
- https://www.bbc.co.uk/news/world-51235105
- https://en.wikipedia.org/wiki/Nucleobase

My degree is in pharmaceutical sciences and I practised community and hospital pharmacy for 20 years before moving into programming. I have direct experience of supporting people who were prescribed hormones to transition their physical characteristics to match their personal identity. I had a Christian upbringing, but my work showed me that those religious norms were incompatible with being kind to others, so I rejected religion and I now consider myself a secular humanist.

19 November 2021

Neil Williams: git worktrees

A few scenarios have been problematic with git and I've now discovered git worktrees, which help with each.

You could go to the trouble of making a new directory and re-cloning the same tree. However, a local commit in one clone is then not accessible to the other. You could commit everything every time, but with a dirty tree that involves sorting out the .gitignore rules as well - which could well be pointless with an experimental change.

Git worktrees allow multiple working directories from a single git repository. Commits on any branch are visible from every worktree, even when the commit was made in a different worktree. This makes things like cherry-picking easy, without needing to push pointless changes or branches. Branches in a worktree can be rebased as normal, with the benefit that commit hashes from other local changes are available for reference and cherry-picks.

I'm sure git worktrees are not new. However, I've only started using them recently and others have asked how a worktree operates.

Creating a new tree can be done with a new or existing branch. To make it easier, set the new directory at the same time, usually in ../

New branch (branched from the current branch):
git worktree add -b branch_name ../branch_name
Existing branch - note, slightly different syntax here, specify the commit-ish last (branch name, tag or hash):
git worktree add ../branch_name branch_name
git worktree list
/home/neil/Documents/testing/testrepo        0612677 [master]
/home/neil/Documents/testing/testtree        d38f5a3 [testtree]
Use git worktree remove <name> to drop the entire directory for that tree and the git tracking.

I'm using this for work on the Debian Security Tracker. I have two local branches and having two worktrees allows me to have three terminals open, using the same files and the same git repository:

- One to run make serve and update the local SQLite database.
- One to access master, to run git pull.
- One to make local changes without risking collisions on master.
git add data/CVE/list
git commit
# pre commit hook runs here
git log -n 1
# copy the hash
# switch to master terminal
git pull
git cherry-pick <HASH>
git push
# switch to server terminal
git rebase master
# no git pull or fetch, it's all local
make
# switch back to changes terminal
git rebase master
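For anyone wanting to reproduce that kind of setup, the mechanics boil down to one clone plus one extra worktree per long-lived local branch. A sketch using a scratch repository (the paths and branch names here are illustrative, not the real security-tracker layout):

```shell
set -e
# scratch repository standing in for the real clone
tmp=$(mktemp -d) && cd "$tmp"
git init -q security-tracker && cd security-tracker
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# add a second working directory on its own branch, alongside the clone;
# commits made there are immediately visible here (and vice versa)
git worktree add ../security-tracker-changes -b local-changes

git worktree list   # shows both directories with their checked-out branches
```

The same repository then backs both directories, so hashes from the local-changes worktree can be cherry-picked onto master without any push or fetch.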
Sadly, one area where this isn't as easy is importing a new DSC into Salsa with git-buildpackage, as that uses several branches at the same time. It would be possible, but you'll need separate upstream (and possibly pristine-tar) worktrees and to supply the relevant options. Possibly something for git-buildpackage to adopt - it is common to need to make changes to the packaging with a new upstream release, and a lot of those changes are currently done outside git. For the rest of the support, see git-worktree(1).

10 November 2021

Neil Williams: LetsEncrypt with Apache, Gunicorn and Debian Bullseye

This took me far too long to identify and debug, so I'm going to write it up here for my own reference and to possibly help others.
Background

Upgrading an old codebase from Python2 on Buster to Python3 ready for Bullseye, and from Django1 to Django2 (prepared for Django3). Everything is fine at this stage - the Django test server is happy with HTTP and it gives enough support to do the actual code changes to get to Python3. All well and good so far.

The main purpose of this particular code was to support payments, so a chunk of the testing cannot be done without HTTPS, which is where things got awkward. This particular service needs HTTPS using LetsEncrypt and Apache2. To support Django, I typically use Gunicorn. All of this works with HTTP.

Moving to HTTPS was easy to test using the default-ssl virtual host that comes with Apache2 in Debian. It's a static page and it worked well with https. The problems all start when trying to use this known-working HTTPS config with the other Apache virtual host to add support for the gunicorn proxy.
Apache reverse proxy AH00898 Error during SSL Handshake with remote server
Investigating

Now that I know why this happened, it's easier to see what was happening. At the time, I was swamped in a plethora of options and permutations between the Django HTTPS options and the Apache VirtualHost SSL and proxy commands. Going through all of those took up huge amounts of time, all in the wrong area.

In previous configurations using packages in Buster, gunicorn could simply run on http://localhost:8000 and Apache would proxy that as https. With the versions in Bullseye, this no longer works: the handover from https in Apache to http in the proxy is where it fails. Apache is using HTTPS because the LetsEncrypt certificates, created using dehydrated, are specified in the VirtualHost configuration. To fix the handshake error, the proxy server needs to know about the certificates created by dehydrated as well.
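For reference, the relevant VirtualHost directives end up looking something like this - a minimal sketch, where the ServerName and certificate paths are placeholders; and because the certificate names the public site rather than localhost, relaxing the peer-name check on the proxy connection may be needed, depending on your certificate's names:

```apache
<VirtualHost *:443>
    ServerName site.example.org
    SSLEngine on
    SSLCertificateFile /var/lib/dehydrated/certs/site/cert.pem
    SSLCertificateKeyFile /var/lib/dehydrated/certs/site/privkey.pem

    # the backend now speaks https, so Apache must use TLS to reach it
    SSLProxyEngine on
    # the certificate names the public site, not localhost
    SSLProxyCheckPeerName off
    ProxyPass / https://localhost:8000/
    ProxyPassReverse / https://localhost:8000/
</VirtualHost>
```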
Gunicorn needs the certificates

The clue is in the gunicorn help:
--keyfile FILE        SSL key file [None]
--certfile FILE       SSL certificate file [None]
The final part of the puzzle is that the certificates created by dehydrated are in a private location:
drwx------ 2 root root /var/lib/dehydrated/certs/
To test gunicorn, this will mean using sudo, but that's just a step towards running gunicorn as a systemd service (when access to the certs will not be a problem). Starting gunicorn using these options shows the proxy now being available at https://localhost:8000, which is a subtle but very important change.
Environment=LOGLEVEL=DEBUG WORKERS=4 LOGFILE=/var/log/gunicorn/site.log
ExecStart=/usr/bin/gunicorn3 site.wsgi --log-level $LOGLEVEL --log-file $LOGFILE --workers $WORKERS \
--certfile /var/lib/dehydrated/certs/site/cert.pem \
--keyfile /var/lib/dehydrated/certs/site/privkey.pem
The specified locations are symbolic links created by dehydrated to cope with regular updates of the certificates using cron.

8 October 2021

Neil Williams: Using Salsa with contrib and non-free

OK, I know contrib and non-free aren't popular topics to many but I've had to sort out some simple CI for such contributions and I thought it best to document how to get it working. You will need access to the GitLab Settings for the project in Salsa - or ask someone to add some CI/CD variables on your behalf. (If CI isn't running at all, the settings will need to be modified to enable debian/salsa-ci.yml first, in the same way as packages in main). The default Salsa config (debian/salsa-ci.yml) won't get a passing build for packages in contrib or non-free:
# For more information on what jobs are run see:
# https://salsa.debian.org/salsa-ci-team/pipeline
#
---
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml
Variables need to be added. piuparts can use the extra contrib and non-free components directly from these variables.
variables:
   RELEASE: 'unstable'
   SALSA_CI_COMPONENTS: 'main contrib non-free'
Many packages in contrib and non-free only support amd64 - so the i386 build job needs to be removed from the pipeline by extending the variables dictionary:
variables:
   RELEASE: 'unstable'
   SALSA_CI_COMPONENTS: 'main contrib non-free'
   SALSA_CI_DISABLE_BUILD_PACKAGE_I386: 1
The extra step is to add the apt source file variable to the CI/CD settings for the project. The CI/CD settings are at a URL like:
https://salsa.debian.org/<team>/<project>/-/settings/ci_cd
Expand the section on Variables and add a "File" type variable:
Key: SALSA_CI_EXTRA_REPOSITORY
Value: deb https://deb.debian.org/debian/ sid contrib non-free
The pipeline will run at the next push - alternatively, the CI/CD pipelines page has a "Run Pipeline" button. The settings added to the main CI/CD settings will be applied, so there is no need to add a variable at this stage. (This can be used to test the variables themselves but only with manually triggered CI pipelines.) For more information and additional settings (for example disabling or allowing certain jobs to fail), check https://salsa.debian.org/salsa-ci-team/pipeline
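As an example of disabling other jobs, further pipeline stages can be toggled with the same pattern of variables. The exact variable names are defined by the salsa-ci-team pipeline, so check its documentation for the current list; SALSA_CI_DISABLE_REPROTEST below is illustrative:

```yaml
variables:
   RELEASE: 'unstable'
   SALSA_CI_COMPONENTS: 'main contrib non-free'
   SALSA_CI_DISABLE_BUILD_PACKAGE_I386: 1
   # skip a further job that cannot work for this package
   SALSA_CI_DISABLE_REPROTEST: 1
```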

27 March 2021

Neil Williams: Free and Open

A long time, no blog entries. What's prompted a new missive? Two things:

All the time I've worked with software and computers, the lack of diversity in the contributors has irked me. I started my career in pharmaceutical sciences where the mix of people, at university, would appear to be genuinely diverse. It was in the workplace that the problems started, especially in the retail portion of community pharmacy, with a particular gender imbalance between management and counter staff.

Then I started to pick up programming. I gravitated to free software for the simple reason that I could tinker with the source code. To do a career change and learn how to program with a background in a completely alien branch of science, the only realistic route into the field is to have access to the raw material - source code. Without that, I would have been seeking a sponsor, in much the same way as other branches of science need grants or sponsors to purchase the equipment and facilities to get a foot in the door. All it took from me was some time, a willingness to learn and a means to work alongside those already involved. That's it. No complex titration equipment, flasks, centrifuges, pipettes, spectrographs, or petri dishes. It's hard to do pharmaceutical science from home. Software is so much more accessible - but only if the software itself is free.

Software freedom removes the main barrier to entry. Software freedom enables the removal of other barriers too. Once the software is free, it becomes obvious that the compiler (indeed the entire toolchain) needs to be free, to turn the software into something other people can use. The same applies to interpreters, editors, kernels and the entire stack. I was in a different branch of science whilst all that was being created and I am very glad that Debian Woody was available as a free cover disc on a software magazine just at the time I was looking to pick up software programming. That should be that.
The only next step to being good enough to write free software was the "means to work alongside those already involved". That, it turns out, is much more than just a machine running Debian. It's more than just having an ISP account and working email (not commonplace in 2003). It's working alongside the people - all the people.

It was my first real exposure to the toxicity of some parts of many scientific and technical arenas. Where was the diversity? OK, maybe it was just early days and the other barriers (like an account with an ISP) were prohibitive for many parts of the world outside Europe & USA in 2003, so there were few people from other countries, but the software world was massively dominated by the white Caucasian male. I'd been insulated by my degree course, and to a large extent by my university, which also had courses with much more diverse intakes - optics and pharmacy, business and human resources. Relocating from the insular world of a little town in Wales to Birmingham was also key.

Maybe things would improve as the technical barriers to internet connectivity were lowered. Sadly, no. The echo chamber of white Caucasian input has become more and more diluted as other countries have built the infrastructure to get the populace online. Debian has helped in this area, principally via DebConf. Yet only the ethnicity seemed to change, not the diversity.

Recently, more is being done to at least make Debian more welcoming to those who are brave enough to increase the mix. Progress is much slower than the gains seen in the ethnicity mix, probably because that was a benefit of a technological change, not a societal change. The attitudes so prevalent in the late 20th century are becoming less prevalent amongst, and increasingly abhorrent to, the next generation of potential community members. Diversity must come or the pool of contributors will shrink to nil.
Community members who cling to these attitudes are already dinosaurs and increasingly unwelcome. This is a necessary step to retain access to new contributors as existing contributors age. To be able to increase the number of contributors, the community cannot afford to be dragged backwards by anyone, no matter how important or (previously) respected.

Debian, or even free software overall, cannot change all the problems with diversity in STEM, but we must not perpetuate the problems either. Those people involved in free software need to be open to change, open to input from all portions of society, and welcoming. Puerile jokes and disrespectful attitudes must be a thing of the past. The technical contributions do not excuse behaviours that act to prevent new people joining the community.

Debian is getting older, the community and the people. The presence of spouses at Debian social events does not fix the problem of the lack of diversity at Debian technical events. As contributors age, Debian must welcome new, younger people to continue the work. All the technical contributions from the people during the 20th century will not sustain Debian in the 21st century. Bit rot affects us all. If the people who provided those contributions are not encouraging a more diverse mix to sustain free software into the future, then all their contributions will be for nought and free software will die.

So it comes to the FSF and RMS. I did hear Richard speak at an event in Bristol many years ago. I haven't personally witnessed the behavioural patterns that have been described by others, but my memories of that event only add to the credibility of those accounts. No attempts to be inclusive; jokes that focused on division and perpetuated the white male echo chamber. I'm not perfect, I struggle with some of this at times. Personally, I find the FSFE and Red Hat statements much more in line with my feelings on the matter than the open letter which is the basis of the GR.
I deplore the preliminary FSF statement on governance as closed-minded, opaque and archaic. It only adds to my concerns that the FSF is not fit for the 21st century. The open letter has the advantage that it is a common text which has the backing of many in the community, individuals and groups, who are fit for purpose and whom I have respected for a long time.

Free software must be open to contributions from all who are technically capable of doing the technical work required. Free software must equally require respect for all contributors, and technical excellence is not an excuse for disrespect. Diversity is the life blood of all social groups and, without social cohesion, technical contributions alone do not support a community. People can change, and apologies are welcome when accompanied by modified behaviour.

The issues causing a lack of diversity in Debian are complex and mostly reflective of a wider problem with STEM. Debian can only survive by working to change those parts of the wider problem which come within our influence. Influence is key here; this is a soft issue, replete with unreliable emotions, unhelpful polemic and complexity. Reputations are hard won and easily blemished. The problems are all about how Debian looks to those who are thinking about where to focus in the years to come. It looks like the FSFE should be the ones to take the baton from the FSF - unless the FSF can adopt a truly inclusive and open governance.

My problem is not with RMS himself, what he has or has not done, which apologies are deemed sincere and whether behaviour has changed. He is one man and his contributions can be respected even as his behaviour is criticised. My problem is with the FSF, for behaving in a closed, opaque and divisive manner and then making a governance statement that makes things worse, whilst purporting to be transparent. Institutions lay the foundations for the future of the community and must be expected to hold individuals to account.
Free and open have been contentious concepts for the FSF, with all the arguments about Open Source which is not Free Software. It is clear that the FSF do understand the implications of freedom and openness. It is absurd to then adopt a closed and archaic governance. A valid governance model for the FSF would never have allowed RMS back onto the board, instead the FSF should be the primary institution to hold him, and others like him, to account for their actions. The FSF needs to be front and centre in promoting diversity and openness. The FSF could learn from the FSFE. The calls for the FSF to adopt a more diverse, inclusive, board membership are not new. I can only echo the Red Hat statement:
in order to regain the confidence of the broader free software community, the FSF should make fundamental and lasting changes to its governance.
...
[There is] no reason to believe that the most recent FSF board statement signals any meaningful commitment to positive change.
And the FSFE statement:
The goal of the software freedom movement is to empower all people to control technology and thereby create a better society for everyone. Free Software is meant to serve everyone regardless of their age, ability or disability, gender identity, sex, ethnicity, nationality, religion or sexual orientation. This requires an inclusive and diverse environment that welcomes all contributors equally.
The FSF has not demonstrated the behaviour I expect from a free software institution and I cannot respect an institution which proclaims a mantra of freedom and openness that is not reflected in the governance of that institution. The preliminary statement by the FSF board is abhorrent. The FSF must now take up the offers from those institutions within the community who retain the respect of that community. I'm only one voice, but I would implore the FSF to make substantive, positive and permanent change to the governance, practices and future of the FSF or face irrelevance.

20 May 2017

Neil Williams: Software, service, data and freedom

Free software, free services, but what about your data? I care a lot about free software, not only as a Debian Developer. The use of software as a service matters as well because my principal free software development is on just such a project, licensed under the GNU Affero General Public License version 3. The AGPL helps by allowing anyone who is suitably skilled to install their own copy of the software and run their own service on their own hardware. As a project, we are seeing increasing numbers of groups doing exactly this and these groups are actively contributing back to the project. So what is the problem? We've got an active project, an active community and everything is under a free software licence and regularly uploaded to Debian main. We have open code review with anonymous access to our own source code CI and anonymous access to project planning, open mailing list archives as well as an open bug tracker and a very active IRC channel (#linaro-lava on OFTC). We develop in the open, we respond in the open and we publish frequently (monthly, approximately). The code we write defaults to public visibility at runtime with restrictions available for certain use cases. What else can we be doing? Well, it was a simple question which started me thinking.
The lava documentation has various example test scripts e.g. https://validation.linaro.org/static/docs/v2/examples/test-jobs/qemu-kernel-standard-sid.yaml these have no licence information, we've adapted them for a Linux Foundation project, what licence should apply to these files? Robert Marshall
Those are our own examples, contributed as part of the documentation and covered by the AGPL like the rest of the documentation and the software which it documents, so I replied with the same. However, what about all the other submissions received by the service?
Data Freedom
LAVA acts by providing a service to authenticated users. The software runs your test code on hardware which might not be available to the user or which is simply inconvenient for the test writer to setup themselves. The AGPL covers this nicely. What about the data contributed by the users? We make this available to other users who will, naturally, copy and paste for their own tests. In most cases, because the software defaults to public access, anonymous users also get to learn from the contributions of other test writers. This is a good thing and to be encouraged. (One reason why we moved to YAML for all submissions was to allow comments to help other users understand why the submission does specific things.) Writing a test job submission or a test shell definition from scratch is a non-trivial amount of work. We've written dozens of pages of documentation covering how and how not to do it, but getting a test job to run exactly what the test writer requires can involve substantial effort. (Our documentation recommends using version control for each of these works for exactly these reasons.) At what point do these works become software? At what point do these need licensing? How could that be declared?
Perils of the Javascript Trap approach
When reading up on the AGPL, I also read about Service as a Software Substitute (SaaSS) and this led to The Javascript Trap. I don't consider LAVA to be SaaSS although it is Software as a Service (SaaS). (Distinguishing between those is best left to the GNU document as it is an almighty tangle at times.) I did look at the GNU ideas for licensing Javascript but it seems cumbersome and unnecessary - a protocol designed for the specific purposes of their own service rather than as a solution which could be readily adopted by all such services. The same problems affect trying to untangle sharing the test job data within LAVA.
Adding Licence text
The traditional way, of course, is simply to add twenty lines or so of comments at the top of every file. This works nicely for source code because the comments are hidden from the final UI (unless an explicit reference is made in the --help output or similar). It is less nice for human readable submissions where the first thing someone has to do is scroll past the comments to get to what they want to see. At that point, it starts to look like a popup or a nagging banner - blocking the requested content on a website to try and get the viewer to subscribe to a newsletter or pay for the rest of the content. Let's not actively annoy visitors who are trying to get things done.
Adding Licence files
This can be done in the remote version control repository - then a single line in the submitted file can point at the licence. This is how I'm seeking to solve the problem of our own repositories. If the reference URL is included in the metadata of the test job submission, it can even be linked into the test job metadata and made available to everyone through the results API.
metadata:
  licence.text: http://mysite/lava/git/COPYING
  licence.name: BSD 3 clause
Metadata in LAVA test job submissions is free-form but if the example was adopted as a convention for LAVA submissions, it would make it easy for someone to query LAVA for the licences of a range of test submissions. Currently, LAVA does not store metadata from the test shell definitions except the URL of the git repo for the test shell definition but that may be enough in most cases for someone to find the relevant COPYING or LICENCE file.
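If the convention were adopted, pulling the licence details back out of job metadata would be trivial on the client side. A minimal sketch, assuming the metadata has already been retrieved (e.g. via the results API) as a Python dictionary; the helper name is my own invention, not part of any LAVA API:

```python
def licence_of(metadata):
    """Return the licence declared in LAVA test job metadata, or None.

    Assumes the (hypothetical) convention above: free-form metadata
    carrying 'licence.name' and 'licence.text' keys.
    """
    name = metadata.get("licence.name")
    text = metadata.get("licence.text")
    if name is None and text is None:
        return None
    return {"name": name, "text": text}

# metadata as it might be retrieved for one job via the results API
job = {
    "licence.text": "http://mysite/lava/git/COPYING",
    "licence.name": "BSD 3 clause",
}
print(licence_of(job))
# → {'name': 'BSD 3 clause', 'text': 'http://mysite/lava/git/COPYING'}
```

Running the same helper across a range of submissions would give exactly the kind of licence survey the convention is meant to enable.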
Which licence? This could be a problem too. If users contribute data under unfriendly licences, what is LAVA to do? I've used the BSD 3 clause in the above example as I expect it to be the most commonly used licence for these contributions. A copyleft licence could be used, although doing so would require additional metadata in the submission to declare how to contribute back to the original author (because that is usually not a member of the LAVA project).
Why not Creative Commons? Although I'm referring to these contributions as data, these are not pieces of prose or images or audio. These are instructions (with comments) for a specific piece of software to execute on behalf of the user. As such, these objects must comply with the schema and syntax of the receiving service, so a code-based licence would seem correct.
Results
Finally, a word about what comes back from your data submission - the results. This data cannot be restricted by any licence affecting either the submission or the software, it can be restricted using the API or left as the default of public access. If the results and the submission data really are private, then the solution is to take advantage of the AGPL, take the source code of LAVA and run it internally where the entire service can be placed within a firewall.
What happens next?
  1. Please consider editing your own LAVA test job submissions to add licence metadata.
  2. Please use comments in your own LAVA test job submissions, especially if you are using some form of template engine to generate the submission. This data will be used by others, it is easier for everyone if those users do not have to ask us or you about why your test job does what it does.
  3. Add a file to your own repositories containing LAVA test shell definitions to declare how these files can be shared freely.
  4. Think about other services to which you submit data which is either only partially machine generated or which is entirely human created. Is that data free-form or are you essentially asking the service to do a precise task on your behalf as if you were programming that server directly? (Jenkins is a classic example, closely related to LAVA.)
    • Think about how much developer time was required to create that submission and how the service publishes that submission in ways that allow others to copy and paste it into their own submissions.
    • Some of those submissions can easily end up in documentation or other published sources which will need to know about how to licence and distribute that data in a new format (i.e. modification.) Do you intend for that useful purpose to be defeated by releasing your data under All Rights Reserved?
Contact
I don't enable comments on this blog but there are enough ways to contact me and the LAVA project in the body of this post, it really shouldn't be a problem for anyone to comment.

17 July 2016

Neil Williams: Deprecating dpkg-cross

Deprecating the dpkg-cross binary
After a discussion in the cross-toolchain BoF at DebConf16, the gross hack which is packaged as the dpkg-cross binary package and supporting perl module have finally been deprecated, long after multiarch was actually delivered. Various reasons have complicated the final steps for dpkg-cross and there remains one use for some of the files within the package, although not the dpkg-cross binary itself. 2.6.14 has now been uploaded to unstable and introduces a new binary package cross-config, so will spend some time in NEW. The changes are summarised in the NEWS entry for 2.6.14.
The cross architecture configuration files have moved to the new cross-config package and the older dpkg-cross binary with supporting perl module are now deprecated. Future uploads will only include the cross-config package. Use cross-config to retain support for autotools and CMake cross-building configuration. If you use the deprecated dpkg-cross binary, now is the time to migrate away from these path changes. The dpkg-cross binary and the supporting perl module should NOT be expected to be part of Debian by the time of the Stretch release.
2.6.14 also marks the end of my involvement with dpkg-cross. The Uploaders list has been shortened but I'm still listed to be able to get 2.6.14 into NEW. A future release will drop the perl module and the dpkg-cross binary, retaining just the new cross-config package.

3 May 2016

Neil Williams: Moving to Pelican

Prompted by Tollef, moving to Hugo, I investigated a replacement blog engine. The former site used Wordpress which is just overhead - my blog doesn't need to be generated on every view, it doesn't need the security implications of yet another website login and admin interface either. The blog is static, so I've been looking at static generators. I didn't like the look of Hugo and wanted something where the syntax was familiar - so either Jinja2 or ReST. So, I've chosen Pelican with the code living in a private git repo, naturally. I wanted a generator that was supported in Jessie. I first tried nikola but it turns out that nikola in jessie has syntax changes. I looked at creating backports but then there is a new upstream release which adds a python module not yet in Debian, so that would be an extra amount of work. Hopefully, this won't flood planet - I've gone through the RSS content to update timestamps but the URLs have changed.

6 February 2016

Neil Williams: lava.debian.net

With thanks to Iain Learmonth for the hardware, there is now a Debian instance of LAVA available for use and the Debian wiki page has been updated. LAVA is a continuous integration system for deploying operating systems onto physical and virtual hardware for running tests. Tests can be simple boot testing, bootloader testing and system level testing. Extra hardware may be required for some system tests. Results are tracked over time and data can be exported for further analysis. LAVA has a long history of supporting continuous integration of the Linux kernel on ARM devices (ARMv7 & ARMv8), so if you are testing a Linux kernel image on armhf or arm64 devices, you will find a lot of similar tests already running on the other LAVA instances. The Debian LAVA instance seeks to widen that testing in a couple of ways. This instance relies on the latest changes in lava-server and lava-dispatcher: the 2016.2 release has now deprecated the old, complex dispatcher and a whole new pipeline design is available. The Debian LAVA instance is running 2015.12 at the moment; I'll upgrade to 2016.2 once the packages migrate into testing in a few days and I can do a backport to jessie.
What can LAVA do for Debian?
ARMMP kernel testing: unreleased builds, experimental initramfs testing. This is the core of what LAVA is already doing behind the scenes of sites like http://kernelci.org/.
U-Boot ARM testing: this is what fully automated LAVA labs have not been able to deliver in the past, at least without a usable SD Mux.
What's next?
LOTS. This post actually got published early (distracted by the rugby); I'll update things more in a later post. Contact me if you want to get involved; I'll provide more information on how to use the instance and how to contribute to the testing in due course.

29 December 2015

Neil Williams: Experimenting with LXQt in Debian

LXQt is a lightweight Qt desktop: the Qt port of LXDE. Packages exist in Debian, albeit without a top level metapackage or task package to make installing it easier. So I wrote up a simple-ish vmdebootstrap call:
$ sudo vmdebootstrap --image lxqt.img --size=5G --package=lxqt-panel --package=libqt5xcbqpa5 --package=qterminal --package=openbox --package=xdm --package=lxqt-session --package=lxqt-about --package=lxqt-policykit --package=lxqt-globalkeys --package=lxqt-notificationd --package=lxqt-sudo --package=dbus-x11 --package=lxqt-admin --package=lxqt-runner --package=lxqt-config --package=task-desktop --package=locales --package=xserver-xorg-core --package=oxygen-icon-theme --grub --distribution=unstable --mirror=http://mirror.bytemark.co.uk/debian --configure-apt --enable-dhcp --serial-console --sudo --verbose --owner=neil --user='neil/neil'
(You'll need to adapt the last two options to be a real user.) This uses xdm instead of lxdm as this tests LXQt without having any GTK+ dependencies installed. lxdm does give a nicer experience at the cost of needing GTK+. YMMV. Note the explicit additions: --package=libqt5xcbqpa5 --package=dbus-x11. As debootstrap does not follow Recommends, libqt5xcbqpa5 needs to be specified explicitly or the desktop will fail to start. dbus-x11 is also needed to get things working. task-desktop adds the Debian artwork and needs to be in the list of packages passed to debootstrap so that the Recommends of the task packages are not selected. (Note that I have so far failed to get LXQt to use the Debian artwork as a desktop background.) So, what is it like? Well, alpha is how I might describe it. Not in terms of stability, more in terms of functionality. I do have a second install using lxdm which has been tweaked, but it depends on your objective. If your aim is to not have GTK+ but not have KDE, then LXQt is a beginning only. In particular, if you really are intent on not having GTK+ at all, your choice of web browser is somewhat limited: to lynx. (There's no bare Qt file manager in Debian: pcmanfm-qt depends on libfm-modules which uses GTK+. Nor is there a bare text editor, despite this being one of the simplest examples of a QApplication.) There is a large gap in the software availability which is Qt but not KDE, despite the power and flexibility of Qt itself. (I've written applications using Qt directly before; it is much more flexible and configurable than GTK+.) So there would seem to be a reason why a metapackage and a task package do not yet exist: there is a lot more to do. I'm happy to mix GTK+ applications, so my test environment can use iceweasel, chromium, leafpad and thunar. Overall, this was an interesting diversion prompted by a separate discussion about the merits and controversies of GTK+, GNOME etc.
I failed to work out why the icon theme works if lxdm is installed but not with xdm (so there's a missing package, but I'm not yet sure exactly which), so the screenshot is more bare than I expected.
[screenshot: lxqt-unstable]
With iceweasel installed and various other tweaks:
[screenshot: lxqt-unstable-2]
Finally, note #809339: I have local changes which are being tested to use systemd-networkd, but currently the masking of PredictableInterfaceNames as documented does not work, so some editing of /etc/network/interfaces.d/setup (or enable systemd-networkd yourself and add a suitable file to /etc/systemd/network/) will be needed to get a working network connection in the VM.
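For the systemd-networkd route, the "suitable file" is a .network unit. A minimal sketch for a DHCP VM (the filename and the Name= glob are assumptions on my part; match them to your interface naming):

```
# /etc/systemd/network/dhcp.network (illustrative name)
[Match]
Name=e*

[Network]
DHCP=yes
```

The glob covers both eth0 and the predictable en* names, which is convenient while the interface-name masking is in flux.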

30 November 2015

Neil Williams: bashrc-git snippets

Just in case someone else finds these useful, some bash functions I've got into the habit of having in ~/.bashrc:
mcd() { mkdir "$1"; cd "$1"; }
gum() { git checkout "$1" && git rebase master && git checkout master; }
gsb() { LIST=$(git branch | egrep -v '(release|staging|trusty|playground|stale)' | tr '\n' ' ' | tr -d '*'); git show-branch $LIST; }
gleaf() { git branch --merged master | egrep -v '(release|staging|trusty|playground|pipeline|review|stale)'; }
mcd is the oldest one and the simplest. The others are just useful git management shortcuts. I can use gum to bring a feature branch back to master and gsb to show me which branches need to be rebased on master, typically after a pull. The list of excluded branches includes branches which should not be rebased against master (I could do some processing of git branch -r to not have those hardcoded) but the odd one is stale. Sometimes, I get an idea for a feature which is too intrusive, too messy or just too incomplete to be rebased against master. Rather than losing the idea or wasting time rebasing, I'm getting into the habit of renaming the branch foo as stale-foo and gsb then leaves it alone. Equally, there are frequently times when I need to have a feature branch based on another feature branch, sometimes several feature branches deep. Identifying these branches and avoiding rebasing on the wrong branch is important to not waste time. gsb takes a bit of getting used to, but basically the shorter and cleaner the output, the less work needs to be done. As shown, gsb is git show-branch under the hood. What I'm looking for is multiple commits listed between a branch and master. Then I know which branches to use with gum. Finally, gleaf shows which feature branches can be dropped with git branch -d.

7 November 2015

Neil Williams: Vmdebootstrap sprint

At the miniDebConfUK in Cambridge, November 2015, there was a vmdebootstrap sprint. vmdebootstrap is written in python and the sprint built on the changes I made during DebConf15. The primary aim was to split out the code from a single python script and create modules which would be useful to other tools and make the source code itself easier to follow. Whilst doing this, I worked with Steve McIntyre to implement UEFI support into vmdebootstrap. This version reached experimental shortly after DebConf15. At the sprint, the squashfs support in vmdebootstrap was improved to be more useful in the preparation of live images rather than simply using mksquashfs as a compression algorithm for the entire image. I also improved and extended the documentation for vmdebootstrap. vmdebootstrap is now extensible and modular. Iain Learmonth used this new modular support in the development of live-build-ng which uses vmdebootstrap, live-boot and live-config to replace the role of live-build in the generation of official Debian live images by the debian-cd team. Iain Learmonth demonstrated that the new support in vmdebootstrap can be used to create working live images, including adding support for live images on any architecture supporting the Linux kernel in Debian and adding support for not only Grub but UEFI as well. A UEFI grub live image built by live-build-ng was demonstrated after only two days' work on the vmdebootstrap base, wrapping support from live-boot and live-config with customisation for UEFI and grub config. vmdebootstrap and live-build-ng have been explicitly designed within the debian-cd team to remove the need to run live-build to create official Debian live images, replacing live-build with live-build-ng and the vmdebootstrap support. This brings working support for multiple architectures and UEFI to live images.
To support the new functionality, the vmdebootstrap and debian-cd team have long sought a test suite for vmdebootstrap and Lars Wirzenius implemented one during the vmdebootstrap sprint. We now have fast tests with a pre-commit hook and build tests which are best with a local mirror. The test suite uses cmdtest & yarn. This test suite will be used for all future changes: patches will not be accepted if the test suite fails and any substantially new code must provide working test cases. The fast tests can run without sudo; I expect to be able to sort out ci.debian.net testing for the fast tests too. There is also an outline for testing vmdebootstrap builds in (localhost) LAVA using a lava-submit.py example. All this work will arrive in unstable as vmdebootstrap 1.2 soon and is now in the master branch. The old codehelp/modules branch has been merged and removed.

Andrew Cater: Mini Debconf ARM Cambridge 7 November 2015

Now watching Neil Williams (codehelp) on testing, continuous integration, ARM diversity and the problems of a huge spectrum of small ARM devices.

Mass testing == better for everyone using ARM but it's hard.

Betty Dall presented on HP Enterprise's The Machine as the first session: very interesting and dependent on ARM SoCs and Debian Linux. Linux running everywhere, even on huge hardware still being built.

14 August 2015

Neil Williams: The problem of SD mux

I keep being asked about automated bootloader testing and the phrase which crops up is SD mux: hardware to multiplex SD card access (typically microSD). Each time, the questioner comes up with a simple solution which can be built over a weekend, so I've decided to write out the actual objective, requirements and constraints to hopefully illustrate that this is not a simple problem and the solution needs to be designed to a fully scalable, reliable and maintainable standard.
The objective
Support bootloader testing by allowing fully automated tests to write a custom, patched bootloader to the principal boot media of a test device, hard reset the board and automatically recover if the bootloader fails to boot the device, by switching the media from the test device to a known working support device with full write access to overwrite everything on the card and write a known working bootloader.
The environment
100 test devices, one SD mux each (potentially), in a single lab with support for all or any to be switched simultaneously and repeatedly (maybe a hundred times a day to and fro) with 99.99% reliability.
The history
The first attempt was a simplistic solution which failed to operate reliably. The next attempt was a complex solution (LMP) which failed to operate as designed in a production environment (partially due to a reliance on USB) and has since suffered from a lack of maintenance. The most recent attempt was another simplistic solution which delivered three devices for test with only one usable, and even that became unreliable in testing.
The requirements
(None of these are negotiable and all are born from real bugs or real failures of previous solutions in the above environment.)
  1. Ethernet. Yes, really: a physical cat5/6 RJ45, big, bulky, ugly gigabit ethernet port. No wifi. This is not about design elegance; this is about scalability, maintenance and reliability. Must have a fully working TCP/IP stack with a stable and reliable DHCP client. Stable, predictable, unique MAC addresses for every single board, guaranteed. No dynamic MAC addresses, no hard-coded MAC addresses which cannot be modified. Once modified, retain permanence of the required MAC address across reboots.
  2. No USB involvement. Yes, really. The server writing to the media to recover a bricked device usually has only 2 USB ports but supports 20 devices. Powered hubs are not sufficiently reliable.
  3. Removable media only. eMMC sounds useful, but these are prototype development boards and some are already known to intermittently fry SD card controller chips, causing permanent and irreversible damage to the SD card. If that happened to eMMC, the entire device would have to be discarded.
  4. Cable connections to the test device. This is a solved problem, the cables already exist due to the second attempt at a fix for this problem which resulted in a serviceable design for just the required cables. Do not consider any fixed connection, the height of the connector will never match all test device requirements and will be a constant source of errors when devices are moved around within the rack.
  5. Guaranteed unique, permanent and stable serial numbers for every device. With 100 devices in a lab, it is absolutely necessary that every single one is uniquely addressable.
  6. Interrogation. There must be an interface for the control device to query the status of the SD mux and be assured that the results reflect reality at all times. The device must allow the control device to read and write to the media without requiring the test device to acknowledge the switch or even be powered on.
  7. No feature creep. There is no need to make this be able to switch ethernet or HDMI or GPIO as well as SD. Follow the software principle of pick one job and do it properly.
  8. Design for scalability. This is not a hobbyist project; this is a serious task requiring genuine design. The problem is not simple, and it is not acceptable to make a simple solution.
  9. Power. The device must boot directly from power-on without requiring manual intervention of any kind, and boot into a default safe mode where the media is only accessible to the control device. 5V power with a barrel connector is preferable; definitely not power over USB. The device must raise the TCP/IP control interface automatically and be prepared to react to commands as soon as the interface is available.
  10. Software: some logic to prevent queued requests from causing repeated switching without any interval in between, e.g. if the device had to be power cycled.
  11. Ongoing support and maintenance of hardware, firmware and software. Test devices continue to develop and will require further changes or fixes as time goes on.
  12. Mounting holes. Sounds obvious, but the board needs to be mounted in a sensible manner. Dangling off the end of a cat5 cable is not acceptable.
If any of those seem insurmountable or awkward or unappealing, please go back to the drawing board or leave well alone. Beyond the absolutes, there are other elements. The device is likely to need some kind of CPU, and something ARM would be preferable (Cortex-M or Cortex-A if relevant), but creating a cape for a beaglebone-black is likely to be overkill. The available cables are short and the device will need to sit quite close to the test device. Test devices never put the SD card slot in the same place twice, or in any location which is particularly accessible. Wherever possible, the components on the device should be commodity parts, replaceable and serviceable. The device would best not be densely populated; there is no need for the device to be any particular size, and overly small boards tend to be awkward to position correctly once cables are connected. There are limits, of course, so boards which end up bigger than typical test devices would seem excessive. So these are the reasons why we don't have automated bootloader testing and won't have it any time soon. If you've got this far, maybe there is a design which meets all the criteria, so contact me and let's see if this is a fixable problem after all.
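Requirement 10 is the only purely software item in the list; the switch-interval guard it asks for can be sketched in a few lines. A minimal illustration in Python (all names here are my own invention, not from any real SD mux firmware):

```python
import time


class SwitchGate:
    """Guard an SD mux against rapid queued switch requests.

    Enforces a minimum interval between physical switches and
    coalesces requests for the side the media is already on.
    """

    def __init__(self, min_interval=2.0, clock=time.monotonic):
        self.min_interval = min_interval
        self.clock = clock
        self.last_switch = None
        # requirement 9: boot into a safe mode where only the
        # control device can access the media
        self.current = "control"

    def request(self, target):
        """Attempt a switch; return True only if it was performed."""
        if target == self.current:
            return False  # coalesce: media is already routed there
        now = self.clock()
        if self.last_switch is not None and now - self.last_switch < self.min_interval:
            return False  # too soon after the previous switch
        self.current = target
        self.last_switch = now
        return True
```

A caller that power cycles the board would simply retry a refused request after the interval, rather than letting queued requests bounce the media back and forth.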

2 May 2015

Neil Williams: vmextract userspace VM helper

In my previous post, I covered how to extend an initramfs to serve as a basis for tests and other purposes. In that example, the initramfs was simply copied off a device after installation. It occurred to me that python-guestfs would be a useful tool to make this step easier. So I've written the vmextract helper, which is currently in the vmdebootstrap upstream git and will make it into the next release of vmdebootstrap for Debian (0.7). Once vmdebootstrap has built an image, various files can be generated or modified during the install operations and some of these files can be useful when testing the image or packages which would be included in an updated image, before release. One example is the initrd built by the process of installing a Debian kernel. Rather than having to mount the image and copy the files manually, the vmextract helper can do it for you, without needing root privileges or any special setup.
$ /usr/share/vmdebootstrap/vmextract.py --verbose --boot \
  --image bbb/bbb-debian-armmp.img \
  --path /boot/initrd.img-3.14-2-armmp \
  --path /lib/arm-linux-gnueabihf/libresolv.so.2
This uses python-guestfs (which will become a Recommended package for vmdebootstrap) to prepare a read-only version of the image in this case with the /boot partition also mounted and copies files out into the current working directory. Note the repeating use of --path to build a list of files to be copied out. To copy out an entire directory (and all sub-directories) as a single tarball, use:
$ /usr/share/vmdebootstrap/vmextract.py --verbose --boot \
  --image bbb/bbb-debian-armmp.img \
  --directory /boot/ \
  --filename bbb-armmp-boot.tgz
If --filename is not specified, the default filename is vmextract.tgz. (vmextract uses gzip compression, just because.) The other point to note is the use of the --boot option to mount the /boot partition as well as the root partition of the image, as this example uses the beaglebone-black support which has a boot partition. It's just a little helper which I hope will prove useful, if only to avoid both the need for sudo and the need for loopback mount operations with the inherent confusion of calculating offsets. Thanks to the developers of python-guestfs for making this workable in barely 100 lines of python. It uses the same cliapp support as vmdebootstrap, so can be used silently in scripts by omitting the --verbose option. I haven't taken this on to the next step of unpacking the initrd and extending it, but that would just be a bit of shell scripting using the files extracted by vmextract.py.
Next!
Next on the list will be extensions to vmdebootstrap to build live images of Debian, essentially adding Debian Installer to a vmdebootstrap image, which could actually be another python-guestfs helper (mounting read-write this time) to avoid adding lots more code to vmdebootstrap (which has grown to nearly 600 lines of python). That way, we can publish the bare VM images as well as a live conversion and reduce the number of times debootstrap needs to be called.

25 April 2015

Neil Williams: ARMMP Jessie

Whilst the rest of Debian was very busy actually making the Jessie release, I was trying to prepare the ground to test the armhf installer images. Cubietruck is fine: it's a nice install and the documentation on the wiki works nicely. I tried to automate it with preseeding (as it would be something I'd like to test in LAVA, which is new in Jessie), but although I have a jessie-x86 preseed file which works in libvirt, the cubietruck installer doesn't seem to want to listen to all of the preseeding. Some of the options (notably the wifi firmware screen, hostname and user passwords) don't go away. x86 wasn't completely as expected either: though it works nicely in libvirt and Virtual Machine Manager, trying to do it manually on the command line with qemu (to automate it, again) raised an awkward grub2 bug where it couldn't work out where to install, and it failed to then reboot if manually installed. iMX.53 was the next target. I got a few steps along the road with that one, except that the kernel in the installer isn't able to see the SATA even though the bootloader can, nor can it see the USB stick. This means that the only media the installer can see is the SD card from which the installer was booted; trying to use that ends badly. Beaglebone-black required a little bit of tweaking. Don't forget that the firmware & partition.img together don't give a lot of space, so creating a second partition on the SD card upon which to put the CD image itself is useful. The Debian ARMMP kernel in Jessie doesn't see USB, so the option here is actually what I wanted: install from SD card onto eMMC. It's a 2GB space, but it's ideal for a default location, leaving the SD card for other testing. I ran out of time to see about preseeding BBB installs to eMMC, but one word of warning: installing to the eMMC doesn't mean that you have a bootloader on the eMMC. The SD card is still needed to reboot, unless that is fixed manually post-install. (Haven't got that working yet either.)
Speaking of time, this post is briefer on the detail than I would like, for a couple of reasons:
  1. I haven't actually got things working well enough (clearly, from some of the end states above)
  2. the new code to test these in LAVA isn't ready yet either
I did mean to test on an iMX6 quad Wandboard; I will see about that, hopefully, later. In general, the armhf SD card support is the best start for installing Jessie on a range of ARMv7 hardware. (I haven't got as far as ARMv8 yet either.) However, when I describe that as "the best start", it is lacking in many areas, typically due to restrictions in the u-boot support, mainline kernel support and general board variability. It's a lot easier than it used to be, but ARMv7 boards are some distance from a single installer setup, or even a single set of installer documentation, for any distribution.
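For anyone wanting to chase the same preseeding failures, these are the stock d-i keys behind the prompts which refused to go away. The values here are only examples, taken from the shape of the standard example-preseed file, not from my actual preseed:

```
# hostname and domain
d-i netcfg/get_hostname string cubietruck
d-i netcfg/get_domain string local
# the non-free wifi firmware prompt
d-i hw-detect/load_firmware boolean false
# user account and passwords (example values only)
d-i passwd/username string debian
d-i passwd/user-fullname string Debian User
d-i passwd/user-password password insecure
d-i passwd/user-password-again password insecure
```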

14 April 2015

Neil Williams: Extending an existing ARMMP initramfs

The actual use of this extension is still in development and the log files are not currently publicly visible, but it may still be useful for people to know the what and why. The Debian ARMMP kernel can be used for multiple devices, just changing the DTB. I've already done tests with this for Cubietruck and Beaglebone-Black; iMX.53 was one of the original test devices too. Whilst these tests can deploy a full image (there are examples of building such images in the vmdebootstrap package), it is a lot quicker to do simple tests of a kernel using a ramdisk. The default Debian initramfs has a focused objective but still forms a useful base for extension. In particular, I want to be able to test one initramfs on multiple boards (so multiple dtbs) with the same kernel image. I then want to be able, on selected boards, to mount a SATA drive or write an image to eMMC or a USB stick or whatever. LAVA (via the ongoing refactoring, not necessarily in the current dispatcher code) can automate such tests, e.g. to allow me to boot a Cubietruck into a standard Debian ARMMP armhf install on the SATA drive but using a modified (or updated) ARMMP kernel over TFTP, without needing to install it on the device itself. That same kernel image can then be tested on multiple boards to see if the changes have benefitted one board at the expense of another. Automating all of that could be of a lot of benefit to the ARM kernel developers in Debian and outside Debian. So, the start point: install Debian onto a Cubietruck, in my case with a SATA drive attached. All well and good so far, standard Debian Jessie ARMMP. (Cubietruck uses the LPAE kernel flavour, but that won't matter for the initramfs.) Rather than building the initramfs manually, this provides a shortcut; at some point I may investigate how to do this in QEMU, but for now it's just as quick to SSH onto the Cubietruck and update.
I've already written a little script to download the relevant linux-image package for ARMMP, unpack it and pull out the vmlinuz, the dtbs and a selected list of modules. The list is selective because TFTP has a 32Mb download limit and there are more modules than that. So I borrowed a snippet from the Xen folks (already shown previously here). The script is in a support repository for LAVA but can be used anywhere. (You'll need to edit the package name in the script to choose between ARMMP and ARMMP LPAE.) Steps:
  1. Get a working initramfs from an installed device running Debian ARMMP and copy some files for use later. Note: use the name of the symlink in the copy so that the file in /tmp/ is the actual file, using the name of the symlink as the filename. This is important later, as it saves a step of having to make the (unnecessary) symlink inside the initramfs. Also, mkinitramfs, which built this initrd.img file in the first place, uses the same shared libraries as the main system, so copying these into the initramfs still works. (This is really useful when you get your ramdisk to support the attached secondary storage, allowing you to simply mount the original Debian install and fix up the initramfs by copying files off the main Debian install.) The relevant files are to support DNS lookup inside the initramfs, which then allows a test to download a new image to put onto the attached media before rebooting into it.
    cp /boot/initrd.img-3.16.0-4-armmp-lpae /tmp/
    cp /lib/arm-linux-gnueabihf/libresolv.so.2 /tmp/
    cp /lib/arm-linux-gnueabihf/libnss_dns.so.2 /tmp/
    
    Copy these off the device for local adjustment:
    scp IP_ADDR:/tmp/FILE .
    
  2. Decompress the initrd.img (gunzip only recognises suffixes such as .gz, and -f overwrites the existing file of the same name):
    cp initrd.img-3.16.0-4-armmp-lpae initrd.img-3.16.0-4-armmp-lpae.gz
    gunzip -f initrd.img-3.16.0-4-armmp-lpae.gz
    
  3. Make a new empty directory
    mkdir initramfs
    cd initramfs
    
  4. Unpack (the working directory is now initramfs, so the decompressed image is one level up):
    sudo cpio -id < ../initrd.img-3.16.0-4-armmp-lpae
    
  5. Remove the old modules (LAVA can add these later, allowing tests to use an updated build with updated modules):
    sudo rm -rf ./lib/modules/*
    
  6. Start to customise: we need a script for udhcpc and two of the libraries from the installed system to allow the initramfs to do DNS lookups successfully.
    cp ../libresolv.so.2 ./lib/arm-linux-gnueabihf/
    cp ../libnss_dns.so.2 ./lib/arm-linux-gnueabihf/
    
  7. Copy the udhcpc default script into place:
    mkdir ./etc/udhcpc/
    sudo cp ../udhcpc.d ./etc/udhcpc/default.script
    sudo chmod 0755 ./etc/udhcpc/default.script
    
  8. Rebuild the cpio archive:
    find . | cpio -H newc -o > ../initrd.img-armmp.cpio
    
  9. Recompress:
    cd ..
    gzip initrd.img-armmp.cpio
    
  10. If using u-boot, add the U-Boot header:
    mkimage -A arm -T ramdisk -C none -d initrd.img-armmp.cpio.gz initrd.img-armmp.cpio.gz.u-boot
    
  11. Checksum the final file so that you can check that against the LAVA logs.
    md5sum initrd.img-armmp.cpio.gz.u-boot
    
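Steps 2 to 10 can be wrapped up into one small function. This is only a sketch under the same assumptions as the steps above: the files from step 1 (the initrd, the two libraries and the udhcpc script) are in the current directory, and mkimage comes from the u-boot-tools package. The function name and output names are mine:

```shell
# Repack a Debian ARMMP initrd with the customisations from steps 2-10.
# On a real initrd, run as root so that device nodes unpack correctly.
rebuild_initrd() {
    initrd="$1"    # e.g. initrd.img-3.16.0-4-armmp-lpae
    work=initramfs
    # steps 2-4: decompress and unpack into an empty directory
    rm -rf "$work" && mkdir "$work"
    zcat "$initrd" | (cd "$work" && cpio -id)
    # step 5: remove the old modules (LAVA can add current ones later)
    rm -rf "$work"/lib/modules/*
    # step 6: libraries needed for DNS lookups inside the initramfs
    mkdir -p "$work/lib/arm-linux-gnueabihf"
    cp libresolv.so.2 libnss_dns.so.2 "$work/lib/arm-linux-gnueabihf/"
    # step 7: udhcpc default script
    mkdir -p "$work/etc/udhcpc"
    cp udhcpc.d "$work/etc/udhcpc/default.script"
    chmod 0755 "$work/etc/udhcpc/default.script"
    # steps 8-9: rebuild the cpio archive and recompress
    (cd "$work" && find . | cpio -H newc -o) | gzip > initrd.img-armmp.cpio.gz
    # step 10: add the U-Boot header, if mkimage is installed
    if command -v mkimage >/dev/null 2>&1; then
        mkimage -A arm -T ramdisk -C none \
            -d initrd.img-armmp.cpio.gz initrd.img-armmp.cpio.gz.u-boot
        md5sum initrd.img-armmp.cpio.gz.u-boot
    fi
}
```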
Each type of device will need a set of modules modprobed before tests can start. With the refactoring code, I can use inline YAML and use dmesg -n 5 to reduce the kernel message noise. The actual module names here are just those for the Cubietruck, but by having these only in the job submission, it makes it easier to test particular combinations and requirements.
- dmesg -n 5
- lava-test-case udevadm --shell udevadm hwdb --update
- lava-test-case depmod --shell depmod -a
- lava-test-case sata-mod --shell modprobe -a stmmac ahci_sunxi sd_mod sg ext4
- lava-test-case ifconfig --shell ifconfig eth0 up
- lava-test-case udhcpc --shell udhcpc
- dmesg -n 7
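For context, in the refactored job format those lines sit inside an inline test definition in the job submission; roughly this shape (the names, path and layout here are illustrative only, check the LAVA documentation for the current schema):

```yaml
- test:
    definitions:
    - from: inline
      name: cubietruck-setup
      path: inline/cubietruck-setup.yaml
      repository:
        metadata:
          format: Lava-Test Test Definition 1.0
          name: cubietruck-setup
          description: "modprobe support for Cubietruck tests"
        run:
          steps:
          - dmesg -n 5
          - lava-test-case udevadm --shell udevadm hwdb --update
          - lava-test-case depmod --shell depmod -a
          - lava-test-case sata-mod --shell modprobe -a stmmac ahci_sunxi sd_mod sg ext4
          - lava-test-case ifconfig --shell ifconfig eth0 up
          - lava-test-case udhcpc --shell udhcpc
          - dmesg -n 7
```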
In due course, this will be added to the main LAVA documentation to allow others to keep the initramfs up to date and to support further test development.

13 February 2015

Neil Williams: OpenTAC mailing list

After the OpenTAC session at Linaro Connect, we do now have a mailing list to support any and all discussions related to OpenTAC. Thanks to Daniel Silverstone for the list. List archive: http://listmaster.pepperfish.net/pipermail/opentac-vero-apparatus.com More information on OpenTAC: http://wiki.vero-apparatus.com/OpenTAC

29 December 2014

Neil Williams: OpenTAC hardware in manufacture

A bit of news on the development of OpenTAC, the Open Hardware Test Automation Controller. I've talked about this at the MiniDebConf 2014 in Cambridge (video available). The development is being tracked on the Vero-Apparatus wiki and, as this is Open Hardware, the files are attached to the wiki (all files are CC BY-SA 4.0). Andy completed the schematics at the start of November, allowing work to start on the routing. With routing completed, orders were placed for manufacture of the first PCBs on the 19th December 2014. Whilst waiting for the PCBs to arrive, we're working on the device tree database (my first real experience with creating a device tree), which will underpin the PDU and serial console services available to the user, as well as the admin interface for control of the USB subsystems, fan control, power control and thermal monitoring. The first thing we need to do is create a data dictionary: a table to correlate software identifiers with the real hardware pins. We'll then follow that through with a default device tree overlay that will leave all the associated I/O lines in a safe initial state. Once we have some code, I'll be pushing to a branch on GitHub. We've also got an internal git repo on vero-apparatus for OpenTAC files which we will add to in due course. Image of the board as rendered prior to prototype production: OpenTAC_2V00_Model More to come once we have the hardware on the desk.
