Search Results: "bates"

20 December 2023

Ulrike Uhlig: How volunteer work in F/LOSS exacerbates pre-existing lines of oppression, and what that has to do with low diversity

This is a post I wrote in June 2022 but did not publish back then. After first publishing it in December 2023, a perfectionist, insecure part of me unpublished it again. After receiving positive feedback, I slightly amended it and am now republishing it. In this post, I talk about unpaid work in F/LOSS, using hackathons as an example, and explain why, in my opinion, the expectation of volunteer work is hurting diversity. Disclaimer: I don't have all the answers, only some ideas and questions.

Previous findings In 2006, the Flosspols survey sought to explain the role of gender in free/libre/open source software (F/LOSS) communities, because an earlier [study] had revealed a significant discrepancy in the proportion of men to women. It showed that only about 1.5% of F/LOSS community members were female at that time, compared with 28% in proprietary software (which is also a low number). Their key findings were, to name just a few:
  • that F/LOSS rewards the production of code rather than the production of software. It thereby puts most emphasis on a particular skill set; other activities, such as interface design or documentation, are understood as less technical and therefore less prestigious.
  • that the reliance on long hours of intensive computing in writing successful code means that men, who in general assume that time outside of waged labour is "theirs", are freer to participate than women, who normally still assume a disproportionate amount of domestic responsibilities. Female F/LOSS participants, however, seem to be able to allocate a disproportionately larger share of their leisure time to their F/LOSS activities. This is an indication that women who are not able to spend as much time on voluntary activities have difficulty integrating into the community.
We also know from the 2016 Debian survey, published in 2021, that a majority of Debian contributors are employed, rather than being contractors or students. Also, 95.5% of respondents to that study were men between the ages of 30 and 49, highly educated, with the largest groups coming from Germany, France, the USA, and the UK. The study found that only 20% of the respondents were being paid to work on Debian. Half of these 20% estimate that the amount of Debian work they are paid for corresponds to less than 20% of the work they do there. On the other side, 14% of those who are being paid for Debian work declared that 80-100% of the work they do in Debian is remunerated.

So, if a majority of people are not paid, why do they work on F/LOSS? Or: what are the incentives of free software? In 2021, Louis-Philippe Véronneau aka Pollo, who is not only a Debian Developer but also an economist, published his thesis What are the incentive structures of free software (the actual thesis was written in French). One very interesting finding Pollo pointed out is this one:
Indeed, while we have proven that there is a strong and significant correlation between income and participation in a free/libre software project, it is not possible for us to pronounce on the causality of this link.
In the French original text:
En effet, si nous avons prouvé qu'il existe une corrélation forte et significative entre le salaire et la participation à un projet libre, il ne nous est pas possible de nous prononcer sur la causalité de ce lien.
Said differently, it is certain that there is a relationship between income and F/LOSS contribution, but it's unclear whether working on free/libre software ultimately helps in finding a well-paid job, or whether having a well-paid job is what enables work on free/libre software. I would like to scratch at this question a bit further, relying mostly on my own observations, experiences, and discussions with F/LOSS contributors.

Volunteer work is unpaid work We often hear of hackathons, hack weeks, or hackfests. I've been at some such events myself: Tails organized one, the IETF regularly organizes hackathons, and last week (June 2022!) I saw an invitation for a hack week with the Tor Project. This type of event generally lasts several days. While the people who organize these events are being paid by the organizations they work for, participants, on the other hand, generally join on a volunteer basis. Under these circumstances, who can we expect to show up as participants at this type of event? To answer this question, I collected some ideas:
  • people who have an employer sponsoring their work
  • people who have a funder/grant sponsoring their work
  • people who have a high income and can take time off easily (in that regard, remember the gender pay gap: women often earn less than men for the same work)
  • people who rely on family wealth (living off an inheritance, living on royalties from a famous grandparent - I'm not making these situations up; there are actual people in such financially favorable situations)
  • people who don't need much money because they don't have to pay rent, or pay low rent (besides homeowners, that category includes people who live in squats or have social welfare paying their rent, and people who live with parents or caretakers)
  • people who don't need to do care work (for children, elderly family members, or pets - remember that most care work is still done by women)
  • students who have financial support or are in a situation in which they do not yet need to generate a lot of income
  • people who otherwise have free time at their disposal
So, who, in your opinion, fits these unwritten requirements? Looking at this list, it's pretty clear to me why at hackathons and in F/LOSS development we'd mostly find white men from the Global North, generally with higher education. ("Great, they're a culture fit!") Yes, there will also always be some people from marginalized groups who attend such events because they expect to network, to find an internship, to find a better job in the future, or to add their participation to their curriculum. To me, this rings a bunch of alarm bells.

Low diversity in F/LOSS projects - a mirror of the distribution of wealth I believe that the lack of diversity in F/LOSS is first of all a mirror of the distribution of wealth at a larger level. And by wealth I'm referring to financial wealth as much as to social wealth in the sense of Bourdieu: families of highly educated parents socially reproduce privilege by allowing their kids to attend better schools, supporting and guiding them in their choices of study and work, providing them with connections to internships that act as springboards into well-paid jobs, and so on. That said, we should also ask ourselves:

Do F/LOSS projects exacerbate existing lines of oppression by relying on unpaid work? Let's look again at the causality question of Pollo's research (in my words):
It is unclear whether working on free/libre software ultimately helps in finding a well-paid job, or whether having a well-paid job is what enables work on free/libre software.
Maybe we need to imagine this cause-effect relationship over time: as a student, without children and with lots of free time, and hopefully some money from the state or the family, people can spend time on F/LOSS, collect experience, and earn recognition - and later find a well-paid job, turn unpaid F/LOSS contributions into a hobby, and cement their status in the community, all while generating a sense of well-being from working on the common good. This is a quite common scenario. As the Flosspols study revealed, however, boys often get their own computer at the age of 14, while girls get one only at the age of 20. (These numbers might be slightly different now, and possibly many people don't own an actual laptop or desktop computer anymore; instead they own mobile devices, which are not exactly inviting them to look behind the surface, take things apart, learn, and appropriate technology.) In any case, the above scenario does not allow people who join F/LOSS later in life, e.g. when changing careers, to find their place. I believe that F/LOSS projects cannot expect to have more women, people of color, people from working-class backgrounds, or people from outside of Germany, France, the USA, the UK, Australia, and Canada on board as long as volunteer work is the status quo and waged labour an earned privilege.

Wait, are you criticizing all these wonderful people who sacrifice their free time to work towards the common good? No, that's definitely not my intention. I'm glad that F/LOSS exists, and the F/LOSS ecosystem has always represented a small utopia to me that is worth cherishing and nurturing. However, I think we still need to talk more about the lack of diversity, and investigate it further.

Some types of work are never paid Besides free work at hacking events, let me also underline that a lot of work in F/LOSS is not considered payable work (yes, that's an oxymoron!). Which F/LOSS project, for example, has ever paid translators a decent fee? Which project has ever considered that doing the social glue work, often done by women in the projects, is work that should be paid for? Which F/LOSS projects pay the people who do their Debian packaging, rather than relying on yet another already well-paid white man who can afford to do this work for free, all the while holding up how great the F/LOSS ecosystem is? And how many people on opensourcedesign jobs are looking to get their logo or website done for free? (Isn't that heart icon appealing to your altruistic empathy?) In my experience, even F/LOSS projects that try to do the right thing by paying everyone the same amount of money per hour run into issues when it turns out that not all hours are equal, that some types of work do not qualify for remuneration at all, or that the rules for the clocking of work are not applied in the same way by everyone.

Not every interaction should have a monetary value, but... Some of you want to keep working without being paid, because that feels a bit like communism within capitalism: it makes you feel good to contribute to the greater good while not letting the system determine your value through money. I hear you. I've been there (and sometimes still am). But as long as we live in this system, even though we didn't choose to and maybe even despise it - communism is not about working for free, it's about getting paid equally and adequately. We may not think about it while under the age of 40 or 45, but working without adequate financial compensation, even half of the time, will ultimately result in not being able to care for oneself when sick or old. And while this may not be an issue for people who inherit wealth or have an otherwise safe economic background, e.g. an academic salary, it is a huge problem and barrier for many people coming out of the working or service classes. (Oh, and please don't repeat the neoliberal lie that everyone can achieve whatever they aim for if they just try hard enough. French research shows that (in France) one has only a 30% chance of becoming a "class defector" and moving up a social class. "But I managed to get out and move up, so everyone can!" - well, if you believe that, I'm afraid you might be experiencing survivorship bias.)

Not all bodies are equally able We should also be aware that not all of us can work with the same amount of energy. There is yet another category of people who are excluded by the expectation of volunteer work, either because the waged labour they do already eats all of their energy, or because their bodies are not disposed to do that much work, for example because of mental health issues - such as depression - or because of physical disabilities.

When organizing events relying on volunteer work, please think about these things. Yes, you can tell people that they should ask their employer to pay them for attending a hackathon - but, as I've hopefully shown, that would not work for many people, especially newcomers. Instead, you could set up a fund that makes it possible for people who would not normally attend to do so. DebConf is a good example of having done this for many years.

To conclude, I would like to urge free software projects that have a budget and directly pay some people from it to map out where they rely on volunteer work and how this hurts diversity in their project. How do you or your project exacerbate pre-existing lines of oppression by granting or not granting monetary value to certain types of work? What is it that you take for granted? As always, I'm curious about your feedback!

Worth a read These ideas are far from being new. Ashe Dryden's well-researched post The ethics of unpaid labor and the OSS community dates back to 2013 and is as important as it was ten years ago.

1 July 2023

Debian Brasil: MiniDebConf Brasília 2023 - a brief report

[Photo: MiniDebConf 2023 stage] From May 25th to 27th, Brasília hosted MiniDebConf 2023. The gathering, made up of several activities such as talks, workshops, sprints, a BSP (Bug Squashing Party), key signing, social events, and hacking, had as its main goal to bring the community together and celebrate the largest Free Software project in the world: Debian. MiniDebConf Brasília 2023 was a success thanks to the participation of everyone, regardless of their level of knowledge about Debian. We valued the presence both of beginner users who are getting familiar with the system and of the project's official developers. The spirit of welcome and collaboration was present at every moment.

MiniDebConfs are local meetings organized by members of the Debian Project, pursuing goals similar to those of DebConf, but at a regional scope. Throughout the year, events like this one take place in different parts of the world, strengthening the Debian community.

[Photo: MiniDebConf 2023 sign]

Activities The MiniDebConf schedule was intense and diverse. On the 25th and 26th (Thursday and Friday) we had talks, debates, workshops, and many practical activities. On the 27th (Saturday) the Hacking Day took place, a special moment in which Debian contributors gathered to work together on several aspects of the project. It was the Brazilian version of DebCamp, the tradition that precedes DebConf. On that day we prioritized hands-on contributions to the project, such as software packaging, translations, key signing, an install fest, and the Bug Squashing Party.

[Photo: MiniDebConf 2023 auditorium]

[Photo: MiniDebConf 2023 workshop] Numbers of this edition The numbers of the event are impressive and demonstrate the community's engagement with Debian. We had 236 registrations, 20 talks submitted, 14 volunteers, and 125 check-ins. In the practical activities we also had significant results, such as 7 new installations of Debian GNU/Linux, 18 packages in the official Debian repository updated by participants, and 7 new contributors joining the translation team. We also highlight the community's remote participation through live streams. The analytics data show that our website had 7,058 views in total, with 2,079 views of the main page (which featured our sponsors), 3,042 views of the schedule page, and 104 views of the sponsors page. We registered 922 unique users during the event. On YouTube, the live stream reached 311 views, with 56 likes and a peak of 20 simultaneous viewers. There were an impressive 85.1 hours of watch time, and our channel gained 30 new subscribers. All this engagement and interest from the community makes MiniDebConf even stronger.

[Photo: MiniDebConf 2023 speakers]

Photos and videos To relive the best moments of the event, photos and videos are available. The photos can be accessed at: https://deb.li/pbsb2023. The recordings of the talks are available at the following link: https://deb.li/vbsb2023. To stay up to date and connect with the Debian Brasília community, follow us on our social networks.

Acknowledgments We would like to deeply thank all the participants, organizers, sponsors, and supporters who contributed to the success of MiniDebConf Brasília 2023. In particular, we express our gratitude to our Gold sponsors - Pencillabs, Globo, Policorp, and Toradex Brasil - and to our Silver sponsor, 4-Linux. We also thank Finatec and the Instituto para Conservação de Tecnologias Livres (ICTL) for their support.

[Photo: coffee break] MiniDebConf Brasília 2023 was a milestone for the Debian community, demonstrating the power of collaboration and Free Software. We hope everyone enjoyed this enriching gathering and will keep actively participating in the Debian Project's upcoming initiatives. Together we can make a difference! [Photo: official group photo]

10 June 2023

Marco d'Itri: On having a track record in operating systems development

Now that Debian 12 has been released with proprietary firmware on the official media, non-optional merged-/usr, and systemd adopted by everybody, I want to take a moment to list, not without some pride, a few things that I was right about over the last 20 years: accepting the obvious solution for firmware took 18 years; my work on the merged-/usr transition started in 2014; and the first discussions about replacing sysvinit date from 2011. The general adoption of udev (and dynamic device names, and persistent network interface names...) took less time in comparison, and caused no large-scale flame wars, since people could enable it at their own pace. But it required countless little debates in the Debian Bug Tracking System: I still remember the people insisting that they would never use this newfangled dynamic /dev/, or complaining about their beloved /dev/cdrom symbolic link and persistent network interface names. So follow me for more rants about inevitable technologies.

31 May 2023

Arturo Borrero González: Wikimedia Hackathon 2023 Athens summary

[Post logo] During the weekend of 19-23 May 2023 I attended the Wikimedia Hackathon 2023 in Athens, Greece. The event physically reunited folks interested in the more technological aspects of the Wikimedia movement for the first time since 2019. The scope of the hacking projects included (but was not limited to) tools, Wikipedia bots, gadgets, server and network infrastructure, data, and other technical systems. My role in the event was two-fold: on one hand, I was at the event because of my role as SRE in the Wikimedia Cloud Services team, where we provide very valuable services to the community, and I was expected to support the technical contributors of the movement who were around. Additionally, and because of that same role, I did some hacking myself, which was especially amplified given that I generally collaborate on a daily basis with some community members who were present in the hacking room. The hackathon had a conference-style track, and I ran a session with my coworker Bryan called Past, Present and Future of Wikimedia Cloud Services (Toolforge and friends) (slides), which was very satisfying to deliver given the friendly space that it was. I attended a bunch of other sessions, and all of them were interesting and well presented. The number of ML themes present in the program schedule was exciting. I definitely learned a lot from attending those sessions: from how LLMs work, to some fascinating applications for them in the Wikimedia space, to some industry trends for training and hosting ML models. [Photo: session] Sessions aside, the main purpose of the hackathon was, well, hacking. While I was in the hacking space for more than 12 hours each day, my ability to get things done was greatly reduced by the constant conversations, help requests, and other social interactions with the folks. Don't get me wrong, I embraced that reality with joy, because the social bonding aspect of it is perhaps the main reason why we gathered in person instead of virtually. That being said, this is a rough list of what I did: The hackathon also marked the final days of Technical Engagement as an umbrella group for the WMCS and Developer Advocacy teams within the Technology department of the Wikimedia Foundation, because of an internal reorg. We used the chance to reflect on the pleasant time we have had together since 2019 and take a final picture of the few of us who were in person at the event. [Photo: Technical Engagement] It wasn't the first Wikimedia Hackathon for me, and I felt the same as in previous iterations: it was a welcoming space, and I was surrounded by friends and nice human beings. I ended the event with a profound feeling of being privileged, because I am part of the Wikimedia movement, and because I was invited to participate in it.

18 December 2022

Russell Coker: Wall Facers

I'm currently reading the second book of the TriSolar sci-fi series by Cixin Liu. I've only just started it, so this post can't have spoilers for it, and I will also have only minimal spoilers for the first book (nothing more than you would get from pop culture references to it). In the second book there are people called Wall Facers who have broad powers to shape the course of the human response to an alien invasion 400+ years in the future. The idea is that as the aliens have the ability to see everything that can be seen on Earth, any ideas that leave the brain of one person can be snooped on; so if some people act independently, without communicating their plans, they can take the aliens by surprise. While that will probably work out well in the books, history in general seems to show that people who act independently without any useful feedback from others tend to perform poorly; every king and dictator seems to demonstrate this.

Efficient Work I've been thinking about what I would do if I had significant powers to guide the response to an alien threat in some hundreds of years. The first thing to do would be to get all people working as efficiently as possible. Without the imminent threat of alien invasion we can have debates about how much time to spend working vs leisure time - should we make 24 hours per week the new normal work week? But if the threat of annihilation is looming, then the discussion should be about how to get as many people as possible working as much as possible. Currently 1/4 of the world population lacks access to safe drinking water [1]; there's a plan to achieve "universal and equitable access to safe and affordable drinking water for all by 2030". But 2030 isn't soon enough; another 8 years in which 1/4 of children born won't reach their potential due to poor water is unacceptable. Currently 13% of the world population doesn't have access to electricity and 40% doesn't have access to clean fuels for cooking [2]. Lack of energy access reduces health and opportunities for education. Healthcare is another major obstacle to human development, and therefore to economic development. Even some allegedly first-world countries like the US lack universal affordable healthcare. I think we could reasonably get safe water to 99% of the world population before 2025 if we tried hard (i.e. applied a small fraction of the resources of a single war to it). Getting electricity to 95% of the world population and clean cooking fuels to 90% of the world population are probably achievable goals for 2025 as well. Healthcare is a slightly harder problem, as we need to train more nurses and doctors. A registered nurse apparently needs 3 years of training after completing high school. We may have to improve high schools to get more students up to the standard of nursing degrees. If it takes 3 years to improve schools from year 9 up, then 3 years to get more high school graduates, and then 3 years of nursing training, it would take about 9 years to get an increase in nurses. Doing this would require increasing the capacity of universities and making university almost free (as it was for decades). So in about 2031 we could start sending a significant number of nurses from developed countries to help out developing countries. Becoming a doctor apparently requires 8 years of study plus a minimum of 3 years residency. So if doctors were entirely trained in first-world countries, then we wouldn't be able to send many doctors to developing countries until 2039. If the residency was performed in other countries, then it could be as early as 2036. According to the WHO, currently only half the world's population has adequate healthcare [3]. To get adequate healthcare to the world we need to more than double the number of doctors, because currently we don't have enough even in countries with decent healthcare systems, such as Australia. It would probably take until at least 2060 to get enough doctors trained. The end goal of course would be to have every country able to train enough of its citizens to provide all medical services, but countries that have serious widespread healthcare problems that reduce the number of people who can pursue higher education will have difficulty with that until some of the healthcare problems are alleviated.

Education Obviously education is important to all achievements. Currently education seems very poorly run; it is possible to create a school system that teaches children effectively without the bullying that is common in Australia and without the sort of pressure that South Korea is infamous for. One of the main issues to resolve with the school system is the idea that everyone should learn at the same speed; that goal can only be achieved by making the majority of the students learn slowly. Students should be able to freely skip ahead as their skill permits and finish school at any age. Also, high school isn't for everyone; the tech schools that teach trades need to be brought back.

Deceiving Aliens A plot point in the TriSolar series is that the aliens can see each other's thoughts; their local communication (their equivalent to talking) is based on reading each other's thoughts, without the possibility of deception. While deceptive written communication is potentially possible for them, they haven't developed skills in that area. As a first step towards exploiting this, humans could focus more on linguistic development that increases language complexity, such as the way the English language adopts words from other languages and gives them slightly different meanings - for example, the difference between "driver" and "chauffeur", or between "dog" and "hound", is not obvious to many Europeans who otherwise speak English fluently. When involved in conversation it's possible to convey meaning without directly stating things; this is used extensively by people who are interested in security. My observations of this are based on conversations with people who do government work, but I imagine that criminal organisations also do similar things for similar reasons. An increased focus on poetry in schools might be helpful in developing skills for conveying ideas to people who think in human ways, where the message is unclear to non-humans who have no experience of deception. I wonder whether the ability to understand human poetry would make aliens less hostile to humans; if they can think like us, then they would be less likely to want to exterminate us. Poker is a game that depends on the ability to deceive others; I've never been any good at it. I wonder if making it part of the school curriculum would help improve the overall human ability to deceive aliens. I don't think that such schools would become dens of sociopathy as depicted in Kakegurui, but it might have some negative results. Spreading education to a larger portion of the world's population requires more use of electronic education. Anything learned via text can be more easily assimilated by aliens than things that are learned directly from other people. For high school and the basics of a university degree this is fine. But for more advanced education, having a large face-to-face component might help keep the value away from the aliens.

More Ideas? What do you think I missed on this list? I wasn't trying to list every possibility, just the more important ones. Also, for any goals other than increasing inequality for its own sake, we should improve health and education for the world.

14 December 2022

Russ Allbery: Review: Contact

Review: Contact, by Carl Sagan
Publisher: Pocket Books
Copyright: 1985
Printing: October 1986
ISBN: 0-671-43422-5
Format: Mass market
Pages: 434
Contact is a standalone first-contact science fiction novel. Carl Sagan (1934-1996) was best known as a non-fiction writer, astronomer, and narrator of the PBS non-fiction program Cosmos. This is his first and only novel. Ellie Arroway is the director of Project Argus, a radio telescope array in the New Mexico desert whose primary mission is SETI: the search for extra-terrestrial intelligence by scanning the skies for unexpected radio signals. Its assignment to SETI is controversial; there are radio astronomy projects waiting, and although 25% of the telescope time is assigned to non-SETI projects, some astronomers think the SETI mission should be scrapped and Argus fully diverted to more useful research. That changes overnight when Argus picks up a signal from Vega, binary pulses representing the sequence of prime numbers. The signal of course doesn't stop when the Earth rotates, so Ellie and her team quickly notify every radio observatory they can get hold of to follow the signal as it passes out of their line of sight. Before long, nearly every country with a radio telescope is involved, and Russian help is particularly vital since they have ship-mounted equipment. The US military and intelligence establishment isn't happy about this and makes a few attempts to shove the genie back into the bottle and keep any further discoveries secret, without a lot of success. (Sagan did not anticipate the end of the Cold War, and yet ironically relations with the Russians in his version of the 1990s are warmer by far than they are today. Not that this makes the military types any happier.) For better or worse, making sense of the alien signal becomes a global project. You may be familiar with this book through its 1997 movie adaptation starring Jodie Foster. What I didn't know before reading this book is that it started life as a movie treatment, co-written with Ann Druyan, in 1979. When the movie stalled, Sagan expanded it into a novel. (Given the thanks to Druyan in the author's note, it may not be far wrong to name her as a co-author.) If you've seen the movie, you will have a good idea of what will happen, but the book gives the project a more realistic international scope. Ellie has colleagues carefully selected from all over the world, including for the climactic moment of the story. The biggest problem with Contact as a novel is that Sagan is a non-fiction writer who didn't really know how to write a novel. The long, detailed descriptions of the science and the astronomical equipment fit a certain type of SF story, but the descriptions of the characters, even Ellie, are equally detailed and yet use the same style. The book starts with an account of Ellie's childhood and path into science written like a biography or a magazine profile, not like a novel in which she's the protagonist. The same is true of the other characters: we get characterization of a sort, but the tone ranges from Wikipedia article to long-form essay and never quite feels like a story. Ellie is the most interesting character in the book, partly because the way Sagan writes her is both distant and oddly compelling. Sagan (or perhaps Druyan?) tries hard to show what life is like for a woman born in the middle of the 20th century who is interested in science, and I think mostly succeeds, although Ellie's reactions to sexism felt weirdly emotionless.
The descriptions of her relationships are even odder and are the parts where this book felt the least like a novel, but Sagan does sell some of that tone as reflective of Ellie's personality. She's a scientist, the work is the center of her life, and everything else, even when important, is secondary. It doesn't entirely make the writing style work, but it helps. Sagan does attempt to give Ellie a personal development arc related to her childhood and her relationships with her father and step-father. I thought the conclusion to that was neither believable nor anywhere near as important as Sagan thought it was, which was off-putting. Better were her ongoing arguments with evangelical Christians, one of whom is a close-minded ass and the other of whom is a far more interesting character. They felt wedged into this book, and I'm dubious a realistic version of Ellie would have been the person to have those debates, but it's a subject Sagan clearly has deep personal interest in, and that shows in how they're written. The other problem with Contact as a novel is that Sagan does not take science fiction seriously as a genre, instead treating it as a way to explore a thought experiment. To a science fiction reader, or at least to this science fiction reader, the interesting bits of this story involve the aliens. Those are not the bits Sagan is interested in. His attention is on how this sort of contact, and project, would affect humanity and human politics. We do get some more traditional science fiction near the end of the book, but Sagan then immediately backs away from it, minimizes that part of the story, and focuses exclusively on the emotional and philosophical implications for humans of his thought experiment. Since I found his philosophical musings about agnosticism and wonder and discovery less interesting than the actual science fiction bits, I found this somewhat annoying. The ending felt a bit more like a cheap trick than a satisfying conclusion. Interestingly, this entire novel is set in an alternate universe, for reasons entirely unexplained (at least that I noticed) in the book. It's set in the late 1990s but was written in 1985, so of course this is an alternate future, but the 1985 of this world still isn't ours. Yuri Gagarin was the first man to set foot on the moon, and the space program and the Cold War developed in subtly different ways. I'm not sure why Sagan made that choice, but it felt to me like he was separating his thought experiment farther from our world to give the ending more plausible deniability. There are, at the time of the novel, permanent orbital colonies for (mostly) rich people, because living in space turns out to greatly extend human lifespans. That gives Sagan an opportunity to wax poetic about the life-altering effects of seeing Earth from space, which in his alternate timeline rapidly sped up nuclear disarmament and made the rich more protective of the planet. This is an old bit of space boosterism that isn't as common these days, mostly because it's become abundantly clear that human psychology doesn't work this way. Sadly, rich sociopaths remain sociopaths even when you send them into space. I was a bit torn between finding Sagan's predictions charmingly hopeful and annoyingly daft. I don't think this novel is very successful as a novel. It's much longer than it needs to be and long parts of it drag.
But it's still oddly readable; even knowing the rough shape of the ending in advance, I found it hard to put down once the plot properly kicks into gear about two-thirds of the way through. There's a lot in here that I'd argue with Sagan about, but he's thoughtful and makes a serious attempt to work out the political and scientific ramifications of such a discovery in detail. Despite the dry tone, he does a surprisingly good job capturing the anticipation and excitement of a very expensive and risky scientific experiment. I'm not sure I would recommend this book to anyone, but I'm also the person who found Gregory Benford's Timescape to be boring and tedious, despite its rave reviews as a science fiction novel about the practice of science. If that sort of book is more your jam, you may like Contact better than I did. Rating: 6 out of 10

25 October 2022

Arturo Borrero González: Netfilter Workshop 2022 summary

[Netfilter logo] This is my report from the Netfilter Workshop 2022. The event was held on 2022-10-20/2022-10-21 in Seville, and the venue was the offices of Zevenet. We started on Thursday with Pablo Neira (head of the project) giving a short welcome / opening speech. The previous iteration of this event was held virtually in 2020, two years ago; in 2021 we were unable to meet either in person or online. This year the number of participants was just eight people, which allowed the setup to be a bit more informal. We had kind of an un-conference style meeting, in which whoever had something prepared just went ahead and opened a topic for debate.

In the opening speech, Pablo did a quick recap of the legal problems the Netfilter project had a few years ago, a topic that was settled for good some months ago, in January 2022. There was no news on this front, which was definitely a good thing.

Moving into the technical topics, the workshop proper, Pablo started by commenting on the recent developments to instrument a way to perform inner matching for tunnel protocols. The current implementation supports VXLAN, IPIP, GRE and GENEVE: using nftables you can match packet headers that are encapsulated inside these protocols. He mentioned the design and the goal, which was to have a kernel-space setup that allows adding more protocols by just patching userspace. In that sense, more tunnel protocols will be supported soon, such as IP6IP, UDP, and ESP. Pablo requested our opinion on whether nftables should generate the matching dependencies. For example, if a given tunnel is UDP-based, a dependency match should be there, otherwise the rule won't work as expected. The agreement was to assist the user in the setup when possible and, if not, print clear error messages. By the way, this inner matching is pure stateless packet filtering; doing inner-conntracking is an open topic that will be worked on in the future.

Pablo continued with the next topic: nftables automatic ruleset optimizations. The times of linear ruleset evaluation are over, but some people have a hard time understanding / creating rulesets that leverage maps, sets, and concatenations (a minimal sketch of this set-based style follows below). This is where the ruleset optimizations kick in: they can transform a given ruleset into a more optimal one by using such advanced data structures. This is purely about optimizing the ruleset, not about validating its usefulness, which could be another interesting project. There were a couple of problems mentioned, however: the ruleset optimizer can be slow, O(n!) in the worst case, and the user needs to use nested syntax. More improvements are to come in the future.

Next was Stefano Brivio's turn (Red Hat engineer). He had been involved lately in a couple of migrations to nftables, in particular libvirt and KubeVirt. We were pointed to https://libvirt.org/firewall.html, and Stefano walked us through the 3 or 4 different virtual networks that libvirt can create. He evaluated some options to generate efficient rulesets in nftables to instrument such networks, and commented on a couple of ideas: having a null matcher in the nftables set expression, or perhaps having a kind of subset, something similar to a view in a SQL database. The room spent quite a bit of time debating how the nft_lookup API could be extended to support such new search operations. We also discussed whether intermediate facilities such as firewalld could provide the abstraction levels that could make developers more comfortable. Using firewalld may also have the advantage that coordination between different system components writing rulesets to nftables is handled by firewalld itself, and developers are freed of the responsibility of doing it right.
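As a rough illustration of what leveraging sets means here - my own minimal, untested sketch, not an example shown at the workshop, with made-up table and chain names - a run of rules that differ only in one value can collapse into a single rule with an anonymous set:

```
# setup assumed by the example
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0 ; }'

# linear style: three rules, differing only in the matched port
nft add rule inet filter input tcp dport 22 accept
nft add rule inet filter input tcp dport 80 accept
nft add rule inet filter input tcp dport 443 accept

# set-based style: one rule and one lookup, via an anonymous set
nft add rule inet filter input tcp dport '{ 22, 80, 443 }' accept
```

If I remember correctly, recent nftables releases expose the optimizer mentioned above through an -o/--optimize flag to nft, but check your version's man page before relying on that.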
Next was Fernando F. Mancera (Red Hat engineer). He wanted to improve error reporting when deleting tables/chains/rules with nftables. In general, there are some inconsistencies in how tables can be deleted (or flushed), and there seems to be no correct way to make a single table go away with all its content in a single command. The room agreed that the commands destroy table and delete table should be defined consistently, with the following meanings: This topic diverted into another: how to reload/replace a ruleset but keep stateful information (such as counters).

Next was Phil Sutter (Netfilter coreteam member and Red Hat engineer). He was interested in discussing options to make iptables-nft backward compatible. The use case he brought up was simple: what happens if a container running iptables 1.8.7 creates a ruleset with features not supported by 1.8.6? A later container running 1.8.6 may fail to operate. Phil's first approach was to attach additional metadata to rules to assist older iptables-nft in decoding and printing the ruleset. But in general, there are no obvious or easy solutions to this problem. Some people are mixing different tooling versions, and there is no way all cases can be predicted/covered; iptables-nft already refuses to work in some of the most basic failure scenarios. Another way to approach the issue could be to introduce some kind of support for printing raw expressions in iptables-nft, like -m nft xyz, which feels ugly but may work. We also explored playing with the semantics of release version numbers, and another idea: storing strings in the nft rule userdata area with the equivalent matching information for older iptables-nft. In fact, what Phil may have been looking for is not backward but forward compatibility. Phil was undecided about which path to follow, but perhaps the most common-sense approach is to fall back to a major release version bump (2.x.y) and declare compatibility breakage with older iptables 1.x.y.

That was pretty much it for the first day. We had dinner together and went to sleep for the next day.

[Photo: the room]

The second day was opened by Florian Westphal (Netfilter coreteam member and Red Hat engineer). Florian has been trying to improve nftables performance in kernels with RETPOLINE mitigations enabled. He commented that several workarounds have been collected over the years to avoid the performance penalty of such mitigations; the basic strategy is to avoid indirect function calls in the kernel. Florian also described how BPF programs work around this more effectively. And actually, Florian tried translating nf_hook_slow() to BPF. Some preliminary benchmark results were shown, with about a 2% performance improvement in MB/s and PPS. The flowtable infrastructure especially benefits from this approach: the software flowtable infrastructure already offers a 5x performance improvement over the classic forwarding path, and the change being researched by Florian would be an addition on top of that. We then moved into discussing the meeting Florian had with Alexei in Zurich. My personal opinion was that Netfilter offers interesting user-facing interfaces and semantics that BPF does not, whereas BPF may be more performant in certain scenarios. The idea of both things going hand in hand may feel natural to some people.
Others also shared my view, but no particular agreement was reached on this topic. Florian will probably continue exploring options on that front.

The next topic was opened by Fernando. He wanted to discuss Netfilter involvement in Google Summer of Code and Outreachy. Pablo had some personal stuff going on last year that prevented him from engaging in such projects. After all, GSoC is not fundamental or a priority for Netfilter. Also, Pablo mentioned the lack of support from others in the project for mentoring activities. There was no particular decision made here; Netfilter may be present again in such initiatives in the future, perhaps under the umbrella of other organizations.

Again, Fernando proposed the next topic: nftables JSON support. Fernando shared his plan to go over all features and introduce programmatic tests for them. He also mentioned that the nftables wiki was incomplete and couldn't be used as a reference for missing tests. Phil suggested running the nftables Python test suite in JSON mode, which should complain about missing features; the py test suite should cover pretty much all statements and variations in how the nftables expressions are invoked.

Next, Phil commented on nftables xtables support, that is, supporting legacy xtables extensions in nftables. The most prominent problem was that some translations had corner cases that resulted in a listed ruleset that couldn't be fed back into the kernel. Also, iptables-to-nftables translations can be sloppy, and the resulting rule won't work in some cases. In general, nft list ruleset | nft -f may fail on rulesets created by iptables-nft, and there is no trivial way to solve it. Phil also commented on potential iptables-tests.py speed-ups. Running the test suite can take a very long time depending on the hardware; Phil will try to re-architect it so it runs faster. Some alternatives had been explored, including collecting all rules into a single iptables-restore run instead of hundreds of individual iptables calls.

The next topic was documentation on the nftables wiki. Phil is interested in having all nftables code flows documented, and presented some improvements on that front. We are trying to organize all developer-oriented docs on a mediawiki portal, but the extension was not active yet. Since I worked at the Wikimedia Foundation, the whole room stared at me, so in the end I kind of committed to exploring and enabling the mediawiki portal extension. Note to self: is this perhaps https://www.mediawiki.org/wiki/Portals ?

The next presentation was by Pablo. He had a list of assorted topics for quick review and comment. Following this, a new topic was introduced by Stefano. He wanted to talk about nft_set_pipapo: documentation, what to do next, etc. He gave a nice explanation of how the pipapo algorithm works for element inserts, lookups, and deletions (see the sketch below for the kind of set involved). The source code is pretty well documented, by the way. He showed performance measurements of different data types being stored in the structure. After some lengthy debate on how to introduce changes without breaking usage for users, he declared some action items: writing more docs, addressing problems with non-atomic set reloads, and a potential rework of nft_rbtree.

After that, the next topic was kubernetes & netfilter, also by Stefano. Actually, this topic was very similar to what we had already discussed regarding libvirt. Developers want to reduce packet matching effort, but also often don't leverage nftables' most performant features, like sets, maps, or concatenations.
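To give a concrete picture of the concatenations mentioned in the pipapo and kubernetes discussions above, here is my own illustrative, untested sketch (table, set, and element values are made up; concatenated interval sets like this one are the case the pipapo backend serves):

```
# a set keyed on a concatenation of an address range and a port range
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0 ; }'
nft add set inet filter allowed '{ type ipv4_addr . inet_service ; flags interval ; }'
nft add element inet filter allowed '{ 10.0.0.0/24 . 80-89, 192.0.2.7 . 443 }'

# one rule and one lookup match on both fields at once
nft add rule inet filter input ip saddr . tcp dport @allowed accept
```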
Some Red Hat developers are already working on replacing everything with native nftables & firewalld integrations. But some rule generators are very bad; Kubernetes (kube-proxy) is a known case. Developers simply won't learn how to code better ruleset generators. There was a good question floating around: what are people missing on first encounter with nftables? The Netfilter project doesn't have a training or marketing department or anything like that; we cannot force-educate developers on how to use nftables in the right way. Perhaps we need to create a set of dedicated guidelines, or best practices, in the wiki for app developers that rely on nftables. Jozsef Kadlecsik (Netfilter coreteam) supported this idea, and suggested going beyond that: such documents should be written exclusively from the nftables point of view, not approaching the docs as a comparison to the old iptables semantics.

Related to that last topic, next was Laura García (Zevenet engineer, and venue host). She shared the same information as she presented to the Kubernetes network SIG in August 2020. She walked us through nftlb and kube-nftlb, a proof-of-concept replacement for kube-proxy based on nftlb that can outperform it. For whatever reason, kube-nftlb wasn't adopted by the upstream Kubernetes community. She also covered the latest changes to nftlb and some missing features, such as integration with nftables egress. nftlb is being extended to be a full proxy service and a more robust overall solution for service abstractions. In a nutshell, nftlb uses a templated ruleset and only adds elements to sets, which is exactly the right way to use the nftables framework; some other projects should follow its example (a sketch of this pattern appears below). The performance numbers are impressive: from the early days it was clear that it was outperforming classical LVS-DSR by 10x.

I used this opportunity to bring up a topic that I wanted to discuss. I've seen some SRE coworkers talking about katran as a replacement for traditional LVS setups. This software is an XDP/BPF-based solution for load balancing. I was puzzled about what this software had to offer versus, for example, nftlb or any other nftables-based solution. I commented on the highlights of katran, and we discussed the nftables equivalents. nftlb is a simple daemon which does everything through a JSON-enabled REST API; it is already packaged in Debian, ready to use, whereas katran feels more like a collection of steps that you need to run in a certain order to get it working. All the hashing, caching, HA without state sharing, and backend weight selection features of katran are already present in nftlb. To work in a pure L3/ToR datacenter network setting, katran uses IPIP encapsulation; they can't just mangle the MAC address as in traditional DSR, because the backend server is in a different L3 domain. It turns out nftables has an nft_tunnel expression that can do this encapsulation for complete feature parity. It is only available in the kernel, but it can be made available easily in the userspace utility too. Also, we discussed some limitations of katran, for example its inability to handle IP fragmentation, IP options, and potentially others not documented anywhere. This seems to be common with XDP/BPF programs, because handling all possible network scenarios would over-complicate the BPF programs, and at that point you are probably better off using the normal Linux network stack and nftables. In summary, we agreed that nftlb can pretty much offer the same as katran, in a more flexible way.
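Here is a minimal sketch of the templated-ruleset pattern credited to nftlb above - my own rough approximation with made-up names, not nftlb's actual ruleset, and untested: the rules are loaded once against named sets/maps, and runtime reconfiguration only adds or deletes elements, which is cheap and atomic:

```
# one-time template: a NAT rule that consults a named map
nft add table ip lb
nft add chain ip lb pre '{ type nat hook prerouting priority -100 ; }'
nft add map ip lb backends '{ type inet_service : ipv4_addr ; }'
nft add rule ip lb pre dnat to tcp dport map @backends

# runtime changes touch only map elements, never the rules
nft add element ip lb backends '{ 80 : 192.0.2.10 }'
nft add element ip lb backends '{ 443 : 192.0.2.11 }'
nft delete element ip lb backends '{ 80 : 192.0.2.10 }'
```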
[Group photo] Finally, after many interesting debates over two days, the workshop ended. We all agreed on the need to extend it to 3 days next time, since 2 days feel too intense and too short for all the topics worth discussing. That's all on my side! I really enjoyed this Netfilter workshop round.

31 May 2022

Russell Coker: Links May 2022

dontkillmyapp.com is a web site about Android phone vendors who make their phones kill your apps when you don't want them to [1]. One of the many reasons why Pine and Purism offer the promise of better phones. This blog post about the Librem 5 camera is interesting [2]. Currently the Librem 5 camera isn't very usable for me, as I just want to point and shoot, but it apparently works well for experts. Taking RAW photos is a good feature that I'd like to have in all my camera phones. The Russian government, apparently unaware of the Streisand Effect, has threatened Wikipedia for publishing facts about the war against Ukraine [3]. We all should publicise this as much as possible. The Wikipedia page is The 2022 Russian Invasion of Ukraine [4]. The Jerusalem Post has an interesting article about whether Mein Kampf should be published and studied in schools [5]. I don't agree with the conclusions about studying that book in schools, but I think that the analysis of the situation in that article is worth reading. One of the issues I have with teaching Mein Kampf and similar books is the quality of social studies teaching at the school I attended; I'm pretty sure that teaching Mein Kampf in any way at that school would just turn out more neo-Nazis. Maybe better schools (i.e. not Christian private schools) could have productive classes about Mein Kampf. Vanity Fair has an interesting article about the history of the private jet [6]. Current Affairs has an unusually informative article about why blockchain currencies should die in a fire [7]. The Nazi use of methamphetamine is well known, but Time has an insightful article about lesser-known aspects of meth use [8]. How they considered meth as separate from the drugs they claimed were for the "morally inferior" is interesting. George Monbiot wrote an insightful article comparing the 2008 bank collapse to the current system of unstable food supplies [9]. JWZ wrote an insightful blog post about Following the Money regarding the push to reopen businesses even though the pandemic is far from over [10]. His conclusion is that commercial property owners are pushing the governments to give them more money. PsyPost has an interesting article on the correlation between watching Fox News and lacking knowledge of science and of society [11]. David Brin wrote an interesting paper about Disputation and how it can benefit society [12]. I think he goes too far in some of his claims, but he has interesting points. The overall idea of a Disputation arena for ideas is a really good one. I previously had a similar idea, on a much smaller scale, of having debates via Wiki [13].

28 April 2022

Bits from Debian: DebConf22 bursary applications and call for papers are closing in less than 72 hours!

If you intend to apply for a DebConf22 bursary and/or submit an event proposal and have not yet done so, please proceed as soon as possible! Bursary applications for DebConf22 will be accepted until May 1st at 23:59 UTC. Applications submitted after this deadline will not be considered. You can apply for a bursary when you register for the conference. Remember that giving a talk or organising an event counts towards your bursary; if you have a submission to make, submit it even if it is only sketched out. You will be able to detail it later. DebCamp plans can be entered on the usual Sprints page at the Debian wiki. Please make sure to double-check your accommodation choices (dates and venue). Details about accommodation arrangements can be found on the accommodation page. Event proposals will also be accepted until May 1st at 23:59 UTC. Events are not limited to traditional presentations or informal sessions (BoFs): we welcome submissions of tutorials, performances, art installations, debates, or any other format of event that you think would be of interest to the Debian community. Regular sessions may be either 20 or 45 minutes long (including time for questions); other kinds of sessions (workshops, demos, lightning talks, and so on) could have different durations. Please choose the most suitable duration for your event and explain any special requests. You can submit it here. The 23rd edition of DebConf will take place from July 17th to 24th, 2022 at the Innovation and Training Park (ITP) in Prizren, Kosovo, and will be preceded by DebCamp, from July 10th to 16th. See you in Prizren!

18 March 2022

Bits from Debian: DebConf22 registration and call for proposals are open!

Registration for DebConf22 is now open. The 23rd edition of DebConf will take place from July 17th to 24th, 2022 at the Innovation and Training Park (ITP) in Prizren, Kosovo, and will be preceded by DebCamp, from July 10th to 16th. Along with the registration, the DebConf content team announced the call for proposals. The deadline to submit a proposal to be considered for the main schedule is April 15th, 2022 23:59:59 UTC (Friday). DebConf is an event open to everyone, no matter how you identify yourself or how others perceive you. We want to increase the visibility of our diversity and work towards inclusion in the Debian Project, drawing our attendees from people just starting their Debian journey to seasoned Debian Developers and active contributors in different areas like packaging, translation, documentation, artwork, testing, specialized derivatives, user support, and many others. In other words, all are welcome. To register for the event, log into the registration system and fill out the form. You will be able to edit and update your registration at any point. However, in order to help the organizers have a better estimate of how many people will attend the event, we would appreciate it if you could access the system and confirm (or cancel) your participation in the conference as soon as you know whether you will be able to come. The last day to confirm or cancel is July 1st, 2022 23:59:59 UTC. If you don't confirm, or if you register after this date, you can come to DebConf22 but we cannot guarantee availability of accommodation, food, and swag (t-shirt, bag, and so on). For more information about registration, please visit registration information.

Submitting an event You can now submit an event proposal. Events are not limited to traditional presentations or informal sessions (BoFs): we welcome submissions of tutorials, performances, art installations, debates, or any other format of event that you think would be of interest to the Debian community. Regular sessions may be either 20 or 45 minutes long (including time for questions); other kinds of sessions (workshops, demos, lightning talks, and so on) could have different durations. Please choose the most suitable duration for your event and explain any special requests. In order to submit a talk, you will need to create an account on the website. We suggest that Debian Salsa account holders (including DDs and DMs) use their Salsa login when creating an account. However, this isn't required, as you can sign up with an e-mail address and password.

Bursary for travel, accommodation and meals In an effort to widen the diversity of DebConf attendees, the Debian Project allocates a part of the financial resources obtained through sponsorships to pay for bursaries (travel, accommodation, and/or meals) for participants who request this support when they register. As resources are limited, we will examine the requests and decide who will receive the bursaries. They will be destined: Giving a talk, organizing an event or helping during DebConf22 is taken into account when deciding upon your bursary, so please mention them in your bursary application. For more information about bursaries, please visit applying for a bursary to DebConf. Attention: registration for DebConf22 will remain open until the conference starts, but the deadline to apply for bursaries using the registration form is May 1st, 2022 23:59:59 UTC. This deadline is necessary in order for the organizers to have time to analyze the requests, and for successful applicants to prepare for the conference. DebConf would not be possible without the generous support of all our sponsors, especially our Platinum Sponsors Lenovo and Infomaniak. DebConf22 is accepting sponsors; if you are interested, or think you know of others who would be willing to help, please get in touch!

1 March 2022

Russell Coker: SAGE (ITPA) Spam

In 2008 I joined SAGE (the System Administrators Guild of Australia). It was a professional society for people doing sysadmin work (running computer servers). I quit when I found that the level of clue was lower than hoped and that members used the code of ethics as nothing but a way to score points in online debates. After quitting, SAGE kept emailing me and wouldn't respect my request to be removed from all lists, so I had to block their mail server. SAGE has in recent times changed its name to ITPA (Information Technology Professionals Association) and is still sending me email. I've just sent yet another unsubscribe request. How many years of sending unwanted email can be put down to incompetence, and when should we assume it's malice? They have been doing this for over a decade now. Even if it's incompetence, that's still damning given that it's incompetence in the main topic of the organisation. Here is the ITPA Code of Ethics [1]; as you can see, there is no reference to spam. The nearest seems to be "I will continue to enlarge my understanding of the social and legal issues that arise in computing environments, and I will communicate that understanding to others when appropriate". So it's great that they aren't breaking their own code of ethics :-# but I'd still like them to stop emailing me.

3 January 2022

Ian Jackson: Debian's approach to Rust - Dependency handling

tl;dr: Faithfully following upstream semver, in Debian package dependencies, is a bad idea.

Introduction I have been involved in Debian for a very long time. And I've been working with Rust for a few years now. Late last year I had cause to try to work on Rust things within Debian. When I did, I found it very difficult. The Debian Rust Team were very helpful. However, the workflow and tooling require very large amounts of manual clerical work - work which it is almost impossible to do correctly since the information required does not exist. I had wanted to package a fairly straightforward program I had written in Rust, partly as a learning exercise. But, unfortunately, after I got stuck in, it looked to me like the effort would be wildly greater than I was prepared for, so I gave up. Since then I've been thinking about what I learned about how Rust is packaged in Debian. I think I can see how to fix some of the problems. Although I don't want to go charging in and try to tell everyone how to do things, I felt I ought at least to write up my ideas. Hence this blog post, which may become the first of a series. This post is going to be about semver handling. I see problems with other aspects of dependency handling and source code management and traceability as well, and of course if my ideas find favour in principle, there are a lot of details that need to be worked out, including some kind of transition plan.

How Debian packages Rust, and build vs runtime dependencies Today I will be discussing almost entirely build-dependencies; Rust doesn't (yet?) support dynamic linking, so built Rust binaries don't have Rusty dependencies. However, things are a bit confusing because even the Debian binary packages for Rust libraries contain pure source code. So for a Rust library package, building the Debian binary package from the Debian source package does not involve running the Rust compiler; it's just file-copying and format conversion. The library's Rust dependencies do not need to be installed on the build machine for this. So I'm mostly going to be talking about Depends fields, which are Debian's way of talking about runtime dependencies, even though they are used only at build-time. The way this works is that some ultimate leaf package (which is supposed to produce actual executable code) Build-Depends on the libraries it needs, and those Depends on their under-libraries, so that everything needed is installed.

What do dependencies mean and what are they for anyway? In systems where packages declare dependencies on other packages, it generally becomes necessary to support versioned dependencies. In all but the most simple systems, this involves an ordering (or similar) on version numbers and a way for a package A to specify that it depends on certain versions of B. Both Debian and Rust have this. Rust upstream crates have version numbers and can specify their dependencies according to semver. Debian's dependency system can represent that. So it was natural for the designers of the scheme for packaging Rust code in Debian to simply translate the Rust version dependencies to Debian ones. However, while the two dependency schemes seem equivalent in the abstract, their concrete real-world semantics are totally different. These different package management systems have different practices and different meanings for dependencies. (Interestingly, the Python world also has debates about the meaning and proper use of dependency versions.)
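To make the translation concrete, here is a minimal sketch using a hypothetical crate; the Debian Rust Team's real package-naming and version-mangling scheme differs in detail, so treat this as illustrative only:

# Upstream Cargo.toml: an ordinary (caret) semver requirement
[dependencies]
rand = "0.8.5"    # shorthand for: >= 0.8.5 and < 0.9.0

# A naive translation into a Debian Depends field, carrying the
# same range (sketch, not the actual generated package names):
Depends: librust-rand-dev (>= 0.8.5), librust-rand-dev (<< 0.9)

Note how the Debian relation faithfully carries the upstream semver range; the argument below is that carrying it faithfully is precisely the mistake.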
The epistemological problem Consider some package A which is known to depend on B. In general, it is not trivial to know which versions of B will be satisfactory, i.e., whether a new B, with potentially-breaking changes, will actually break A. Sometimes tooling can be used which calculates this (eg, the Debian shlibdeps system for runtime dependencies) but this is unusual - especially for build-time dependencies. Which versions of B are OK can normally only be discovered by a human consideration of changelogs etc., or by having a computer try particular combinations. Few ecosystems with dependencies, in the Free Software community at least, make an attempt to precisely calculate the versions of B that are actually required to build some A. So it turns out that there are three cases for a particular combination of A and B: it is believed to work; it is known not to work; and: it is not known whether it will work. And, I am not aware of any dependency system that has an explicit machine-readable representation for the unknown state, so that they can say something like "A is known to depend on B; versions of B before v1 are known to break; version v2 is known to work". (Sometimes statements like that can be found in human-readable docs.) That leaves two possibilities for the semantics of a dependency "A depends B, version(s) V..W": Precise: A will definitely work if B matches V..W, and Optimistic: We have no reason to think B breaks with any of V..W. At first sight the latter does not seem useful, since how would the package manager find a working combination? Taking Debian as an example, which uses optimistic version dependencies, the answer is as follows: the primary information about what package versions to use is not the dependencies alone, but mostly which Debian release is being targeted. (Other systems using optimistic version dependencies could use the date of the build, i.e. use only packages that are "current".)
Comparing the two, Precise vs Optimistic:

People involved in version management:
- Precise: package developers, downstream developers/users.
- Optimistic: package developers, downstream developers/users, distribution QA and release managers.

Package developers declare versions V and dependency ranges V..W so that:
- Precise: it definitely works.
- Optimistic: a wide range of B can satisfy the declared requirement.

The principal version data used by the package manager:
- Precise: only dependency versions.
- Optimistic: contextual, eg, releases - set(s) of packages available.

Version dependencies are for:
- Precise: selecting working combinations (out of all that ever existed).
- Optimistic: sequencing (ordering) of updates; QA.

Expected use pattern by a downstream:
- Precise: downstream can combine any declared-good combination.
- Optimistic: use a particular release of the whole system; mixing-and-matching requires additional QA and remedial work.

Downstreams are protected from breakage by:
- Precise: pessimistically updating versions and dependencies whenever anything might go wrong.
- Optimistic: whole-release QA.

A substantial deployment will typically contain:
- Precise: multiple versions of many packages.
- Optimistic: a single version of each package, except where there are actual incompatibilities which are too hard to fix.

Package updates are driven by:
- Precise: top-down - the depending package updates the declared metadata.
- Optimistic: bottom-up - the depended-on package is updated in the repository for the work-in-progress release.
So, while Rust and Debian have systems that look superficially similar, they contain fundamentally different kinds of information. Simply representing the Rust versions directly in Debian doesn't work. What is currently done by the Debian Rust Team is to manually patch the dependency specifications, to relax them. This is very labour-intensive, and there is little automation supporting either the decision-making or actually applying the resulting changes.

What to do

Desired end goal To update a Rust package in Debian, that many things depend on, one need simply update that package. Debian's sophisticated build and CI infrastructure will try building all the reverse-dependencies against the new version. Packages that actually fail against the new dependency are flagged as suffering from release-critical problems. Debian Rust developers then update those other packages too. If the problems turn out to be too difficult, it is possible to roll back. If a problem with a depending package is not resolved in a timely fashion, priority is given to updating core packages, and the depending package falls by the wayside (since it is empirically unmaintainable, given available effort). There is no routine manual patching of dependency metadata (or of anything else).

Radical proposal Debian should not precisely follow upstream Rust semver dependency information. Instead, Debian should optimistically try the combinations of packages that we want to have. The resulting breakages will be discovered by automated QA; they will have to be fixed by manual intervention of some kind, but usually, simply updating the depending package will be sufficient. This no longer ensures (unlike the upstream Rust scheme) that the result is expected to build and work if the dependencies are satisfied. But as discussed, we don't really need that property in Debian. More important is the new property we gain: that we are able to mix and match versions that we find work in practice, without a great deal of manual effort. Or to put it another way, in Debian we should do as a Rust upstream maintainer does when they do the regular "update dependencies for new semvers" task: we should update everything, see what breaks, and fix those. (In theory a Rust upstream package maintainer is supposed to do some additional checks or something. But the practices are not standardised and any checks one does almost never reveal anything untoward, so in practice I think many Rust upstreams just update and see what happens. The Rust upstream community has other mechanisms - often, reactive ones - to deal with any problems. Debian should subscribe to those same information sources, eg RustSec.)

Nobbling cargo Somehow, when cargo is run to build Rust things against these Debian packages, cargo's dependency system will have to be overridden so that the version of the package that is actually selected by Debian's package manager is used by cargo without complaint. We probably don't want to change the Rust version numbers of Debian Rust library packages, so this should be done by either presenting cargo with an automatically-massaged Cargo.toml where the dependency version restrictions are relaxed, or by using a modified version of cargo which has special option(s) to relax certain dependencies.
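To illustrate the automatically-massaged-Cargo.toml variant just mentioned, a sketch with a hypothetical crate (no such rewriting tool currently exists, and the relaxed operator chosen here is my assumption):

# Dependency as declared upstream, with caret semantics:
[dependencies]
bar = "0.3.2"     # means: >= 0.3.2 and < 0.4.0

# As automatically rewritten before cargo sees it, so that whatever
# version of bar Debian's package manager actually installed is
# accepted without complaint:
[dependencies]
bar = ">= 0.3"    # or even "*", if we trust Debian's selection entirely

Cargo already accepts comparison requirements like ">= 0.3" and the wildcard "*", so such a rewrite would stay within stock Cargo.toml syntax.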
Handling breakage Rust packages in Debian should already be provided with autopkgtests so that ci.debian.net will detect build breakages. Build breakages will stop the updated dependency from migrating to the work-in-progress release, Debian testing. To resolve this, and allow forward progress, we will usually upload a new version of the dependency containing an appropriate Breaks, and either file an RC bug against the depending package, or update it. This can be done after the upload of the base package. Thus, resolution of breakage due to incompatibilities will be done collaboratively within the Debian archive, rather than ad-hoc locally. And it can be done without blocking. My proposal prioritises the ability to make progress in the core, over stability and in particular over retaining leaf packages. This is not Debian's usual approach but given the Rust ecosystem's practical attitudes to API design, versioning, etc., I think the instability will be manageable. In practice fixing leaf packages is not usually really that hard, but it's still work, and the question is what happens if the work doesn't get done. After all, we always have a shortage of effort - and we probably still will, even if we get rid of the makework clerical work of patching dependency versions everywhere (so that usually no work is needed on depending packages).

Exceptions to the one-version rule There will have to be some packages that we need to keep multiple versions of. We won't want to update every depending package manually when this happens. Instead, we'll probably want to set a version number split: rdepends which want version <X will get the old one.

Details - a sketch I'm going to sketch out some of the details of a scheme I think would work. But I haven't thought this through fully. This is still mostly at the handwaving stage. If my ideas find favour, we'll have to do some detailed review and consider a whole bunch of edge cases I'm glossing over. The dependency specification consists of two halves: the depending .deb's Depends (or, for a leaf package, Build-Depends) and the base .deb's Version and perhaps Breaks and Provides. Even though libraries vastly outnumber leaf packages, we still want to avoid updating leaf Debian source packages simply to bump dependencies.

Dependency encoding proposal Compared to the existing scheme, I suggest we implement the dependency relaxation by changing the depended-on package, rather than the depending one. So we retain roughly the existing semver translation for Depends fields. But we drop all local patching of dependency versions. Into every library source package we insert a new Debian-specific metadata file declaring the earliest version that we uploaded. When we translate a library source package to a .deb, the binary package build adds Provides for every previous version. The effect is that when one updates a base package, the usual behaviour is to simply try to use it to satisfy everything that depends on that base package. The Debian CI will report the build or test failures of all the depending packages which the API changes broke. We will have a choice, then:

Breakage handling - update broken depending packages individually If there are only a few packages that are broken, for each broken dependency, we add an appropriate Breaks to the base binary package. (The version field in the Breaks should be chosen narrowly, so that it is possible to resolve it without changing the major version of the dependency, eg by making a minor source change.)
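Putting the two mechanisms together, a sketch of what the generated binary package stanza might look like, with hypothetical crate names and versions (the names and the exact encoding of versions into Provides are placeholders, not the real scheme):

Package: librust-foo-dev
Version: 1.4.0-1
# Generated from the Debian-specific metadata file: every version we
# have uploaded since the declared earliest one is Provided, so that
# existing Depends on older versions remain satisfiable.
Provides: librust-foo-1.3-dev, librust-foo-1.2-dev, librust-foo-1.1-dev
# Added manually once CI shows bar failing against foo 1.4; chosen
# narrowly so bar can be fixed with a minor source change.
Breaks: librust-bar-dev (<< 0.3.2-2)

The point is only the shape: the relaxation lives in the depended-on package, and Breaks records empirically-discovered incompatibilities.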
We can then do one of the following:

Breakage handling - declare new incompatible API in Debian If the API changes are widespread and many dependencies are affected, we should represent this by changing the in-Debian-source-package metadata to arrange for fewer Provides lines to be generated - withdrawing the Provides lines for earlier APIs. Hopefully examination of the upstream changelog will show what the main compat break is, and therefore tell us which Provides we still want to retain. This is like declaring Breaks for all the rdepends. We should do it if many rdepends are affected. Then, for each rdependency, we must choose one of the responses in the bullet points above. In practice this will often be a mass bug filing campaign, or a large update campaign.

Breakage handling - multiple versions Sometimes there will be a big API rewrite in some package, and we can't easily update all of the rdependencies because the upstream ecosystem is fragmented and the work involved in reconciling it all is too substantial. When this happens we will bite the bullet and include multiple versions of the base package in Debian. The old version will become a new source package with a version number in its name. This is analogous to how key C/C++ libraries are handled.

Downsides of this scheme The first obvious downside is that assembling some arbitrary set of Debian Rust library packages, that satisfy the dependencies declared by Debian, is no longer necessarily going to work. The combinations that Debian has tested - Debian releases - will work, though. And at least, any breakage will affect only people building Rust code using Debian-supplied libraries. Another less obvious problem is that because there is no such thing as Build-Breaks (in a Debian binary package), the per-package update scheme may result in no way to declare that a particular library update breaks the build of a particular leaf package. In other words, old source packages might no longer build when exposed to newer versions of their build-dependencies, taken from a newer Debian release. This is a thing that already happens in Debian, with source packages in other languages, though.

Semver violation I am proposing that Debian should routinely compile Rust packages against dependencies in violation of the declared semver, and ship the results to Debian's millions of users. This sounds quite alarming! But I think it will not in fact lead to shipping bad binaries, for the following reasons: The Rust community strongly values safety (in a broad sense) in its APIs. An API which is merely capable of insecure (or other seriously bad) use is generally considered to be wrong. For example, such situations are regarded as vulnerabilities by the RustSec project, even if there is no suggestion that any actually-broken caller source code exists, let alone that actually-broken compiled code is likely. The Rust community also values alerting programmers to problems. Nontrivial semantic changes to APIs are typically accompanied not merely by a semver bump, but also by changes to names or types, precisely to ensure that broken combinations of code do not compile. Or to look at it another way, in Debian we would simply be doing what many Rust upstream developers routinely do: bump the versions of their dependencies, and throw it at the wall and hope it sticks. We can mitigate the risks the same way a Rust upstream maintainer would: when updating a package we should of course review the upstream changelog for any gotchas.
We should look at RustSec and other upstream ecosystem tracking and authorship information.

Difficulties for another day As I said, I see some other issues with Rust in Debian. Thanks all for your attention!


28 June 2021

Shirish Agarwal: Indian Capital Markets, BSE, NSE

I had been meaning to write on the above topic for almost a couple of months now but just kept procrastinating about it. That push came to shove when Sucheta Dalal and Debasis Basu shared their understanding, wisdom, and all in the new book called "Absolute Power: Inside Story of the National Stock Exchange's Amazing Success, Leading to Hubris, Regulatory Capture and Algo Scam". Now, I will not go into the details of the new book, as I have not yet bought it; and even if I had bought it and shared some of its revelations, it wouldn't have done justice to either the book or the reader without first sharing some of the background behind it.

Before I jump ahead, I would suggest people read my sort-of introductory blog post on banking history so they know where I'm coming from. I'm going to deviate a bit from banking, as this is about trade and capital markets, although banking will come in later on. And I will also be sharing some cultural insights along with the history, so people are aware of why things happened the way they did.

Calicut, Calcutta, Kolkata, one-time major depot around the world Now, one cannot start any topic about trade without talking about Kolkata. While today it seems like a bastion of communism, at one time it was one of the major trade depots around the world. Both William Dalrymple and the Chinese have many times mentioned Kolkata as being one of the major centers of trade. This was between the 13th and the late 19th century. A cursory look throws up this article, which talks about Kolkata, or Calicut as it was known, as a major trade depot. There are of course many, many articles and even books which tell how Kolkata was a major trade depot. Now between the 13th and 19th centuries, a lot of changes happened which made Kolkata poorer and shifted trade to Mumbai/Bombay, which in those times was nothing but a port city like many others.

The Rise of the Zamindar Around the 15th century, when Babur invaded Hindustan, he realized that Hindustan was too big a country to be governed alone. And Hindustan was much broader than independent India today. So he created the title of Zamindars. Interestingly, if you look at the Mughal period, they were much more in tune with Hindustani practices than the British who came later. They used the caste divisions and hierarchy wisely, making sure that the status quo was maintained as far as castes/creeds were concerned. While in-fighting with various rulers continued, it was more or less about land and power rather than anything else. When the British came they co-opted the same arrangement, with a minor adjustment: under the earlier system, the zamindars did not have the power of landownership; the British gave them land ownership. A huge percentage of these zamindars, especially in Bengal, were from my own caste, Banias or Baniyas. The problem, and the solution, for the British was that this was a large land to control and exploit, and the number of British officers and nobles was very small. So they gave virtually a lot of powers to the Banias. The only thing the British insisted on was very high rents from the newly minted Zamindars. The Zamindars in turn used the powers of personal fiefdom to give loans at very high interest rates; when the poor were unable to pay the interest, they would take the land, while at the same time slavery was forced on both men and women, with many a time rapes and affairs. While there have been many records shedding light on it, I don't think anything could be more powerful than what was enacted and shared by Shabana Azmi in Ankur: The Seedling. Another prominent grouping formed around the same time was the Bhadralok. Now, as shared, the Bhadralok, while having all the amenities of belonging to the community, turned a blind eye to the excesses of the Zamindars. How much they played a hand in the decimation of Bengal has been a matter of debate, but they did have a hand, that much is not contested.

The Rise of Stock Exchanges Sadly and interestingly, many people believe and continue to believe that stock exchanges are a recent phenomenon. The first stock exchange, though, was the Calcutta Stock Exchange rather than the Bombay Stock Exchange. How valuable Calcutta was to the British in those early years can be gauged from the fact that at one time, in 1772, it was made the capital of India. In fact, after the Grand Trunk Road (for which there have even been train names in both countries), any number of books have been written about the trade between Calcutta and Peshawar (now in Pakistan). And it was not just limited to trade but also cultural give-and-take between the two centers. Even today, if you look at YT (YouTube) and look up some interviews of old people, you find many interesting anecdotes of people sharing both culture and trade.

The problem of the 60s and the rise of BSE
After India became independent and the constitutional debates happened, the new elites understood that there could not be two power centers governing India. On one hand were the politicians who had come to power on the back of the popular vote; on the other were the Zamindars, who more often than not had abused their powers, which resulted in widespread poverty. The British are to blame, but so are the middlemen, as they became willing enablers of the same system of oppression. Hence, you had the 1951 amendment to the Constitution and the 1956 Zamindari Abolition Act. In fact, you can find a much more in-depth article about both the Zamindars and their final abolition here. Now once Zamindari was gone, there was nothing to replace it with. The Zamindars, ousted from their old roles, turned around and tried to become industrialists. The problem was that the poor and the downtrodden had already had experiences with the Zamindars. Also, some industrialists from the North and West came to Bengal, but they had no understanding of either the language or the culture, or of what had happened in Bengal. And notice that I have not talked about the famines and the floods that wrecked Bengal since time immemorial, some of which got etched on the soul of Bengal and have left marks even today. The psyche of the Bengali and the Bhadralok has gone through enormous shifts. I have met quite a few and do see the guilt they feel. If one wonders how socialist parties are able to hold power in Bengal, look no further than Tarikh, which tells and shares with you how many Bengalis even today still feel somewhat lost.

The Rise of BSE Now, the Kolkata Stock Exchange had been going down, for multiple reasons beyond those listed above. From the 1950s onwards, Jawaharlal Nehru had this idea of 5-year plans, borrowed from socialist countries such as Russia, China, etc. His vision and ambition for the newly minted Indian state were huge, while at the same time he understood we were poor. There was the loot by the East India Company and the British, and on top of that the division of wealth with Pakistan, even though the majority of Muslims chose to remain with India. Travel on Indian Railways was a risky affair. My grandfather shared numerous tales of how he used to stuff money into socks and put the socks on inside his boots when travelling between Delhi and Kolkata or Pune and Kolkata. Also, as Delhi became the capital (which it unofficially was for many years), the transparency from Kolkata-based firms became less. So many Kolkata firms either were mismanaged or shut down, while Maharashtra, my own state, saw a huge boom in industrialization as well as farming. From the 1960s to the 1990s there were many booms and busts in the stock exchanges, but most were manageable.

While the 60s began on a good note, as Goa was finally freed from the Portuguese army and influence, the 1962 war with the Chinese made many a soul question where we went wrong. Jawaharlal Nehru went all over the world to ask for help but had to return home empty-handed. Bollywood showed a world of bell-bottoms and cars and whatnot, while the majority were still trying to figure out how to put two square meals on the table. India suffered one of its worst famines in those times. People had to ration food. Families made do with either one meal a day or just roti (flatbread) rather than rice. In Bengal, things were much more severe. There were huge milk shortages, so Bengalis were told to cut down on sweets. This enraged the Bengalis as nothing else could. Note: if one wants to read how bad Indians felt at that time, all one has to read is V.S. Naipaul's "An Area of Darkness". This was also the time when quite a few Indians took their first step out of India. While Air India had just started, the fares were prohibitive. Those who were not well off either worked on ships or went via passenger or cargo ships to Dubai/Qatar in the Middle East. Some went to Russia and some even to the States. While today's émigrés want to settle in the West forever and have their children and grandchildren grow up there, in the 1960s and 70s the idea was far different. The main purpose for a vast majority was to get a job, save maximum money, and send it back to India as a remittance. The idea was to make enough money in 3-5-10 years, come back to India, and then lead a comfortable life. Sadly, there has hardly been any academic work done in India, at least to my knowledge, to document the sacrifices made by Indians in search of jobs, life, purpose, etc. in the 1960s and 1970s. The 1970s was also when alternative cinema started its journey, with people like Smita Patil and Naseeruddin Shah, who portrayed people's struggles on-screen. Most of those films didn't have commercial success because the movies and the stories were bleak. While the acting was superb, most Indians loved to be captivated by fights, car-chases, and whatnot rather than the dreary existence which they had. And the alt cinema forced them to look into the mirror, which was frowned upon both by the masses and the classes. So cinema, which could have been a wake-up call for a lot of Indians, failed. One of the most notable works of that decade, at least to me, was Manthan. 1961 had also been marked by the launch of the Economic Times and the Financial Express, which tells us that there was some appetite for financial news and understanding. The 1970s was also a very turbulent time in the corporate sector and the stock exchanges. Again, the companies which were listed were run by the very well-off, and many of them had been abroad. At the same time, you had fly-by-night operators. One of the things which started in this decade was corporate wars and hostile takeovers, quite a few of which could well have a web series or two of their own. This was also a decade marked by huge labor unrest, which again changed the face of Bombay/Mumbai. From the 1950s till the 1970s, Bombay was known for its mills. Large migrant communities from all over India came to Bombay to become the next Bollywood star, and if that didn't happen, they would get jobs in the mills. Bombay/Mumbai has/had this unique feature that somehow you will make enough money to make ends meet. Of course, with the pandemic, even that has gone for a toss. Labor unrest was a defining character of that decade.
Three movies, Kaala Patthar, Kalyug, and Ankush, give a broad outlook of what happened in that decade. One thing which was present then and is omnipresent now is how, time and time again, we lost our demographic dividend. Again there was an exodus of young people who ventured out to seek their fortunes elsewhere. The 1970s and 80s were also famous for the licence raj which they brought in. Just like in the Soviet Union, there were waiting periods for everything. A telephone line meant waiting anywhere from 4 to 8 years. In 1987, when we applied and got a phone within 2-3 months, most of my relatives, on both my mother's and father's side, could not believe we paid nothing to get a telephone line. We did pay the telephone guy INR 10/-, which was a somewhat princely sum, when he was installing it; even then they could not believe it, as in Northern India you couldn't get a phone line even if your number had come up. You had to pay anywhere from INR 500/1000 or more to get a line. This was BSNL, and to reiterate, there were no alternatives at that time.

The 1990s and the Harshad Mehta Scam The 90s was when I was a teenager. You do all the stupid things for love, lust, whatever. That is also the time you are really introduced to the world of money. During my time, there were only three choices: Sciences, Commerce, and Arts. If History was your favorite subject then you would take Arts, and if it was not, and you were not studious, then you would end up in Commerce. This is how careers were chosen. So I enrolled in Commerce. Because my grandfather and family on my mother's side were interested in stocks, both as a saving and a compounding tool, I was able to see the Pune Stock Exchange in action one day. The only thing I remember of that day is people shouting loudly while waving various chits. I had no idea those chits represented deals of maybe thousands or even lakhs. The Pune Stock Exchange had been newly minted. I also participated in a couple of mock stock exchanges and came to understand that one has to be aggressive in order to win. You had to be really loud to be heard over others; you could not afford to be shy. Also, spread your risks. Sadly, nothing about the stock markets was in the syllabus. 1991 was also when we saw the Iraq war and the balance of payments crisis in India, and we didn't know that the Harshad Mehta scam was around the corner. Most of the scams in India have been caught because the person doing it was flashy. And this was the reason that even he was caught, by Ms. Sucheta Dalal, a young beat reporter at the Indian Express who had been covering the Indian stock market. Many of her articles were thought-provoking. Now, a brief understanding is required before we actually get to the scam. Because of the 1991 balance of payments crisis, the IMF rescued India on the condition that India throw its market open. In the 1980s, Rajeev Gandhi had wanted to partially open India's market, but both politicians and industrialists had advised him not to; we were not ready. On 21st May 1991, Rajeev Gandhi was assassinated by the LTTE. A month later, due to the sympathy vote, the Narsimha Rao Govt. took power. While for most new governments there is usually a honeymoon period lasting 6 months or so, till they get settled in their roles, before people start asking tough questions, it was not to be for this Govt. The problem had been building for a few years. Although, in many ways, our economy was better than it is today, the one thing India didn't do well at that time was managing foreign exchange. Only a few Indians had both the money and the opportunity to go abroad, and the need for electronics was limited. One of the biggest imports then, as it still is today, was energy, i.e. oil. While today it is oil/gas and electronics, at that time it was only oil. The oil import bill was ballooning while exports were more or less stagnant and mostly comprised raw materials rather than finished products. Even today it is largely this; one of the biggest industrialists in India, Ambani, exports gas/oil while Adani exports coal. Anyway, the deficit was large enough to trigger a payments crisis. And Narsimha Rao had to throw open the Indian market almost overnight. Some changes became quickly apparent, while others took a long time to come.

Satellite Television and Entry of Foreign Banks Almost overnight, from 1 channel we became multi-channel. Star TV (Rupert Murdoch) brought us The Bold and the Beautiful, while CNN broadcast the Iraq War. It was unbelievable for us that we were getting reports of what had happened 24-48 hours earlier. Fortunately or unfortunately, I was still very much a teenager and didn't grasp the import of what was happening. Even in my college, except for one or two people, it wasn't a topic for debate or talk, nor was the economy. We were basically cocooned in our own little world. But this was not the case for the rest of India, and especially the banks. The entry of foreign banks was a rude shock to Indian banks. The foreign banks were bringing both technology and sophistication in their offerings, and Indian banks needed and wanted fast money to show hefty profits. Demand for credit wasn't much, at least nowhere near the level it is today. At the same time, defaults on credit were nowhere near as high as today. But that will require its own space and article. To quench the banks' thirst for hefty profits, enter Harshad Mehta. At that point in time, banks were not permitted at all to invest in the securities/share market. They could only buy Government securities or bonds, which had a coupon rate of say 8-10%, nowhere near enough to satisfy the hefty profits desired by Indian banks. On top of that, the cash was blocked for a long time: most of these Government bonds had maturity dates anywhere between 10-20 years out, and some even longer. Now, one loophole was that the banks could not buy these securities themselves. They had to approach a registered broker of the share market who would do these transactions on their behalf. Here is where Mr. Mehta played his game. He showed both legal and illegal ways in which both the banks and he would prosper. While banking at one time was thought to be conservative and somewhat cautious, whether because they were afraid that Western private banks would take that pie or for whatever other reasons, they agreed to his antics. To play the game, Harshad Mehta needed lots of cash, which the banks provided him in the guise of buying securities that were never bought; the amounts were simply transferred to his account. He actively traded stocks, at the same time formed a group, and also made the rumor mill work to his benefit. The share market is largely a reactionary market. It operates on patience, news, and the rumor mill. The effect of his shenanigans was that the price of a stock that was trading at say INR 200 reached the stratospheric height of INR 9000/- without any change in the fundamentals or outlook of the stock. His thirst didn't remain restricted to stocks but also ventured into the unglamorous world of Govt. securities, where he started trading in large quantities as well. In order to attract new clients, he cultivated a fancy lifestyle. The fancy lifestyle was what caught the eye of Sucheta Dalal, and she started investigating the deals he was doing. Being a reporter, she had the advantage of getting many doors to open and obtaining information that otherwise would be under lock and key. On 23rd April 1992, Sucheta Dalal broke the scam.

The Impact The impact was almost like a shock to the markets. Even today, it can be counted as one of the biggest scams in the Indian market if you adjust it for inflation. I haven't revealed much of the scam and what happened, simply because Sucheta Dalal and Debasis Basu wrote The Scam for that purpose. How do I shorten a story and experience written over roughly 300-odd pages into one or two paragraphs? It is simply impossible. The impact, though, was severe. The Indian stock market became a bear market for two years. Sucheta Dalal was kicked out of/made to resign from the Indian Express. The thing is simple: all newspapers survive on readership and advertisements, and the advertisements came from companies that were having a golden run, whether justified or not, on the bourses/stock exchange. For many companies, having a good number on the stock exchange was better than the company's fundamentals. There was supposed to be a speedy fast-track court set up for financial crimes, but it worked only for the Harshad Mehta case, and even that still took over 5 years. The scam led to the creation of the NSE (National Stock Exchange). It also led to the creation of SEBI, perhaps one of the most powerful regulators, with a wide range of powers and a wide remit, but one which on the ground more often than not proved to be no more than a glorified postman. And the few times it did use its powers, it used them on the wrong people, and people had to go to the courts to get justice. But then this is not about SEBI, nor is this blog post about the NSE. I have anyway shared about Absolute Power above, so will not repeat the link here. The anecdotal impact was widespread. Our own family broker took the extreme step. For my grandfather on my mother's side, he was like a second son. The news of his suicide devastated my grandfather quite a bit, which we realized much later, when he was diagnosed with Alzheimer's. Our family stockbroker had been punting, taking lots of cash from the market at very high rates and betting on stocks wildly as the stock market reached for the stars; when the market crashed, he was insolvent. How the family survived is a tale in itself. He had married just a few years before and had a cute boy and girl soon after. While today both are grown up, at that time what the wife faced only she knows. There were also quite a few shareholders who took the extreme step. The stock markets in those days were largely based on trust, and they still are unless you are into day-trading. So there was always some money left on the table for the share/stockbroker, which would be squared off in the next deal/transaction, where again you would leave something. My grandfather once thought of going over and meeting them, and we went to the lane where their house is; seeing the line of people who had come for recovery of loans, we turned back with a heavy heart. Another illusion got broken that day: that the stock market was not open to scams. From 1992 to 2021 it has been a cycle of scams. Even now, today, the stock market is at unnatural highs. We know for sure that a lot of hot money is rolling around, a lot of American pension funds etc. Till it works, it will work; some news, something, and that money will be moved out. Who will be left holding the can? The Indian investors. A few days back, Ambani wrote about Adani. Now, while the facts shared are correct, is Adani the only one, the only company, to have a small free float in the market?
There are probably more than a quarter or a third of well-respected companies who may have a similar configuration; the only problem is that it is difficult to know who the proxies are. Now if I were to reflect and compare this with either the 1960s or even the 1990s, I don't find much difference, apart from the fact that the proxy is sitting in Mauritius. At the same time, today you can speculate on almost anything: whether it is stocks, commodities, derivatives, foreign exchange, cricket matches, etc., the list is endless. Since 2014, the rise of speculation rather than investment has been dramatic, almost stratospheric. Sadly, there are no studies or even attempts made to document this. How much official and unofficial speculation there is in the market, nobody knows. Money markets have become both fluid and non-transparent. In theory, you have all sorts of regulators, but it is still very much like the Wild West. One thing to note is that even income tax law had to change and bring in provisions to account for speculative income. So, starting from being totally illegitimate, speculation has become kind of legal and is part of income tax. And if speculation is not wrong, why not make Indian cricket officially a speculative event? That would be honest, and the GOI would get part of the proceeds.

Conclusion I wish there were some positive conclusion I could draw, but sadly there is not. Just today I read two articles about the ongoing environmental issues in Himachal Pradesh. As I had shared earlier, the last time I visited those places was in 2011, and even at that time I was devastated to see the kind of construction going on. Jogiwara Road, which they showed, used to be flat single ground/first floor dwellings, most of which were restaurants and whatnot. I had seen the water issues in both Himachal and UT (Uttarakhand) back then, and this is when they made huge dams. In the U.S. they are removing dams, and here we want more dams.

21 June 2021

Shirish Agarwal: Accessibility, Freenode and American imperialism.

Accessibility This is perhaps one of the strangest and yet also perhaps the most straightforward ways to start the blog post. For the past weeks/months, I have had a strange experience. I have been using a Logitech wireless keyboard and mouse for almost a decade. Now, for the past few months and weeks, we have observed a somewhat rare phenomenon. Between us, we have a single desktop computer, so my mum and I take turns on the desktop. At times, however, the system would sit idle, and after 30 minutes it goes to low-power/sleep mode. Then, when you want to come back, you obviously have to give your login credentials. At times, the keyboard refuses to input any data on the login screen. Interestingly, the mouse still functions. Much more interesting is the fact that both the mouse and the keyboard use the same transceiver to send data. And I had changed the batteries to ensure it was not a power issue, but still no input :(. While my mother uses and used the power switch (I did teach her how to hold it and then let it go), I tried another thing for myself. Using the mouse, I logged off the session, thinking perhaps some race condition or something in the session was not letting the keystrokes be inputted, and having a new session might resolve it. But this was not to be. Luckily, on the screen you do have the option to reboot or power off. I did a reboot and, lo and behold, the system was able to input characters again. And this has happened time and again. I tried to find GOK, and failed to remember that GOK had been retired. I looked up the accessibility page on the Debian wiki. Very interesting, very detailed, but sadly it did not and does not provide the backup I needed. I tried out florence but found that the app is buggy. Moreover, the instructions provided on the lightdm screen do not work: I do not get the on-screen keyboard when I follow them. Just to be clear, this is all on Debian testing, which is gonna be Debian stable soonish. I even tried the same with xvkbd, but to no avail. I do use mate as my desktop manager, so maybe the instructions need some refinement?

$ cat /etc/lightdm/lightdm-gtk-greeter.conf | grep keyboard
# a11y-states = states of accessibility features: name save state on exit, -name
disabled at start (default value for unlisted), +name enabled at start. Allowed names: contrast, font, keyboard, reader.
keyboard=xvkbd no-gnome focus &
# keyboard-position = x y[;width height] ( 50%,center -0;50% 25% by default) Works only for onboard
#keyboard=

Interestingly, Debian does provide two more on-screen keyboards, matchbox as well as onboard, which comes from Ubuntu. While I have both of them installed, I find xvkbd to be enough for my work; the only issue seems to be that I cannot get it from the accessibility drop-down box at the login screen. Just to make sure that I have not switched to the GNOME display manager, I did run

$ sudo dpkg-reconfigure gdm3

Only to find out that I am indeed running lightdm. So I am a bit confused as to why it doesn't come up as an option when I have the login window/login manager running. FWIW, I do run metacity as the window manager, as it plays nice with all the various desktop environments I have, almost all of them. So this is where I'm stuck. If I do get any help, I probably would also add those instructions to the wiki page, so it would be convenient for the next person who comes along with the same issue. I also need to figure out some way to know whether there is some race condition or something else happening; I have no clue how I would go about that without generating a whole lot of noise. I am sure there are others who may have more of an idea. FWIW, I did search unix.stackexchange as well as reddit/debian to see if I could find any meaningful posts, but came up empty.
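For the record, here is a sketch of what I would expect a working configuration to look like, going by the comments shipped in the file quoted above; the a11y-states and keyboard keys are documented there, but whether this alone makes the drop-down offer xvkbd is my assumption, not something I have confirmed:

[greeter]
# start with the on-screen keyboard feature enabled ('keyboard' is one
# of the allowed a11y names per the comments quoted above)
a11y-states = +keyboard
# the command the greeter runs for the on-screen keyboard
keyboard = xvkbd &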

Freenode I had not been using IRC for quite some time now. The reasons have been multiple issues with Riot (now Element) taking up the whole space on my desktop. I did get alerted to the whole thing about a week after it went down. Somebody messaged me in DM. I *think* I put up a thread or a mini-thread about IRC or something in response to somebody praising telegram/WhatsApp or one of those apps. That probably triggered the DM. It took me a couple of minutes to hit upon this. I was angry and depressed, seeing the behavior of the new overlords of freenode. I did see that a lot of channels moved over to Libera. It was also interesting to see that some communities were thinking of moving to some other obscure platform, which again could be held hostage to the same thing. One could argue one way or the other, but that would be tiresome, and the fact is any network needs a lot of help to be grown and nurtured, whether it is online or offline. I also saw that Libera was using software called Solanum, which is ircv3 compliant. Now having done this initial investigation, it was time to move to an IRC client. The Libera documentation is and was pretty helpful in telling which IRC clients would work well with their network. So I first tried hexchat. I installed it and tried to add the Libera server credentials, but it didn't work. I did see that they had fixed the bug in sid/unstable and now it's in testing. But at the time the fix was only in sid, and I wanted something which just ran the damn thing. I chanced upon quassel. I had played around with quassel quite a number of times before, so I knew I could use it. Hence, I installed it and was able to use it on the first try. I did use the encrypted server and just had to tweak some settings before I could use it, with some help from their documentation. Although, I have to say that even quassel upstream needs to get its documentation in order. It is just all over the place, and they haven't put any effort into streamlining the documentation so that finding things becomes easier. But that can be said of many projects upstream. There is one thing, though, that all of these IRC clients lack: a password manager. Until that is fixed it will suck, because you need another secure place to put your password/s. You either put it on your desktop somewhere (insecure) or store it in the cloud somewhere (somewhat secure, but then you need to remember that password too); whatever you do is extra work. I am sure there will be a day when authenticating with NickServ will be an automated task (a sketch of the manual commands follows at the end of this section) and people can just get on talking on channels and figuring out how to be part of the various communities. As can be seen, even now there is a bit of a learning curve for both newbies and people who know a bit about systems to get it working. Now, I know there are a lot of things that need to be fixed on the anonymity and security front, if I put on that sort of hat. For e.g., wouldn't it be cool if either the IRC client or one of its add-ons gave throwaway usernames and passwords? The passwords would be complex. This would make things easier for those who are paranoid about security, and many are. As an example, consider Fuchs. Now if that gentleman or lady is working in a professional capacity, and an employer were to come to know their real identity and perceive, rightly or wrongly, the role of that person, it could affect their career. Now, should it? I am sure a lot of people would be divided on the issue.
Personally, as far as I am concerned, I would say no, because whether right or wrong, whatever they were doing, they were doing on their own time, not on company time. So it doesn't concern the company at all. If we were to let companies police behavior outside work time, individuals would be in a lot of trouble. Although, I have to say that is a trend that has been seen in companies firing people on either the left or the right. A recent example that comes to mind is Emily Wilder, who was fired by the Associated Press. Interestingly, she was interviewed by Democracy Now, and it did come out that she is a Jew. As can be seen and understood, there is a lot of nuance to her story, and not just in the way she was fired. It doesn't leave a good taste in the mouth, but then getting fired never does. On a few forums, people shared stories of people getting fired from their jobs because they were dancing (cops, in one case). Again, it all depends; for me again, hats off to anybody who feels like dancing or whatever, because there are just so many depressing stories all around.
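Coming back to the NickServ bit above, the manual dance that clients currently make you store or retype is roughly this (a sketch; exact services commands vary per network, and Libera also supports SASL, which quassel can be configured to use):

/msg NickServ REGISTER yourpassword you@example.com
/msg NickServ IDENTIFY youraccount yourpassword

Until clients grow a proper password manager, that password has to live somewhere, which is exactly the problem described above.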

Banned and FOE On a few forums I was banned because I was talking about Brexit and American imperialism, both of which seem to ruffle a few feathers in quite a few places. For instance, many people for obvious reasons do not like this video

Now I'm sorry I am not able to, and have not been able to, give invidious links for the past few months. The reason is that invidious itself went through some changes, and the changes are good and bad. For e.g., now you need to share your Google ID with a third party, which at least to my mind is not a good idea. But that probably is another story altogether, and it probably will need its own place. Coming back to the video itself, this was shared by Anthony Hazard and the title is "The Atlantic slave trade: What too few textbooks told you". I did see this video quite a few years ago and still find it hard to swallow that tens of millions of Africans were brought as slaves to the Americas, although to be fair it does start with the Spanish settlement in the land which would be called the U.S., and they brought slaves with them. They even enslaved the American natives, i.e., people from the different tribes which made up America at that point. One point to note is that the U.S. got its independence on July 4, 1776, so all the people before that were called European settlers, for want of a better word. Some or many of these European settlers were convicts who had been sent from the UK. But as shared in the article, that discussion would only happen when the U.S. itself is mature and open enough for it. Going back to the original point though, these European or American settlers brought a lot of slaves from Africa. The video does also shed light on some of the cruelty the Europeans or Americans inflicted on the slaves, men and women, in different ways. The most revelatory part though, which I also forget many a time, is that because a lot of people were taken from Africa, and many of them men, it did lead to imbalances in African societies, not just in weddings but in economics in general. Racist theories were also developed that tried to paint Africans as an inferior race; otherwise, how would it square with Christianity, whose own good book says "all men are born equal"? That does in part explain why the African countries are still so far behind their European or American counterparts. But Africa can still be proud, as they are richer than us, yup, India. Sadly, I don't think America is ready to have that conversation anytime soon, if ever. And if it were to, it would have to out-do any truth and reconciliation committee the world has seen. A mere apology or two would not cut it. The problems of America sadly are not limited to just Africans but also the natives of the land, for e.g. the Lakota people. In 1868, they signed a treaty stating "we will give the land back to the Lakota people forever", but then the gold rush happened. In 2007, when the Lakota stated their proposal for independence, the U.S. denied it through force. So much for the paper it was written on. Now from what I came to know over the years, the American natives are called "First Nations". Time and time again the American Govt. has been foul towards them. Some of the examples include the Yucca Mountain nuclear waste repository. The same is and was the case with the Keystone pipeline, which is now dead. Now one could say that it is America's internal matter, and I would fully agree, but when they speak of the internal matters of other countries, then we should have the same freedom. But this is not restricted to just internal matters, sadly. Since the 1950s, i.e. the advent of the Cold War, America's foreign policy has made regime changes all around the world. Sharing some of the examples from the Cold War:

Iran 1953
Guatemala 1954
Democratic Republic of the Congo 1960
Republic of Ghana 1966
Iraq 1968
Chile 1973
Argentina 1976
Afghanistan 1978-1980s
Grenada 1983
Nicaragua 1981-1990
1. Destabilization through CIA assets
2. Arming the Contras
El Salvador 1980-92
Philippines 1986

Even after the Cold War ended, the situation was analogous, meaning they still continued with their old behavior. After the end of the Cold War:

Guatemala 1993
Serbia 2000
Iraq 2003-
Afghanistan 2001 ongoing

There is a helpful Wikipedia article titled "History of the CIA" which basically lists most of the covert regime changes done by the U.S.; the above is merely a subset of them. Now, are all the behaviors above those of a "civilized nation"? And if one cares to notice, one would notice that all the above countries which had a regime change had either oil or precious metals. So the U.S. is and was being what it accuses China of being: a profiteer. But this isn't just a U.S. vs. China story; it is more about the American abuse of its power. My own country, India, paid IMF loans till 1991, and we paid through the nose. There were economic sanctions against India. But then, this is again not just about the U.S. and India. Even with Europe, or more precisely Norway, which didn't want to side with America because their intelligence showed that no WMD were present in Iraq, the relationship still has issues.

Pandemic and the World So I do find this whole blaming of China by the U.S. quite theatrical and full of double and triple standards. Very early during the debates, it came to light that the Spanish Flu actually originated in Kansas, U.S.

What was also interesting, as I found in the Pentagon Papers (which came out much before the Watergate scandal), is that the U.S. had realized that China would be more of a competitor than Russia, and this was in the 1960s itself. This shows the level of intelligence that the Americans had. From what I can recollect from whatever I have read of that era, China was still mostly an agriculture-based economy. So how the U.S. was able to deduce that China would surpass other economies is beyond me, even now. They surely must have known something that even we today do not.

One of the other interesting observations and understandings I got while researching is that every year we transfer an average of 7,500 diseases from animals to humans, and that should be a scary figure. I think, more than anything else, loss of habitat and the use of animals for food, clothing, and medicine is probably the reason we are getting such diseases. I am also sure that there probably are, and have been, a similar number of transfers of diseases from humans to animals, but because of well-known biases and whatnot, those studies are either not done or are under-funded. There are and have been reports of something like 850,000 undiscovered viruses which various mammals and birds carry. I also found that the sources of most such pandemics are hard to identify: for example, SARS-1 took about 15 years to trace, and for Ebola we don't know to date where it came from. Even HIV has questions for us. Hell, even why hearing goes away is a mystery to us. In all of this, we want to say China is culpable. And while China may or may not be culpable (only time will tell), this is surely the opportunity for all countries to spend on and build capacity in public health. Countries which take lessons from it and improve their public healthcare models will hopefully not suffer as much as those who are suffering and continuing to suffer now.

To those who feel that habitat loss of animals is untrue, I would suggest they see Sherni, which depicts the human/animal conflict in all its brutality. I am gonna warn in advance that the ending is not nice, but what can you expect from a country in which forest area cover has constantly declined and the Govt. itself is only interested in headline management?

The only positive story I can share from India is that finally the Modi Govt. has said it will do free vaccine immunization for everybody, although the pace is nothing to write home about. One additional thing they relaxed: instead of going through CoWIN or any other portal, people can simply walk in using their identity papers. Still, given the pace of vaccinations, it is going to take anywhere between 13 and 18 months, or more, depending on the availability of vaccines.

Looking forward to any and all replies. As for virtual keyboards, xvkbd is good enough for my use-case.

25 April 2021

Antoine Beaupré: Lost article ideas

I wrote for LWN for about two years. During that time, I wrote (what seems to me an impressive) 34 articles, but I always had a pile of ideas in the back of my mind: notes and scribbles lying around. Some were just completely abandoned because they didn't seem a good fit for LWN. Concretely, I stored those in branches in a git repository, and used the branch name (and, naively, the last commit log) as indicators of the topic. This was the state of affairs when I left:
remotes/private/attic/novena                    822ca2bb add letter i sent to novena, never published
remotes/private/attic/secureboot                de09d82b quick review, add note and graph
remotes/private/attic/wireguard                 5c5340d1 wireguard review, tutorial and comparison with alternatives
remotes/private/backlog/dat                     914c5edf Merge branch 'master' into backlog/dat
remotes/private/backlog/packet                  9b2c6d1a ham radio packet innovations and primer
remotes/private/backlog/performance-tweaks      dcf02676 config notes for http2
remotes/private/backlog/serverless              9fce6484 postponed until kubecon europe
remotes/private/fin/cost-of-hosting             00d8e499 cost-of-hosting article online
remotes/private/fin/kubecon                     f4fd7df2 remove published or spun off articles
remotes/private/fin/kubecon-overview            21fae984 publish kubecon overview article
remotes/private/fin/kubecon2018                 1edc5ec8 add series
remotes/private/fin/netconf                     3f4b7ece publish the netconf articles
remotes/private/fin/netdev                      6ee66559 publish articles from netdev 2.2
remotes/private/fin/pgp-offline                 f841deed pgp offline branch ready for publication
remotes/private/fin/primes                      c7e5b912 publish the ROCA paper
remotes/private/fin/runtimes                    4bee1d70 prepare publication of runtimes articles
remotes/private/fin/token-benchmarks            5a363992 regenerate timestamp automatically
remotes/private/ideas/astropy                   95d53152 astropy or python in astronomy
remotes/private/ideas/avaneya                   20a6d149 crowdfunded blade-runner-themed GPLv3 simcity-like simulator
remotes/private/ideas/backups-benchmarks        fe2f1f13 review of backup software through performance and features
remotes/private/ideas/cumin                     7bed3945 review of the cumin automation tool from WM foundation
remotes/private/ideas/future-of-distros         d086ca0d modern packaging problems and complex apps
remotes/private/ideas/on-dying                  a92ad23f another dying thing
remotes/private/ideas/openpgp-discovery         8f2782f0 openpgp discovery mechanisms (WKD, etc), thanks to jonas meurer
remotes/private/ideas/password-bench            451602c0 bruteforce estimates for various password patterns compared with RSA key sizes
remotes/private/ideas/prometheus-openmetrics    2568dbd6 openmetrics standardizing prom metrics enpoints
remotes/private/ideas/telling-time              f3c24a53 another way of telling time
remotes/private/ideas/wallabako                 4f44c5da talk about wallabako, read-it-later + kobo hacking
remotes/private/stalled/bench-bench-bench       8cef0504 benchmarking http benchmarking tools
remotes/private/stalled/debian-survey-democracy 909bdc98 free software surveys and debian democracy, volunteer vs paid work
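(For the record, a listing like the one above can be regenerated with git for-each-ref; this is my sketch of such a command, not necessarily the one actually used:)

    $ git for-each-ref refs/remotes/private \
        --format='%(refname:short) %(objectname:short) %(contents:subject)'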
Wow, what a mess! Let's see if I can make sense of this:

Attic Those are articles that I thought about, then finally rejected, either because it didn't seem worth it, or my editors rejected it, or I just moved on:
  • novena: the project is ooold now, didn't seem to fit a LWN article. it was basically "how can i build my novena now" and "you guys rock!" it seems like the MNT Reform is the spiritual successor of the Novena now, and I dare say it's even cooler!
  • secureboot: my LWN editors were critical of my approach, and probably rightly so - it's a really complex subject and I was probably out of my depth... it's also out of date now, we did manage secureboot in Debian
  • wireguard: LWN ended up writing extensive coverage, and I was biased against Donenfeld because of conflicts in a previous project

Backlog Those were articles I was planning to write about next.
  • dat: I already had written Sharing and archiving data sets with Dat, but it seems I had more to say... mostly performance issues, beaker, no streaming, limited adoption... to be investigated, I guess?
  • packet: a primer on data communications over ham radio, and the cool new tech that has emerged in the free software world. those are mainly notes about Pat, Direwolf, APRS and so on... just never got around to making sense of it or really using the tech...
  • performance-tweaks: "optimizing websites at the age of http2", the unwritten story of the optimization of this website with HTTP/2 and friends
  • serverless: god. one of the leftover topics at Kubecon, my notes on this were thin, and the actual subject, possibly even thinner... the only lie worse than the cloud is that there's no server at all! concretely, that's a pile of notes about Kubecon which I wanted to sort through. Probably belongs in the attic now.

Fin Those are finished articles: they were published on my website and LWN, but the branches were kept because previous drafts had private notes that should not be published.

Ideas A lot of those branches were actually just an empty commit, with the commit log being the "pitch", more or less. I'd send that list to my editors, sometimes with a few more links (basically the above), and they would nudge me one way or the other. Sometimes they would actively discourage me from writing about something, and I would do it anyways, send them a draft, and they would patiently make me rewrite it until it was a decent article. This was especially hard with the terminal emulator series, which took forever to write and even got my editors upset when they realized I had never installed Fedora (I ended up installing it, and I was proven wrong!)
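As a sketch of that workflow (my reconstruction, not necessarily the exact commands used), such a pitch branch could be nothing more than an empty commit whose message is the pitch:

    $ git checkout -b ideas/astropy master
    $ git commit --allow-empty -m "astropy or python in astronomy"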

Stalled Oh, and then there's those: those are either "ideas" or "backlog" that got so far behind that I just moved them out of the way because I was tired of seeing them in my list.
  • stalled/bench-bench-bench benchmarking http benchmarking tools, a horrible mess of links, copy-paste from terminals, and ideas about benchmarking... some of this trickled out into this benchmarking guide at Tor, but not much more than the list of tools
  • stalled/debian-survey-democracy: "free software surveys and Debian democracy, volunteer vs paid work"... A long-standing concern of mine is that all Debian work is supposed to be volunteer, and paying explicitly for work inside Debian has traditionally been frowned upon, even leading to serious drama and dissent (remember Dunc-Tank?). Back when I was writing for LWN, I was also doing paid work for Debian LTS. I also learned that a lot (most?) of Debian Developers were actually being paid by their job to work on Debian. So I was confused by this apparent contradiction, especially given how the LTS project has been mostly accepted, while Dunc-Tank was not... See also this talk at DebConf 16. I had hopes that this study would confirm the "hunch" people have offered (that most DDs are paid to work on Debian), but it seems to show the reverse (only 36% of DDs, and 18% of all respondents, are paid). So I am still confused and worried about the sustainability of Debian.

What do you think? So that's all I got. As people might have noticed here, I have much less time to write these days, but if there's any subject in there I should pick, which one would you find most interesting? Oh! And I should mention that you can write for LWN! If you think people should know more about some Linux thing, you can get paid to write about it! Pitch it to the editors, they won't bite. The worst that can happen is that they say "yes" and there goes two years of your life learning to write. Because no, you don't know how to write; no one does. You need an editor to write. That's why this article looks like crap and has a smiley. :)

20 December 2020

Russ Allbery: Review: Can't Even

Review: Can't Even, by Anne Helen Petersen
Publisher: Houghton Mifflin
Copyright: 2020
ISBN: 0-358-31659-6
Format: Kindle
Pages: 230
Like many other people, I first became aware of Anne Helen Petersen's journalism when her Buzzfeed article "How Millennials Became the Burnout Generation" went viral. Can't Even is the much-awaited (at least by me) book-length expansion of that thesis: The United States is, as a society, burning out, and that burnout is falling on millennials the hardest. We're not recognizing the symptoms because we think burnout looks like something dramatic and flashy. But for most people burnout looks less like a nervous breakdown and more like constant background anxiety and lack of energy.
Laura, who lives in Chicago and works as a special ed teacher, never wants to see her friends, or date, or cook; she's so tired, she just wants to melt into the couch. "But then I can't focus on what I'm watching, and end up unfocused again, and not completely relaxing," she explained. "Here I am telling you I don't even relax right! I feel bad about feeling bad! But by the time I have leisure time, I just want to be alone!"
Petersen explores this idea across childhood, education, work, family, and parenting, but the core of her thesis is the precise opposite of the pervasive myth that millennials are entitled and lazy (a persistent generational critique that Petersen points out was also leveled at their Baby Boomer parents in the 1960s and 1970s). Millennials aren't slackers; they're workaholics from childhood, for whom everything has become a hustle and a second (or third or fourth) job. The struggle with "adulting" is a symptom of the burnout on the other side of exhaustion, the mental failures that happen when you've forced yourself to keep going on empty so many times that it's left lingering damage.

Petersen is a synthesizing writer who draws together the threads of other books rather than going deep on a novel concept, so if you've been reading about work, psychology, stress, and productivity, many of the ideas here will be familiar. But she's been reading the same authors that I've been reading (Tressie McMillan Cottom, Emily Guendelsberger, Brigid Schulte, and even Cal Newport), and this was the book that helped me pull those analyses together into a coherent picture.

That picture starts with the shift of risk in the 1970s and 1980s from previously stable corporations with long-lasting jobs and retirement pensions onto individual employees. The corresponding rise in precarity and therefore fear led to a concerted effort to re-establish a feeling of control. Baby Boomers doubled down on personal responsibility and personal capability, replacing unstructured childhood for their kids with planned activities and academic achievement. That generation, in turn, internalized the need for constant improvement, constant grading, and constant achievement, accepting an implied bargain that if they worked very hard, got good grades, got into good schools, and got a good degree, it would pay off in a good life and financial security. They were betrayed. The payoff never happened; many millennials graduated into the Great Recession and the worst economy since World War II. In response, millennials doubled down on the only path to success they were taught. They took on more debt, got more education, moved back in with their parents to cut expenses, and tried even harder.
Even after watching our parents get shut out, fall from, or simply struggle anxiously to maintain the American Dream, we didn't reject it. We tried to work harder, and better, more efficiently, with more credentials, to achieve it.
Once one has this framework in mind, it's startling how pervasive the "just try harder" message is and how deeply we've internalized it. It is at the center of the time management literature: Getting Things Done focuses almost entirely on individual efficiency. Later time management work has become more aware of the importance of pruning the to-do list and doing fewer things, but addresses that through techniques for individual prioritization. Cal Newport is more aware than most that constant busyness and multitasking interacts poorly with the human brain, and has taken a few tentative steps towards treating the problem as systemic rather than individual, but his focus is still primarily on individual choices. Even when tackling a problem that is clearly societal, such as the monetization of fear and outrage on social media, the solutions are all individual: recognize that those platforms are bad for you, make an individual determination that your attention is being exploited, and quit social media through your personal force of will.

And this isn't just productivity systems. Most of public discussion of environmentalism in the United States is about personal energy consumption, your individual carbon footprint, household recycling, and whether you personally should eat meat. Discussions of monopoly and monopsony become debates over whether you personally should buy from Amazon. Concerns about personal privacy turn into advocacy for using an ad blocker or shaming people for using Google products. Articles about the growth of right-wing extremism become exhortations to take responsibility for the right-wing extremist in your life and argue them out of their beliefs over the dinner table. Every major systemic issue facing society becomes yet another personal obligation, another place we are failing as individuals, something else that requires trying harder, learning more, caring more, doing more.

This advice is well-meaning (mostly; sometimes it is an intentional and cynical diversion), and can even be effective with specific problems. But it's also a trap. If you're feeling miserable, you just haven't found the right combination of time-block scheduling, kanban, and bullet journaling yet. If you're upset at corporate greed and the destruction of the environment, the change starts with you and your household. The solution is in your personal hands; you just have to try a little harder, work a little harder, make better decisions, and spend money more ethically (generally by buying more expensive products). And therefore, when we're already burned out, every topic becomes another failure, increasing our already excessive guilt and anxiety.

Believing that we're in control, even when we're not, does have psychological value. That's part of what makes it such a beguiling trap. While drafting this review, I listened to Ezra Klein's interview with Robert Sapolsky on poverty and stress, and one of the points he made is that, when mildly or moderately bad things happen, believing you have control is empowering. It lets you recast the setback as a larger disaster that you were able to prevent and avoid a sense of futility. But when something major goes wrong, believing you have control is actively harmful to your mental health. The tragedy is now also a personal failure, leading to guilt and internal recrimination on top of the effects of the tragedy itself. This is why often the most comforting thing we can say to someone else after a personal disaster is "there's nothing you could have done."
Believing we can improve our lives if we just try a little harder does work, until it doesn't. And because it does work for smaller things, it's hard to abandon; in the short term, believing we're at the mercy of forces outside our control feels even worse. So we double down on self-improvement, giving ourselves even more things to attempt to do and thus burning out even more. Petersen is having none of this, and her anger is both satisfying and clarifying.
In writing that article, and this book, I haven't cured anyone's burnout, including my own. But one thing did become incredibly clear. This isn't a personal problem. It's a societal one and it will not be cured by productivity apps, or a bullet journal, or face mask skin treatments, or overnight fucking oats. We gravitate toward those personal cures because they seem tenable, and promise that our lives can be recentered, and regrounded, with just a bit more discipline, a new app, a better email organization strategy, or a new approach to meal planning. But these are all merely Band-Aids on an open wound. They might temporarily stop the bleeding, but when they fall off, and we fail at our new-found discipline, we just feel worse.
Structurally, Can't Even is half summaries of other books and essays put into this overall structure and half short profiles and quotes from millennials that illustrate her point. This is Petersen's typical journalistic style if you're familiar with her other work. It gains a lot from the voices of individuals, but it can also feel like argument from anecdote. If there's an epistemic flaw in this book, it's that Petersen defends her arguments more with examples than with scientific study. I've read enough of the other books she cites, many of which do go into the underlying studies and statistics, to know that her argument is well-grounded, but I think Can't Even works better as a roadmap and synthesis than as a primary source of convincing data.

The other flaw that I'll mention is that although Petersen tries very hard to incorporate poorer and non-white millennials, I don't think the effort was successful, and I'm not sure it was possible within the structure of this book. She frequently makes a statement that's accurate and insightful for millennials from white, middle-class families, acknowledges that it doesn't entirely apply to, for example, racial minorities, and then moves on without truly reconciling those two perspectives. I think this is a deep structural problem: One's experience of American life is very different depending on race and class, and the phenomenon that Petersen is speaking to is to an extent specific to those social classes who had a more comfortable and relaxing life and are losing it. One way to see the story of the modern economy is that white people are becoming as precarious as everyone else already was, and are reacting by making the lives of non-white people yet more miserable. Petersen is accurately pointing to significant changes in relationships with employers, productivity, family, and the ideology of individualism, but experiencing that as a change is more applicable to white people than non-white people. That means there are, in a way, two books here: one about the slow collapse of the white middle class into constant burnout, and a different book about the much longer-standing burnout of being non-white in the United States and our systemic failure to address the causes of it. Petersen tries to gesture at the second book, but she's not the person to write it and those two books cannot comfortably live between the same covers. The gestures therefore feel awkward and forced, and while the discomfort itself serves some purpose, it lacks the insight that Petersen brings to the rest of the book.

Those critiques aside, I found Can't Even immensely clarifying. It's the first book that explained to me in a way I understood what's so demoralizing and harmful about Instagram and its allure of cosplaying as a successful person. It helped me understand how productivity and individual political choices fit into a system that emphasizes individual action as an excuse to not address collective problems. And it also gave me a strange form of hope, because if something can't go on forever, it will, at some point, stop.
Millennials have been denigrated and mischaracterized, blamed for struggling in situations that set us up to fail. But if we have the endurance and aptitude and wherewithal to work ourselves this deeply into the ground, we also have the strength to fight. We have little savings and less stability. Our anger is barely contained. We're a pile of ashes smoldering, a bad memory of our best selves. Underestimate us at your peril: We have so little left to lose.
Nothing will change without individual people making different decisions and taking different actions than they are today. But we have gone much too far down the path of individual, atomized actions that may produce feelings of personal virtue but that are a path to ineffectiveness and burnout when faced with systemic problems. We need to make different choices, yes, but choices towards solidarity and movement politics rather than personal optimization. There is a backlash coming. If we let it ground itself in personal grievance, it could turn ugly and take a racist and nationalist direction. But that's not, by and large, what millennials have done, and that makes me optimistic. If we embrace the energy of that backlash and help shape it to be more inclusive, just, and fair, we can rediscover the effectiveness of collective solutions for collective problems. Rating: 8 out of 10

22 July 2020

Bits from Debian: Let's celebrate DebianDay 2020 around the world

We encourage our community to celebrate the 27th Debian anniversary around the world with organized DebianDay events. This year, due to the COVID-19 pandemic, we cannot organize in-person events, so we ask instead that contributors, developers, teams, groups, maintainers, and users promote The Debian Project and Debian activities online on August 16th (and/or 15th). Communities can organize a full schedule of online activities throughout the day. These activities can include talks, workshops, active participation with contributions such as translation assistance or editing, debates, BoFs, and all of this in your local language, using tools such as Jitsi for capturing audio and video from presenters for later streaming to YouTube. If you are not aware of any local community organizing a full event, or you don't want to join one, you can design your own solo activity using OBS and stream it to YouTube. You can watch an OBS tutorial here. Don't forget to record your activity, as it will be a nice idea to upload it to PeerTube later. Please add your event/activity to the DebianDay wiki page and let us know about it so we can advertise it on Debian micronews. PS: DebConf20 online is coming! It will be held from August 23rd to 29th, 2020. Registration is already open.

30 April 2017

Chris Lamb: Free software activities in April 2017

Here is my monthly update covering what I have been doing in the free software world (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced, either maliciously or accidentally, during this compilation process, by promising that identical results are always generated from a given source, thus allowing multiple third parties to come to a consensus on whether a build was compromised. I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.
This month I made the following changes to diffoscope, our recursive and content-aware diff utility used to locate and diagnose reproducibility issues:
  • New features:
    • Add support for comparing Ogg Vorbis files. (0436f9b)
  • Bug fixes:
    • Prevent a traceback when using --new-file with containers. (#861286)
    • Don't crash on invalid archives; print a useful error instead. (#833697)
    • Don't print error output from bzip2 call. (21180c4)
  • Cleanups:
    • Prevent abstraction-level violations by defining visual diff support on Presenter classes. (7b68309)
    • Show Debian packages installed in test output. (c86a9e1)


Debian
Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 882-1 for the tryton-server general application platform to fix a path suffix injection attack.
  • Issued DLA 883-1 for curl preventing a buffer read overrun vulnerability.
  • Issued DLA 884-1 for collectd (a statistics collection daemon) to close a potential infinite loop vulnerability.
  • Issued DLA 885-1 for the python-django web development framework patching two open redirect & XSS attack issues.
  • Issued DLA 890-1 for ming, a library to create Flash files, closing multiple heap-based buffer overflows.
  • Issued DLA 892-1 and DLA 891-1 for the libnl3/libnl Netlink protocol libraries, fixing integer overflow issues which could have allowed arbitrary code execution.

Uploads
  • redis (4:4.0-rc3-1) New upstream RC release.
  • adminer:
    • 4.3.0-2 Fix debian/watch file.
    • 4.3.1-1 New upstream release.
  • bfs:
    • 1.0-1 Initial release.
    • 1.0-2 Drop fstype tests as they rely on /etc/mtab being available. (#861471)
  • python-django:
    • 1:1.10.7-1 New upstream security release.
    • 1:1.11-1 New upstream stable release to experimental.

I sponsored several uploads, and also performed the following QA uploads:
  • gtkglext (1.2.0-7) Correct installation location of gdkglext-config.h after "Multi-Archification" in 1.2.0-5. (#860007)
Finally, I made the following non-maintainer uploads (NMUs):
  • python-formencode (1.3.0-2) Don't ship files in /usr/lib/python{2.7,3}/dist-packages/docs. (#860146)
  • django-assets (0.12-2) Patch pytest plugin to check whether we are running in a Django context, otherwise we can break unrelated testsuites. (#859916)


FTP Team

As a Debian FTP assistant I ACCEPTed 155 packages: aiohttp-cors, bear, colorize, erlang-p1-xmpp, fenrir, firejail, fizmo-console, flask-ldapconn, flask-socketio, fontmanager.app, fonts-blankenburg, fortune-zh, fw4spl, fzy, gajim-antispam, gdal, getdns, gfal2, gmime, golang-github-go-macaron-captcha, golang-github-go-macaron-i18n, golang-github-gogits-chardet, golang-github-gopherjs-gopherjs, golang-github-jroimartin-gocui, golang-github-lunny-nodb, golang-github-markbates-goth, golang-github-neowaylabs-wabbit, golang-github-pkg-xattr, golang-github-siddontang-goredis, golang-github-unknwon-cae, golang-github-unknwon-i18n, golang-github-unknwon-paginater, grpc, grr-client-templates, gst-omx, hddemux, highwayhash, icedove, indexed-gzip, jawn, khal, kytos-utils, libbloom, libdrilbo, libhtml-gumbo-perl, libmonospaceif, libpsortb, libundead, llvm-toolchain-4.0, minetest-mod-homedecor, mini-buildd, mrboom, mumps, nnn, node-anymatch, node-asn1.js, node-assert-plus, node-binary-extensions, node-bn.js, node-boom, node-brfs, node-browser-resolve, node-browserify-des, node-browserify-zlib, node-cipher-base, node-console-browserify, node-constants-browserify, node-delegates, node-diffie-hellman, node-errno, node-falafel, node-hash-base, node-hash-test-vectors, node-hash.js, node-hmac-drbg, node-https-browserify, node-jsbn, node-json-loader, node-json-schema, node-loader-runner, node-miller-rabin, node-minimalistic-crypto-utils, node-p-limit, node-prr, node-sha.js, node-sntp, node-static-module, node-tapable, node-tough-cookie, node-tunein, node-umd, open-infrastructure-storage-tools, opensvc, openvas, pgaudit, php-cassandra, protracker, pygame, pypng, python-ase, python-bip32utils, python-ltfatpy, python-pyqrcode, python-rpaths, python-statistics, python-xarray, qtcharts-opensource-src, r-cran-cellranger, r-cran-lexrankr, r-cran-pwt9, r-cran-rematch, r-cran-shinyjs, r-cran-snowballc, ruby-ddplugin, ruby-google-protobuf, ruby-rack-proxy, ruby-rails-assets-underscore, rustc, sbt, sbt-launcher-interface, sbt-serialization, sbt-template-resolver, scopt, seqsero, shim-signed, sniproxy, sortedcollections, starjava-array, starjava-connect, starjava-datanode, starjava-fits, starjava-registry, starjava-table, starjava-task, starjava-topcat, starjava-ttools, starjava-util, starjava-vo, starjava-votable, switcheroo-control, systemd, tilix, tslib, tt-rss-notifier-chrome, u-boot, unittest++, vc, vim-ledger, vis, wesnoth-1.13, wolfssl, wuzz, xandikos, xtensor-python & xwallpaper. I additionally filed 14 RC bugs against packages that had incomplete debian/copyright files against getdns, gfal2, grpc, mrboom, mumps, opensvc, python-ase, sniproxy, starjava-topcat, starjava-ttools, unittest++, wolfssl, xandikos & xtensor-python.

25 March 2017

Bits from Debian: Debian Project Leader elections 2017

It's that time of year again for the Debian Project: the election of its Project Leader! The Project Leader position is described in the Debian Constitution. Two Debian Developers are running this year to become Project Leader: Mehdi Dogguy, who has held the office for the last year, and Chris Lamb. We are in the middle of the campaigning period, which will last until the end of April 1st. The candidates and Debian contributors are already engaging in debates and discussions on the debian-vote mailing list. The voting period starts on April 2nd, and during the following two weeks Debian Developers can vote to choose the person who will fill that role for one year. The results will be published on April 16th, with the term for the new Project Leader starting the following day.

8 February 2017

Antoine Beaupré: Reliably generating good passwords

Passwords are used everywhere in our modern life. Between your email account and your bank card, a lot of critical security infrastructure relies on "something you know", a password. Yet there is little standard documentation on how to generate good passwords. There are some interesting possibilities for doing so; this article will look at what makes a good password and some tools that can be used to generate them. There is growing concern that our dependence on passwords poses a fundamental security flaw. For example, passwords rely on humans, who can be coerced to reveal secret information. Furthermore, passwords are "replayable": if your password is revealed or stolen, anyone can impersonate you to get access to your most critical assets. Therefore, major organizations are trying to move away from single-password authentication. Google, for example, is enforcing two-factor authentication for its employees and is considering abandoning passwords on phones as well, although we have yet to see that controversial change implemented. Yet passwords are still here and are likely to stick around for a long time until we figure out a better alternative. Note that in this article I use the word "password" instead of "PIN" or "passphrase", which all roughly mean the same thing: a small piece of text that users provide to prove their identity.

What makes a good password? A "good password" may mean different things to different people. I will assert that a good password has the following properties:
  • high entropy: hard to guess for machines
  • transferable: easy to communicate for humans or transfer across various protocols for computers
  • memorable: easy to remember for humans
High entropy means that the password should be unpredictable to an attacker, for all practical purposes. It is tempting (and not uncommon) to choose a password based on something else that you know, but unfortunately those choices are likely to be guessable, no matter how "secret" you believe they are. Yes, with enough effort, an attacker can figure out your birthday, the name of your first lover, your mother's maiden name, where you were last summer, or other secrets people think they have. The only solution here is to use a password randomly generated with enough randomness or "entropy" that brute-forcing the password will be practically infeasible. Considering that a modern off-the-shelf graphics card can guess millions of passwords per second using freely available software like hashcat, the typical requirement of "8 characters" is not considered enough anymore. With proper hardware, a powerful rig can crack such passwords offline within about a day. Even though a recent US National Institute of Standards and Technology (NIST) draft still recommends a minimum of eight characters, we now more often hear recommendations of twelve or fourteen characters.

A password should also be easily "transferable". Some characters, like & or !, have special meaning on the web or the shell and can wreak havoc when transferred. Certain software also has policies of refusing (or requiring!) some special characters exactly for that reason. Weird characters also make it harder for humans to communicate passwords across voice channels or different cultural backgrounds. In a more extreme example, the popular Signal software even resorted to using only digits to transfer key fingerprints. They outlined that numbers are "easy to localize" (as opposed to words, which are language-specific) and "visually distinct".

But the critical piece is the "memorable" part: it is trivial to generate a random string of characters, but those passwords are hard for humans to remember. As xkcd noted, "through 20 years of effort, we've successfully trained everyone to use passwords that are hard for humans to remember, but easy for computers to guess". It explains how a series of words is a better password than a single word with some characters replaced.

Obviously, you should not need to remember all passwords. Indeed, you may store some in password managers (which we'll look at in another article) or write them down in your wallet. In those cases, what you need is not a password, but something I would rather call a "token", or, as Debian Developer Daniel Kahn Gillmor (dkg) said in a private email, a "high entropy, compact, and transferable string". Certain APIs are specifically crafted to use tokens. OAuth, for example, generates "access tokens" that are random strings that give access to services. But in our discussion, we'll use the term "token" in a broader sense.

Notice how we removed the "memorable" property and added the "compact" one: we want to efficiently convert the most entropy into the shortest password possible, to work around possibly limiting password policies. For example, some bank cards only allow 5-digit security PINs and most web sites have an upper limit on password length. The "compact" property applies less to "passwords" than to tokens, because I assume that you will only use a password in select places: your password manager, SSH and OpenPGP keys, your computer login, and encryption keys. Everything else should be in a password manager. Those tools are generally under your control and should allow large enough passwords that the compact property is not particularly important.
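To put rough numbers on the above (my own back-of-the-envelope arithmetic, not from any particular standard): an eight-character password drawn from a 72-symbol alphabet (uppercase, lowercase, digits, and ten symbols) has about 49 bits of entropy, while twelve characters give about 74, which can be checked with bc:

    $ echo '8*l(72)/l(2); 12*l(72)/l(2)' | bc -l

This prints roughly 49.36 and 74.04, consistent with the verdict that eight characters are not enough anymore.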

Generating secure passwords We'll now look at how to generate a strong, transferable, and memorable password. These are most likely the passwords you will deal with most of the time, as security tokens used in other settings should actually never show up on screen: they should be copy-pasted or automatically typed in forms. The password generators described here are all operated from the command line. Password managers often have embedded password generators, but usually don't provide an easy way to generate a password for the vault itself.

The previously mentioned xkcd cartoon is probably a common cultural reference in the security crowd and I often use it to explain how to choose a good passphrase. It turns out that someone actually implemented xkcd author Randall Munroe's suggestion into a program called xkcdpass:
    $ xkcdpass
    estop mixing edelweiss conduct rejoin flexitime
In verbose mode, it will show the actual entropy of the generated passphrase:
    $ xkcdpass -V
    The supplied word list is located at /usr/lib/python3/dist-packages/xkcdpass/static/default.txt.
    Your word list contains 38271 words, or 2^15.22 words.
    A 6 word password from this list will have roughly 91 (15.22 * 6) bits of entropy,
    assuming truly random word selection.
    estop mixing edelweiss conduct rejoin flexitime
Note that the above password has 91 bits of entropy, which is about what a fifteen-character password would have, if chosen at random from uppercase, lowercase, digits, and ten symbols:
    log2((26 + 26 + 10 + 10)^15) = approx. 92.548875
It's also interesting to note that this is closer to the entropy of a fifteen-letter base64-encoded password: since each character is six bits, you end up with 90 bits of entropy. xkcdpass is scriptable and easy to use. You can also customize the word list, separators, and so on with different command-line options. By default, xkcdpass uses the 2 of 12 word list from 12dicts, which is not specifically geared toward password generation but has been curated for "common words" and words of different sizes.

Another option is the diceware system. Diceware works by having a word list in which you look up words based on dice rolls. For example, rolling the five dice "1 4 2 1 4" would give the word "bilge". By rolling those dice five times, you generate a five-word password that is both memorable and random. Since paper and dice do not seem to be popular anymore, someone wrote that as an actual program, aptly called diceware. It works in a similar fashion, except that passwords are not space-separated by default:
    $ diceware
    AbateStripDummy16thThanBrock
Diceware can obviously change the output to look similar to xkcdpass, but can also accept actual dice rolls for those who do not trust their computer's entropy source:
    $ diceware -d ' ' -r realdice -w en_orig
    Please roll 5 dice (or a single dice 5 times).
    What number shows dice number 1? 4
    What number shows dice number 2? 2
    What number shows dice number 3? 6
    [...]
    Aspire O's Ester Court Born Pk
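For the curious, the table lookup that the program automates can also be done by hand with grep, assuming the standard word-list format of a five-digit dice index followed by the word; using the "1 4 2 1 4" roll from the earlier example:

    $ grep '^14214' diceware.wordlist.asc
    14214   bilge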
The diceware software ships with a few word lists, and the default list has been deliberately created for generating passwords. It is derived from the standard diceware list with additions from the SecureDrop project. Diceware also ships with the EFF word list, which has words chosen for better recognition, but it is not enabled by default, even though diceware recommends using it when generating passwords with dice. That is because the EFF list was added later on. The project is currently considering making the EFF list the default. One disadvantage of diceware is that it doesn't actually show how much entropy the generated password has; those interested need to compute it for themselves. The actual number depends on the word list: the default word list has 13 bits of entropy per word (since it is exactly 8192 words long), which means the default 6-word passwords have 78 bits of entropy:
    log2(8192) * 6 = 78
Both of these programs are rather new, having, for example, entered Debian only after the last stable release, so they may not be directly available for your distribution. The manual diceware method, of course, only needs a set of dice and a word list, so it is much more portable; both the diceware and xkcdpass programs can also be installed through pip. However, if this is all too complicated, you can take a look at Openwall's passwdqc, which is older and more widely available. It generates more memorable passphrases while at the same time allowing for better control over the level of entropy:
    $ pwqgen
    vest5Lyric8wake
    $ pwqgen random=78
    Theme9accord=milan8ninety9few
For some reason, passwdqc restricts the entropy of passwords to between 24 and 85 bits. That tool is also much less customizable than the other two: what you see here is pretty much what you get. The 4096-word list is also hardcoded in the C source code; it comes from a Usenet sci.crypt posting from 1997.

A key feature of xkcdpass and diceware is that you can craft your own word list, which can make dictionary-based attacks harder. Indeed, with such word-based password generators, the only viable way to crack those passwords is to use dictionary attacks, because the password is so long that character-based exhaustive searches are not workable, since they would take centuries to complete. Changing from the default dictionary therefore brings some advantage against attackers. This may be yet another "security through obscurity" procedure, however: a naive approach may be to use a dictionary localized to your native language (for example, in my case, French), but that would deter only an attacker that doesn't do basic research about you, so that advantage is quickly lost to determined attackers. One should also note that the entropy of the password doesn't depend on which words are in the list, only on the list's length. Furthermore, a larger dictionary only expands the search space logarithmically; in other words, doubling the word-list length only adds a single bit of entropy. It is actually much better to add a word to your password than words to the word list that generates it.
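A quick sanity check of that last claim (again my own bc arithmetic): doubling the 8192-word list to 16384 words gains one bit per word, so six bits in total for a six-word password, while simply adding a seventh word from the original list gains thirteen:

    $ echo '6*l(16384)/l(2); 7*l(8192)/l(2)' | bc -l

This prints roughly 84 and 91 bits, compared to the original 78.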

Generating security tokens As mentioned before, most password managers feature a way to generate strong security tokens, with different policies (symbols or not, length, etc.). In general, you should use your password manager's password-generation functionality to generate tokens for sites you visit. But how are those functionalities implemented, and what can you do if your password manager (for example, Firefox's master password feature) does not actually generate passwords for you?

pass, the standard UNIX password manager, delegates this task to the widely known pwgen program. It turns out that pwgen has a pretty bad track record for security issues, especially in the default "phoneme" mode, which generates non-uniformly distributed passwords. While pass uses the more "secure" -s mode, I figured it was worth removing that option to discourage the use of pwgen in the default mode. I made a trivial patch to pass so that it generates passwords correctly on its own. The gory details are in this email. It turns out that there are lots of ways to skin this particular cat. I was suggesting the following pipeline to generate the password:
    head -c $entropy /dev/random | base64 | tr -d '\n='
The above command reads a certain number of bytes from the kernel (head -c $entropy /dev/random), encodes that using the base64 algorithm, and strips out the trailing equal sign and newlines (for large passwords). This is what Gillmor described as a "high-entropy compact printable/transferable string". The priority, in this case, is to have a token that is as compact as possible with the given entropy, while at the same time using a character set that should cause as little trouble as possible on sites that restrict the characters you can use. Gillmor is a co-maintainer of the Assword password manager, which chose base64 because it is widely available and understood, and only takes up 33% more space than the original 8-bit binary encoding. After a lengthy discussion, the pass maintainer, Jason A. Donenfeld, chose the following pipeline:
    read -r -n $length pass < <(LC_ALL=C tr -dc "$characters" < /dev/urandom)
The above is similar, except it uses tr to read characters directly from the kernel, selecting a certain set of characters ($characters) that is defined earlier as consisting of [:alnum:] for letters and digits and [:graph:] for symbols, depending on the user's configuration. The read command then extracts the chosen number of characters from the output and stores the result in the pass variable. A participant on the mailing list, Brian Candler, argued that this wastes entropy, as the use of tr discards bits from /dev/urandom with little gain in entropy when compared to base64. But in the end, the maintainer argued that "reading from /dev/urandom has no [effect] on /proc/sys/kernel/random/entropy_avail on Linux" and dismissed the objection. Another password manager, KeePass, uses its own routines to generate tokens, but the procedure is the same: read from the kernel's entropy source (and user-generated sources in the case of KeePass) and transform that data into a transferable string.
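To make the two pipelines concrete, here is a sketch of both (my illustration: the entropy, length, and characters values are assumptions rather than any tool's actual defaults, the read line uses bash process substitution, and every run will of course print something different):

    $ entropy=16   # bytes, i.e. 128 bits
    $ head -c $entropy /dev/random | base64 | tr -d '\n='
    mUfF0yJOtV5IAn2M6SI4uQ
    $ characters='[:alnum:]'   # use [:graph:] to also include symbols
    $ length=25
    $ read -r -n "$length" pass < <(LC_ALL=C tr -dc "$characters" < /dev/urandom)
    $ echo "$pass"
    hK9mT2qwL7xRb4cJdN8sVzY0e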

Conclusion While there are many aspects to password management, we have focused on different techniques for users and developers to generate secure but also usable passwords. Generating a strong yet memorable password is not a trivial problem, as the security vulnerabilities of the pwgen software showed. Furthermore, left to their own devices, users will generate passwords that can be easily guessed by a skilled attacker, especially if they can profile the user. It is therefore essential that we provide easy tools for users to generate strong passwords and encourage them to store secure tokens in password managers.
Note: this article first appeared in the Linux Weekly News.
