
1 July 2021

Paul Wise: FLOSS Activities June 2021

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review
  • Spam: reported 3 Debian bug reports and 135 Debian mailing list posts
  • Debian wiki: RecentChanges for the month
  • Debian BTS usertags: changes for the month
  • Debian screenshots:
    • approved php-horde endless-sky claws-mail memtester
    • rejected python-gdal/weboob-qt (unrelated software)

Administration
  • Debian: restart bacula director
  • Debian wiki: approve accounts

Communication
  • This month I left freenode, an IRC network I had been on for at least 16 years, for reasons that you probably all read about. I think the biggest lesson I take from this situation and ones happening around the same time is that proper governance in peer production projects is absolutely critical.
  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors The purple-discord/flower work was sponsored by my employers. All other work was done on a volunteer basis.

30 June 2021

Aigars Mahinovs: Keeping it as simple as possible

You know that you've had the same server too long when they discontinue the entire class of servers you were using and you need to migrate to a new instance. And you know you've not done anything with that server (and the blog running on it) for too long when you have no idea how that thing is actually working. It's a good opportunity to start over from scratch, and a good motivation to do the new thing as simply as humanly possible, or even simpler. So I am switching to a statically generated blog as well. Not sure what took me so long, but thank goodness the tooling has really improved since the last time I looked. It was as simple as picking Nikola, finding its import_feed plugin, changing the BLOG_RSS_LIMIT in my Django Mezzanine blog to a thousand (to export all posts via the RSS/ATOM feed), fixing some bugs in the import_feed plugin, waiting a few minutes for the full feed to generate and be imported, adjusting the config of the resulting site, pushing that to git, and writing a simple shell script to pull that repo periodically and call nikola build on it, as well as config to serve the result via nginx. Done. After that, creating a new blog post is just nikola new_post, editing it in vim, and pushing to git. I prefer Markdown, but it supports all kinds of formats. And the old posts are just stored as HTML. Really simple. I think I will spend more time fighting with Google to allow me to forward email from my domain to my GMail postbox without it refusing all of it as spam.
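The pull-and-rebuild loop described above can be sketched roughly like this. All paths, the function name, and the crontab line are my own placeholders for illustration, not the actual setup:

```shell
#!/bin/sh
# Rough sketch of a cron-driven static-blog rebuild: pull the git repo,
# run nikola build, sync the output to the directory nginx serves.
# repo_dir and out_dir are assumed paths, not the author's real ones.

rebuild_blog() {
    repo_dir="$HOME/blog"      # local clone of the blog's git repository
    out_dir="/var/www/blog"    # directory nginx is configured to serve

    cd "$repo_dir" || return 1
    git pull --ff-only         # pick up posts pushed from the editing machine
    nikola build               # regenerate the static site into ./output
    rsync -a --delete output/ "$out_dir/"
}

# A crontab entry such as the following would run it periodically:
#   */15 * * * * /usr/local/bin/rebuild-blog.sh
```

With something like this in place, publishing really does reduce to nikola new_post, editing in vim, and git push; the server picks the change up on the next cron run.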

28 June 2021

Shirish Agarwal: Indian Capital Markets, BSE, NSE

I had been meaning to write on the above topic for almost a couple of months now but just kept procrastinating about it. The push came to shove when Sucheta Dalal and Debasis Basu shared their understanding and wisdom in their new book, Absolute Power: Inside Story of the National Stock Exchange's Amazing Success, Leading to Hubris, Regulatory Capture and Algo Scam. Now, I will not go into the details of the new book, as I have not bought it yet; but even if I had bought it and shared some of the revelations from it, it wouldn't have done justice to either the book or what it shares without knowing some of the background behind it.

Before I jump ahead, I would suggest people read my sort-of introductory blog post on banking history so they know where I'm coming from. I'm going to deviate a bit from banking, as this is about trade and capital markets, although banking will come in later on. And I will also be sharing some cultural insights along with the history so people are aware of why things happened the way they did.

Calicut, Calcutta, Kolkata: one-time major depot of the world

Now, one cannot start any topic about trade without talking about Kolkata. While today it seems like a bastion of communism, at one time it was one of the major trade depots of the world. Both William Dalrymple and the Chinese have many times mentioned Kolkata as being one of the major centers of trade, between the 13th and the late 19th century. A cursory look throws up this article, which talks about Kolkata, or Calicut as it was known, as a major trade depot. There are of course many, many articles and even books which tell how Kolkata was a major trade depot. Now, between the 13th and 19th centuries a lot of changes happened which made Kolkata poorer and shifted trade to Mumbai/Bombay, which in those times was nothing but a port city like many others.

The Rise of the Zamindar

Around the 15th century, when Babur invaded Hindustan, he realized that Hindustan was too big a country to be governed alone. And Hindustan was much broader than independent India is today. So he created the title of Zamindar. Interestingly, if you look at the Mughal period, they were much more in tune with Hindustani practices than the British who came later. They used the caste divisions and hierarchy wisely, making sure that the status quo was maintained as far as castes/creeds were concerned. While in-fighting with various rulers continued, it was more or less about land and power rather than anything else. When the Britishers came, they co-opted the same arrangement with a minor adjustment: while under the earlier system the zamindars did not have the powers of landowners, the Britishers gave them land ownership. A huge percentage of these zamindars, especially in Bengal, were from my own caste, Banias or Baniyas. The problem, and the solution, for the Britishers was that this was a large land to control and exploit while the number of British officers and nobles was very small. So they gave a great many powers to the Banias. The only thing the British insisted on was very high rents from the newly minted Zamindars. The Zamindars in turn used the powers of personal fiefdom to give loans at very high interest rates; when the poor were unable to pay the interest, they would take the land, while at the same time slavery was forced on both men and women, and many a time rapes and affairs. While there have been many records shedding light on this, I don't think any could be more powerful than what was enacted and shared by Shabana Azmi in Ankur: The Seedling. Another prominent grouping formed around the same time was the Bhadralok. The Bhadralok, while having all the amenities of belonging to the community, turned a blind eye to the excesses being done by the Zamindars. How much they played a hand in the decimation of Bengal has been a matter of debate, but they did have a hand; that much is not contested.

The Rise of Stock Exchanges

Sadly and interestingly, many people believe and continue to believe that stock exchanges are a recent phenomenon. The first stock exchange, though, was the Calcutta Stock Exchange rather than the Bombay Stock Exchange. How valuable Calcutta was to the Britishers in its early years can be gauged from the fact that at one time, in 1772, it was made the capital of India. In fact, after the Grand Trunk Road (after which there have even been train names in both countries) was built, any number of books were written on the trade between Calcutta and Peshawar (now in Pakistan). And it was not just limited to trade but also cultural give-and-take between the two centers. Even today, if you look on YT (YouTube) at some interviews of old people, you find many interesting anecdotes of people sharing both culture and trade.

The problem of the 60s and the rise of BSE

After India became independent and the constitutional debates happened, the new elites understood that there could not be two power centers governing India. On one hand were the politicians who had come to power on the back of the popular vote; on the other were the Zamindars, who more often than not had abused their powers, which resulted in widespread poverty. The Britishers are to blame, but so are the middlemen, as they became willing enablers of the same system of oppression. Hence, you had the 1951 amendment to the Constitution and the 1956 Zamindari Abolition Act. In fact, you can find a much more in-depth article about both the Zamindars and their final abolition here. Now, once Zamindari was gone, there was nothing to replace it with. The Zamindars, ousted from their old roles, turned around and tried to become industrialists. The problem was that the poor and the downtrodden had already had experiences with the Zamindars. Also, some industrialists from the North and West came to Bengal, but they had no understanding of either the language or the culture, or of what had happened in Bengal. And notice that I have not talked about the famines and the floods that have wrecked Bengal since time immemorial, some of which got etched on the soul of Bengal and have left marks even today. The psyche of the Bengali and the Bhadralok has gone through enormous shifts. I have met quite a few and do see the guilt they feel. If one wonders how socialist parties are able to hold power in Bengal, look no further than Tarikh, which tells and shares with you how many Bengalis even today still feel somewhat lost.

The Rise of BSE

Now, the Kolkata Stock Exchange had been going down for multiple reasons beyond those listed above. From the 1950s onwards, Jawaharlal Nehru had this idea of 5-year plans, borrowed from socialist countries such as Russia, China, etc. His vision and ambition for the newly minted Indian state were huge, while at the same time he understood we were poor: the loot by the East India Company and the Britishers, and on top of that the division of wealth with Pakistan, even though the majority of Muslims chose and remained with India. Travel on Indian Railways was a risky affair. My grandfather shared numerous tales of how he used to stuff money into socks and wear the socks inside his boots when going between Delhi and Kolkata or Pune and Kolkata. Also, as the capital became Delhi (which it unofficially had been for many years), transparency from Kolkata-based firms became less. So many Kolkata firms were either mismanaged or shut down, while Maharashtra, my own state, saw a huge boom in industrialization as well as farming. From the 1960s to the 1990s there were many booms and busts in the stock exchanges, but most were manageable.

While the 60s began on a good note, as Goa was finally freed from the Portuguese army and influence, the 1962 war with the Chinese made many a soul question where we went wrong. Jawaharlal Nehru went all over the world to ask for help but had to return home empty-handed. Bollywood showed a world of bell-bottoms and cars and whatnot, while the majority were still trying to figure out how to put two square meals on the table. India suffered one of its worst famines in those times. People had to ration food. Families made do with either one meal a day, or just roti (flatbread) rather than rice. In Bengal, things were much more severe. There were huge milk shortages, so Bengalis were told to cut down on sweets. This enraged the Bengalis as nothing else could. Note: if one wants to read how bad Indians felt at that time, all one has to read is V.S. Naipaul's An Area of Darkness. This was also the time when quite a few Indians took their first step out of India. While Air India had just started, the fares were prohibitive. Those who were not well off either worked on ships or went via passenger or cargo ships to Dubai/Qatar in the Middle East. Some went to Russia and some even to the States. While today's émigrés want to settle in the West forever and have their children and grandchildren grow up there, in the 1960s and 70s the idea was far different. The main purpose for a vast majority was to get jobs and whatnot, save maximum money, and send it back to India as remittances. The idea was to make enough money in 3, 5, or 10 years, come back to India, and then lead a comfortable life. Sadly, there has hardly been any academic work done in India, at least to my knowledge, to document the sacrifices made by Indians in search of jobs, life, purpose, etc. in the 1960s and 1970s. The 1970s was also when alternative cinema started its journey, with people like Smita Patil and Naseeruddin Shah who portrayed people's struggles on-screen. Most of those films didn't have commercial success, because the movies and the stories were bleak. While the acting was superb, most Indians loved to be captivated by fights, car chases, and whatnot rather than the dreary existence which they had. And alt cinema forced them to look into the mirror, which was frowned upon by both the masses and the classes. So cinema, which could have been a wake-up call for a lot of Indians, failed. One of the most notable works of that decade, at least to me, was Manthan. 1961 was also marked by the launch of the Economic Times and the Financial Express, which tells you there was some appetite for financial news and understanding. The 1970s was also a very turbulent time in the corporate sector and the stock exchanges. Again, the companies which were listed were run by the very well-off, and many of them had been abroad. At the same time, you had fly-by-night operators. One of the things which started in this decade was corporate wars and hostile takeovers, quite a few of which could well have a web series or two of their own. This was also a decade marked by huge labor unrest, which again changed the face of Bombay/Mumbai. From the 1950s till the 1970s, Bombay was known for its mills. So large migrant communities from all over India came to Bombay to become the next Bollywood star, and if that didn't happen, they would get jobs in the mills. Bombay/Mumbai has/had this unique feature that somehow you will make enough money to make ends meet. Of course, with the pandemic, even that has gone for a toss. Labor unrest was a defining characteristic of that decade. Three movies, Kaala Patthar, Kalyug, and Ankush, give a broad outlook on what happened in that decade. One thing which was present then and is omnipresent now is how time and time again we lost our demographic dividend. Again there was an exodus of young people who ventured out to seek their fortunes elsewhere. The 1970s and 80s were also famous for the License Raj which they brought in.
Just like in the Soviet Union, there were waiting periods for everything. A telephone line meant waiting anywhere from 4 to 8 years. In 1987, when we applied for and got a phone within 2-3 months, most of my relatives, on both my mother's and father's sides, could not believe we paid nothing to get a telephone line. We did pay the telephone guy INR 10/-, which was a somewhat princely sum, when he was installing it; even then they could not believe it, as in Northern India you couldn't get a phone line even if your number had come up. You had to pay anywhere from INR 500 to 1000 or more to get a line. This was BSNL, and to reiterate, there were no alternatives at that time.

The 1990s and the Harshad Mehta Scam

The 90s were when I was a teenager. You do all the stupid things for love, lust, whatever. That is also the time you are really introduced to the world of money. During my time, there were only three choices: Sciences, Commerce, and Arts. If History was your favorite subject, then you would take Arts; if it was not, and you were not studious, then you would take up Commerce. This is how careers were chosen. So I enrolled in Commerce. As my grandfather and family on my mother's side were interested in stocks, both as a saving and a compounding tool, I was able to see the Pune Stock Exchange in action one day. The only thing I remember of that day is people shouting loudly, waving various chits. I had no idea that deals of maybe thousands or even lakhs were being struck. The Pune Stock Exchange had been newly minted. I also participated in a couple of mock stock exchanges and came to understand that one has to be aggressive in order to win. You had to be really loud to be heard over others; you could not afford to be shy. Also, spread your risks. Sadly, nothing about the stock markets was in the syllabus. 1991 was also when we saw the Iraq war and the balance of payments crisis in India, and we didn't know that the Harshad Mehta scam was around the corner. Most of the scams in India have been caught because the person doing them was flashy. And this was the reason that even he was caught, by Ms. Sucheta Dalal, a young beat reporter from the Indian Express who had been covering the Indian stock market. Many of her articles were thought-provoking. Now, a brief understanding is required before we actually get to the scam. Because of the 1991 balance of payments crisis, the IMF rescued India on the condition that India throw its market open. In the 1980s, Rajeev Gandhi had wanted to partially open up India, but both politicians and industrialists advised him not to; we were not ready.
On 21st May 1991, Rajeev Gandhi was assassinated by the LTTE. A month later, on the back of the sympathy vote, the Narsimha Rao Govt. took power. While for most new governments there is usually a honeymoon period of 6 months or so, till they get settled in their roles, before people start asking tough questions, it was not to be for this Govt. The problem had been building for a few years. Although in many ways our economy was better than it is today, the one thing India didn't do well at that time was managing foreign exchange. Only a few Indians had both the money and the opportunity to go abroad, and the need for electronics was limited. One of the biggest imports then, and still today, is energy, i.e. oil. While today it is oil/gas and electronics, at that time it was only oil. The oil import bill was ballooning while exports were more or less stagnant and mostly comprised raw materials rather than finished products. Even today it is largely this; of the biggest industrialists in India, Ambani exports gas/oil while Adani exports coal. Anyway, the deficit was large enough to trigger a payment crisis, and Narsimha Rao had to throw open the Indian market almost overnight. Some changes became quickly apparent, while others took a long time to come.

Satellite Television and the Entry of Foreign Banks

Almost overnight, from 1 channel we became multi-channel. Star TV (Rupert Murdoch) brought us The Bold and the Beautiful, while CNN broadcast the Iraq War. It was unbelievable to us that we were getting reports of what had happened only 24-48 hours earlier. Fortunately or unfortunately, I was still very much a teenager and did not understand the import of what was happening. Even in my college, except for one or two people, it wasn't a topic for debate or talk, nor was the economy. We were basically cocooned in our own little world. But this was not the case for the rest of India, and especially the banks. The entry of foreign banks was a rude shock to Indian banks. The foreign banks were bringing both technology and sophistication in their offerings, and Indian banks needed and wanted fast money to show hefty profits. Demand for credit wasn't much, at least nowhere near the level it is today. At the same time, default on credit was nowhere near as high as it is today. But that will require its own space and article. To quench the banks' thirst for hefty profits, enter Harshad Mehta. At that point in time, banks were not permitted to invest in the securities/share market at all. They could only buy Government securities or bonds, which had a coupon rate of say 8-10%, nowhere near enough to satisfy the hefty profits desired by Indian banks. On top of that, the cash was blocked for a long time: most of these Government bonds had maturity dates anywhere between 10 and 20 years, and some even longer. Now, one wrinkle was that the banks themselves could not buy these securities directly; they had to approach a registered broker of the share market, who would do these transactions on their behalf. Here is where Mr. Mehta played his game. He shared both legal and illegal ways in which both the bank and he would prosper.
While banking at one time was thought to be conservative and somewhat cautious, whether because they were afraid that Western private banks would take that pie or for whatever other reasons, they agreed to his antics. To play the game, Harshad Mehta needed lots of cash, which the banks provided him in the guise of buying securities that were never bought; the amounts were simply transferred to his account. He actively traded stocks, at the same time formed a group, and also made the rumor mill work to his benefit. The share market is largely a reactionary market; it operates on patience, news, and the rumor mill. The effect of his shenanigans was that the price of a stock trading at, say, INR 200 reached the stratospheric height of INR 9000/- without any change in the fundamentals or outlook of the stock. His thirst didn't remain restricted to stocks; he also ventured into the unglamorous world of Govt. securities, where he started trading in large quantities too. In order to attract new clients, he cultivated a fancy lifestyle. The fancy lifestyle was what caught the eye of Sucheta Dalal, and she started investigating the deals he was doing. Being a reporter, she had the advantage of getting many doors to open and getting information that otherwise would be under lock and key. On 23rd April 1992, Sucheta Dalal broke the scam.

The Impact

The impact was almost like a shock to the markets. Even today it can be counted as one of the biggest scams in the Indian market if you adjust it for inflation. I haven't revealed much of the scam and what happened, simply because Sucheta Dalal and Debasis Basu wrote The Scam for that purpose. How do I shorten a story and experience written over roughly 300-odd pages into one or two paragraphs? It is simply impossible. The impact, though, was severe. The Indian stock market became a bear market for two years. Sucheta Dalal was kicked out of, or made to resign from, the Indian Express. The thing is simple: all newspapers survive on readership and advertisements, and the advertisers were the companies having a golden run, whether justified or not, on the bourses/stock exchange. For many companies, having a good number on the stock exchange mattered more than the company fundamentals. There was supposed to be a speedy fast-track court set up for financial crimes, but it worked only for the Harshad Mehta case, and even that took over 5 years. The scam led to the creation of the NSE (National Stock Exchange). It also led to the creation of SEBI, on paper one of the most powerful regulators, with a wide range of powers and a broad remit, but on the ground it more often than not proved to be no more than a glorified postman. And the few times it did use its powers, it used them on the wrong people, and people had to go to the courts to get justice. But then this is not about SEBI, nor is this blog post about the NSE. I have anyway shared about Absolute Power above, so I will not repeat the link here. The anecdotal impact was widespread. Our own family broker took the extreme step. To my grandfather on my mother's side, he was like a second son. The news of his suicide devastated my grandfather quite a bit, which we realized much later, when he was diagnosed with Alzheimer's.
Our family stockbroker had been punting, taking lots of cash from the market at very high rates and betting on stocks wildly as the stock market was reaching for the stars; when the market crashed, he was insolvent. How the family survived is a tale in itself. He had got married just a few years earlier and had a cute boy and girl soon after. While today both are grown up, at that time what the wife faced only she knows. There were also quite a few shareholders who took the extreme step. The stock markets in those days were largely based on trust, and even today they are, unless you are into day-trading. So there was always some money left on the table for the share/stockbroker, which would be squared off in the next deal/transaction, where again you would leave something. My grandfather once thought of going over and meeting them, and we went to the lane where their house is; seeing the line of people who had come for recovery of their loans, we turned back with a heavy heart. There was another taboo that kind of got broken that day: the taboo of acknowledging that the stock market is open to scams. From 1992 to 2021 has been a cycle of scams. Even now, today, the stock market is at unnatural highs. We know for sure that a lot of hot money is rolling around, a lot of American pension funds etc. Till it works, it will work; some news, something, and that money will be moved out. Who will be left holding the can? The Indian investors. A few days back, Ambani wrote about Adani. Now, while the facts shared are correct, is Adani the only one, the only company to have a small free float in the market? Probably more than a quarter or a third of well-respected companies have a similar configuration; the only problem is that it is difficult to know who the proxies are. Now, if I were to reflect and compare this with either the 1960s or even the 1990s, I don't find much difference, apart from the fact that the proxy is sitting in Mauritius.
At the same time, today you can speculate on almost anything: stocks, commodities, derivatives, foreign exchange, cricket matches, etc.; the list is endless. Since 2014, the rise in speculation rather than investment has been dramatic, almost stratospheric. Sadly, there are no studies, or even attempts, to document this. How much official and unofficial speculation there is in the market, nobody knows. Money markets have become both fluid and non-transparent. In theory you have all sorts of regulators, but it is still very much the Wild West. One thing to note is that even Income Tax had to change and bring in provisions to account for speculative income. So, starting from being totally illegitimate, speculation has become kind of legal and is part of Income Tax. And if speculation is not wrong, why not make Indian cricket officially a speculative event? That would be honest, and the GOI would get part of the proceeds.

Conclusion

I wish there were some positive conclusion I could draw, but sadly there is not. Just today I read two articles about the ongoing environmental issues in Himachal Pradesh. As I had shared earlier, the last time I visited those places was in 2011, and even at that time I was devastated to see the kind of construction going on. Jogiwara Road, which they showed, used to be flat single ground/first-floor dwellings, most of which were restaurants and whatnot. I had seen the water issues in both Himachal and UT (Uttarakhand) back then, and this was when they were building huge dams. In the U.S. they are removing dams, and here we want more dams.

21 June 2021

Shirish Agarwal: Accessibility, Freenode and American imperialism.

Accessibility

This is perhaps one of the strangest and yet also perhaps the straightest way to start the blog post. For the past weeks/months, I have had a strange experience. I have been using a Logitech wireless keyboard and mouse for almost a decade. Now, for the past few months and weeks, we have observed a somewhat rare phenomenon. Between us we have a single desktop computer, so my mum and I take turns on the desktop. At times, however, the system sits idle, and after 30 minutes it goes into low-power/sleep mode. Then, when you want to come back, you obviously have to give your login credentials. At times, the keyboard refuses to input any data on the login screen. Interestingly, the mouse still functions. Much more interesting is the fact that both the mouse and the keyboard use the same transceiver to send data. And I had changed the batteries to ensure it was not a power issue, but still no input :(. While my mother uses and used the power switch (I did teach her how to hold it for a few seconds and then let it go), I tried another thing myself. Using the mouse, I logged off the session, thinking that perhaps some race condition or something in the session was not letting the keystrokes be input into the system, and that a new session might resolve it. But this was not to be. Luckily, on the screen you do have the option to reboot or power off. I did a reboot and, lo and behold, the system was able to input characters again. And this has happened time and again. I tried to find GOK, having forgotten that GOK had been retired. I looked up the accessibility page on the Debian wiki. Very interesting, very detailed, but sadly it did not and does not provide the backup I needed. I tried out florence but found that the app is buggy. Moreover, the instructions provided on the lightdm screen do not work; I do not get the on-screen keyboard when I follow the instructions.
Just to be clear, this is all on Debian testing, which is going to be Debian stable soonish. I even tried the same with xvkbd but to no avail. I do use MATE as my desktop manager, so maybe the instructions need some refinement?

$ cat /etc/lightdm/lightdm-gtk-greeter.conf | grep keyboard
# a11y-states = states of accessibility features: name save state on exit, -name
#   disabled at start (default value for unlisted), +name enabled at start.
#   Allowed names: contrast, font, keyboard, reader.
keyboard=xvkbd no-gnome focus &
# keyboard-position = x y[;width height] (50%,center -0;50% 25% by default) Works only for onboard
#keyboard=

Interestingly, Debian does provide two more on-screen keyboards, matchbox as well as onboard, which comes from Ubuntu, and I have both of them installed. I find xvkbd to be enough for my work; the only issue seems to be that I cannot get it from the accessibility drop-down box at the login screen. Just to make sure that I had not switched to the GNOME display manager, I ran:

$ sudo dpkg-reconfigure gdm3

Only to find out that I am indeed running lightdm. So I am a bit confused why it doesn't come up as an option when I have the login window/login manager running. FWIW, I do run metacity as the window manager, as it plays nice with almost all the various desktop environments I have. So this is where I'm stuck. If I do get any help, I would probably also add those instructions to the wiki page, so it would be convenient for the next person who comes along with the same issue. I also need to figure out some way to know whether there is some race condition or something else happening; I have no clue how I would go about that without a whole lot of noise. I am sure there are others who may have more of an idea. FWIW, I did search unix.stackexchange as well as reddit/debian to see if I could find any meaningful posts, but came up empty.
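For what it's worth, on Debian the configured display manager can be checked without reconfiguring gdm3 at all: Debian records it in /etc/X11/default-display-manager. A small sketch (the helper function name is my own, not from any tool):

```shell
#!/bin/sh
# Report which display manager Debian has configured as the default.
# Falls back to "unknown" if the file is absent (e.g. on headless systems).

show_display_manager() {
    dm_file=/etc/X11/default-display-manager
    if [ -r "$dm_file" ]; then
        # the file holds a full path such as /usr/sbin/lightdm
        basename "$(cat "$dm_file")"
    else
        echo "unknown"
    fi
}
```

On systemd-based installs, `systemctl status display-manager.service` similarly shows which display manager unit is actually running.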

Freenode

I had not been using IRC for quite some time. The reason was multiple issues with Riot (now Element) taking up the whole space on my desktop. I got alerted to the whole thing about a week after it went down, when somebody messaged me in a DM. I *think* I had put up a thread or a mini-thread about IRC or something in response to somebody praising Telegram/WhatsApp or one of those apps. That probably triggered the DM. It took me a couple of minutes to hit upon this. I was angry and depressed, seeing the behavior of the new overlords of freenode. I did see that a lot of channels moved over to Libera. It was also interesting to see that some communities were thinking of moving to some other obscure platform, which again could be held hostage to the same thing. One could argue one way or the other, but that would be tiresome, and the fact is any network needs a lot of help to be grown and nurtured, whether it is online or offline. I also saw that Libera was using Solanum, an IRCv3-compliant server. Now, having done this initial investigation, it was time to move to an IRC client. The Libera documentation is and was pretty helpful in telling which IRC clients would be good with their network. So I first tried hexchat. I installed it and tried to add the Libera server credentials, but it didn't work. I did see that they had fixed the bug in sid/unstable and that the fix is now in testing; but at the time it was only in sid, and I wanted something which just ran the damn thing. I chanced upon quassel. I had played around with quassel quite a number of times before, so I knew I could use it. Hence, I installed it and was able to use it on the first try. I did use the encrypted server and just had to tweak some settings, with some help from their documentation, before I could use it. Although I have to say that even quassel upstream needs to get its documentation in order.
It is just all over the place, and they haven't put any effort into streamlining the documentation so that finding things becomes easier. But that can be said of many projects upstream. There is one thing, though, that all of these IRC clients lack: a password manager. Until that is fixed, it will suck, because you need another secure place to put your password(s). You either put it on your desktop somewhere (insecure) or store it in the cloud somewhere (somewhat secure, but again you need to remember that password); whatever you do is extra work. I am sure there will be a day when authenticating with NickServ will be an automated task and people can just get on talking on channels and figuring out how to be part of the various communities. As can be seen, even now there is a bit of a learning curve both for newbies and for people who know a bit about systems. Now, I know there are a lot of things that need to be fixed on the anonymity and security front, if I put on that sort of hat. For e.g., wouldn't it be cool if either the IRC client or one of its add-ons generated throwaway usernames and complex passwords? This would make things easier for those who are paranoid about security, and many are or would be. As an example, we can look at Fuchs. Now if that gentleman or lady is working in a professional capacity and their employer were to come to know of their real identity and perceive, rightly or wrongly, the role of that person, it would affect their career. Now, should it? I am sure a lot of people would be divided on the issue. Personally, I would say no, because whether right or wrong, whatever they were doing, they were doing on their own time, not on company time. So it doesn't concern the company at all. If we were to let companies police behavior outside working hours, individuals would be in a lot of trouble. Although, I have to say that is a trend that has been seen in companies firing people on either the left or the right. 
A recent example that comes to mind is Emily Wilder, who was fired by the Associated Press. Interestingly, she was interviewed by Democracy Now, and it did come out that she is a Jew. As can be seen and understood, there is a lot of nuance to her story, and not just the way she was fired. It doesn't leave a good taste in the mouth, but then nobody enjoys getting fired. On a few forums, people shared stories of people losing their jobs because they were dancing (cops). Again, it all depends; for me, hats off to anybody who feels like dancing or whatever, because there are just so many depressing stories all around.

Banned and FOE On a few forums I was banned because I was talking about Brexit and American imperialism, both of which seem to ruffle a few feathers in quite a few places. For instance, many people for obvious reasons do not like this video

Now I'm sorry I am not able to, and have not been able to, give invidious links for the past few months. The reason being that invidious itself went through some changes, and the changes are good and bad. For e.g. now you need to share your Google ID with a third party, which at least to my mind is not a good idea. But that probably is another story altogether and will need its own place. Coming back to the video itself, it was shared by Anthony Hazard and the title is 'The Atlantic slave trade: What too few textbooks told you'. I did see this video quite a few years ago and still find it hard to swallow that tens of millions of Africans were brought as slaves to the Americas, although to be fair it does start with the Spanish settlement in the land which would be called the U.S., and they brought slaves with them. They even enslaved the American natives, i.e. people from the different tribes which made up America at that point. One point to note is that the U.S. got its independence on July 4, 1776, so all the people before that were called European settlers, for want of a better word. Some or many of these European settlers were convicts who were sent from the UK. But as shared in the article, that discussion will only happen when the U.S. itself is mature and open enough for it. Going back to the original point though, these European or American settlers bought a lot of slaves from Africa. The video also sheds light on some of the cruelty the Europeans or Americans inflicted on the slaves, men and women, in different ways. The most revelatory part, which I also forget many a time, is that because a lot of people were taken from Africa, and many of them men, it led to imbalances in African societies, not just in weddings but in economics in general. It also fostered theories that tried to paint the Africans as an inferior race, for otherwise how would Christianity square with its own good book saying 'All men are born equal'? 
That does in part explain why the African countries are still so far behind their European or American counterparts. But Africa can still be proud, as they are richer than us, yup, India. Sadly, I don't think America is ready to have that conversation anytime soon, if ever. And if it were to, it would have to outdo any truth and reconciliation commission the world has seen. A mere apology or two would not cut it. The problems of America are sadly not limited to just Africans, but extend to the natives of the land, for e.g. the Lakota people. In 1868, the U.S. signed a treaty stating it would give the land back to the Lakota people forever, but then the gold rush happened. In 2007, when the Lakota stated their proposal for independence, the U.S. through its force denied it. So much for the paper it was written on. Now from what I came to know over the years, the American natives are called 'First Nations'. Time and time again the American Govt. has tried to be or been foul towards them. Some of the examples include the Yucca Mountain nuclear waste repository. The same is and was the case with the Keystone pipeline, which is now dead. Now one could say that it is America's internal matter and I would fully agree, but when they speak of the internal matters of other countries, then we should have the same freedom. But this is not restricted to just internal matters, sadly. Since the 1950s, i.e. the advent of the Cold War, America's foreign policy has made regime changes all around the world. Sharing some of the examples from the Cold War:

Iran 1953
Guatemala 1954
Democratic Republic of the Congo 1960
Republic of Ghana 1966
Iraq 1968
Chile 1973
Argentina 1976
Afghanistan 1978-1980s
Grenada
Nicaragua 1981-1990
1. Destabilization through CIA assets
2. Arming the Contras
El Salvador 1980-92
Philippines 1986 Even after the Cold War ended the situation was analogous, meaning they still continued with their old behavior. After the end of the Cold War:

Guatemala 1993
Serbia 2000
Iraq 2003-
Afghanistan 2001 ongoing There is a helpful Wikipedia article titled 'History of CIA' which basically lists most of the covert regime changes done by the U.S. The above is merely a subset of the actions taken by the U.S. Now, are all the behaviors above those of a civilized nation? And if one cares to notice, one would notice that all the countries in the list which had a regime change had either oil or precious metals. So the U.S. is and was being what it accuses China of: a profiteer. But this isn't just a U.S.-China story, but more about the American abuse of its power. My own country, India, paid IMF loans till 1991, and we paid through the nose. There were economic sanctions against India. But then, this is again not just about the U.S. and India. Even with Europe, or more precisely Norway, which didn't want to side with America because their intelligence showed that no WMD were present in Iraq, the relationship still has issues.

Pandemic and the World So I do find this whole blaming of China by the U.S. quite theatrical and full of double-triple standards. Very early during the debates, it came to light that the Spanish Flu actually originated in Kansas, U.S.

What was also interesting, as I found in the Pentagon Papers much before the Watergate scandal came out, is that the U.S. had realized that China would be more of a competitor than Russia. And this was in the 1960s itself. This shows the level of intelligence that the Americans had. From what I can recollect from whatever I have read of that era, China was still mostly an agri-based economy. So how the U.S. was able to deduce that China would surpass other economies is beyond me even now. They surely must have known something that even we today do not. One of the other interesting observations and understandings I got while researching is that every year we transfer an average of 7,500 diseases from animals to humans, and that should be a scary figure. I think more than anything else, loss of habitat and the use of animals for everything from food to clothing to medicine is probably the reason we are getting such diseases. I am also sure there probably are and have been a similar number of disease transfers from humans to animals as well, but because of well-known biases and whatnot those studies are either not done or are under-funded. There are and have been reports of something like 850,000 undiscovered viruses which various mammals and birds carry. Also, I did find that most such pandemics are hard to identify; for e.g. SARS 1 took about 15 years, and for Ebola we don't know till date where it came from. Even HIV has questions for us. Hell, even why hearing goes away is a mystery to us. In all of this, we want to say China is culpable. And while China may or may not be culpable, only time will tell, this is surely the opportunity for all countries to spend on and build capacity in public health. 
Countries which take lessons from it and improve their public healthcare models will hopefully not suffer as much as those who are suffering and will continue to suffer now. To those who feel that habitat loss of animals is untrue, I would suggest they see Sherni, which depicts the human/animal conflict in all its brutality. I am gonna warn in advance that the ending is not nice, but what can you expect from a country in which forest cover has constantly declined and the Govt. itself is only interested in headline management?

The only positive story I can share from India is that finally the Modi Govt. has said it will do free vaccine immunization for everybody, although the pace is nothing to write home about. One additional thing they relaxed: instead of going to CoWIN or any other portal, people can simply walk in using their identity papers. Still, given the pace of vaccinations, it is going to take anywhere between 13-18 months or more, depending on the availability of vaccines.

Looking forward to any and all replies. I would also like to have a virtual keyboard, preferably xvkbd, as that is good enough for my use-case.

20 June 2021

Mike Gabriel: BBB Packaging for Debian, a short Heads-Up

Over the past days, I have received tons of positive feedback on my previous blog post about forming the Debian BBB Packaging Team [1]. Feedback arrived via mail, IRC, [matrix] and Mastodon. Awesome. Thanks for sharing your thoughts, folks... Therefore, here comes a short ... Heads-Up on the current Ongoings ... around packaging BigBlueButton for Debian: Credits light+love
Mike Gabriel

[1] https://sunweavers.net/blog/node/133
[2] https://bigbluebutton.org/event-page/
[3] https://docs.google.com/document/d/1kpYJxYFVuWhB84bB73kmAQoGIS59ari1_hn2...

Julian Andres Klode: Migrating away from apt-key

This is an edited copy of an email I sent to provide guidance to users of apt-key as to how to handle things in a post apt-key world. The manual page already provides all you need to know for replacing apt-key add usage:
Note: Instead of using this command a keyring should be placed directly in the /etc/apt/trusted.gpg.d/ directory with a descriptive name and either gpg or asc as file extension
So it's kind of surprising people need step-by-step instructions for how to copy/download a file into a directory. I'll also discuss the alternative security snakeoil approach with signed-by that's become popular. Maybe we should not have added signed-by; people seem to forget that debs still run maintainer scripts as root. Aside from this email, Debian users should look into extrepo, which manages curated external repositories for you.

Direct translation Assume you currently have:
wget -qO- https://myrepo.example/myrepo.asc | sudo apt-key add -
To translate this directly for bionic and newer, you can use:
sudo wget -qO /etc/apt/trusted.gpg.d/myrepo.asc https://myrepo.example/myrepo.asc
or to avoid downloading as root:
wget -qO- https://myrepo.example/myrepo.asc | sudo tee -a /etc/apt/trusted.gpg.d/myrepo.asc
Older (and all) releases only support unarmored files with an extension .gpg. If you care about them, provide one, and use
sudo wget -qO /etc/apt/trusted.gpg.d/myrepo.gpg https://myrepo.example/myrepo.gpg
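If you publish a repository, producing the unarmored .gpg next to the .asc is a one-time step on your side (a sketch; it assumes gpg is available on the publishing machine, not on users' systems):

```shell
# Convert the ASCII-armored key to the binary OpenPGP format once,
# then serve both files so every apt release can consume one of them:
gpg --dearmor < myrepo.asc > myrepo.gpg
```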
Some people will tell you to download the .asc and pipe it to gpg --dearmor, but gpg might not be installed, so really, just offer a .gpg one instead, which is supported on all systems. wget might not be available everywhere, so you can use apt-helper:
sudo /usr/lib/apt/apt-helper download-file https://myrepo.example/myrepo.asc /etc/apt/trusted.gpg.d/myrepo.asc
or, to avoid downloading as root:
/usr/lib/apt/apt-helper download-file https://myrepo.example/myrepo.asc /tmp/myrepo.asc && sudo mv /tmp/myrepo.asc /etc/apt/trusted.gpg.d

Pretending to be safer by using signed-by People say it's good practice to not use trusted.gpg.d and instead install the file elsewhere, then refer to it from the sources.list entry by using signed-by=<path to the file>. So this looks a lot safer, because now your key can't sign other unrelated repositories. In practice, the security increase is minimal, since package maintainer scripts run as root anyway. But I guess it's better for publicity :) As an example, here are the instructions to install signal-desktop from signal.org. As mentioned, the gpg --dearmor use in there is not a good idea, and I'd personally not tell people to modify /usr as it's supposed to be managed by the package manager, but we don't have an /etc/apt/keyrings or similar at the moment; it's fine though if the keyring is installed by the package. You can also just add the file there as a starting point, and then install a keyring package overriding it (pretend there is a signal-desktop-keyring package below that would override the .gpg we added).
# NOTE: These instructions only work for 64 bit Debian-based
# Linux distributions such as Ubuntu, Mint etc.
# 1. Install our official public software signing key
wget -O- https://updates.signal.org/desktop/apt/keys.asc | gpg --dearmor > signal-desktop-keyring.gpg
cat signal-desktop-keyring.gpg | sudo tee -a /usr/share/keyrings/signal-desktop-keyring.gpg > /dev/null
# 2. Add our repository to your list of repositories
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/signal-desktop-keyring.gpg] https://updates.signal.org/desktop/apt xenial main' | \
  sudo tee -a /etc/apt/sources.list.d/signal-xenial.list
# 3. Update your package database and install signal
sudo apt update && sudo apt install signal-desktop
I do wonder why they do wget | gpg --dearmor, pipe that into a file and then cat | sudo tee it, instead of having it all in one pipeline. Maybe they want nicer progress reporting.
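For comparison, a hedged sketch of what the same key installation could look like as a single pipeline (same URL and path as Signal's instructions above):

```shell
# Download, dearmor, and install the keyring in one go; tee writes the
# file as root while keeping the download and dearmoring unprivileged:
wget -qO- https://updates.signal.org/desktop/apt/keys.asc \
  | gpg --dearmor \
  | sudo tee /usr/share/keyrings/signal-desktop-keyring.gpg > /dev/null
```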

Scenario-specific guidance We have three scenarios: For system image building, shipping the key in /etc/apt/trusted.gpg.d seems reasonable to me; you are the vendor, sort of, so it can be globally trusted. Chrome-style debs and repository config debs: If you ship a deb, embedding the sources.list.d snippet (calling it $myrepo.list) and shipping a $myrepo.gpg in /usr/share/keyrings is the best approach. Whether you ship that in product debs aka vscode/chromium, or provide a repository configuration deb (let's call it myrepo-repo.deb) and then tell people to run apt update followed by apt install <package inside the repo>, depends on how many packages are in the repo, I guess. Manual instructions (signal style): The third case, where you tell people to run wget themselves, I find tricky. As we see with signal, just stuffing keyring files into /usr/share/keyrings is popular, despite /usr being supposed to be managed by the package manager. We don't have another dir inside /etc (or /usr/local), so it's hard to suggest something else. There's no significant benefit from actually using signed-by, so it's kind of extra work for little gain, though.

Addendum: Future work This part is new, just for this blog post. Let's look at upcoming changes and how they make things easier.

Bundled .sources files Assuming I get my merge request merged, the next version of APT (2.4/2.3.something) will do away with all the complexity and allow you to embed the key directly into a deb822 .sources file (which have been available for some time now):
Types: deb
URIs: https://myrepo.example/ https://myotherrepo.example/
Suites: stable not-so-stable
Components: main
Signed-By:
 -----BEGIN PGP PUBLIC KEY BLOCK-----
 .
 mDMEYCQjIxYJKwYBBAHaRw8BAQdAD/P5Nvvnvk66SxBBHDbhRml9ORg1WV5CvzKY
 CuMfoIS0BmFiY2RlZoiQBBMWCgA4FiEErCIG1VhKWMWo2yfAREZd5NfO31cFAmAk
 IyMCGyMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQREZd5NfO31fbOwD6ArzS
 dM0Dkd5h2Ujy1b6KcAaVW9FOa5UNfJ9FFBtjLQEBAJ7UyWD3dZzhvlaAwunsk7DG
 3bHcln8DMpIJVXht78sL
 =IE0r
 -----END PGP PUBLIC KEY BLOCK-----
Then you can just provide a .sources file to users, they place it into sources.list.d, and everything magically works. Probably we'll add a nice apt add-source command for it, I guess. Well, python-apt's aptsources package still does not support deb822 sources, and never will; we'll need an aptsources2 for that for backwards-compatibility reasons, and then port software-properties and other users to it.
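Deployment would then reduce to copying one file; a minimal sketch (myrepo.sources is a hypothetical name, and the Signed-By key block is elided here, but would be embedded as in the example above):

```shell
# Write a self-contained deb822 source stanza (Signed-By omitted here
# for brevity; it would carry the armored key as shown above):
cat > myrepo.sources <<'EOF'
Types: deb
URIs: https://myrepo.example/
Suites: stable
Components: main
EOF

# Install it and refresh the package lists:
sudo cp myrepo.sources /etc/apt/sources.list.d/
sudo apt update
```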

OpenPGP vs aptsign We do have a better, tighter replacement for gpg in the works which uses Ed25519 keys to sign Release files. It's temporarily named aptsign, but it's a generic signer for single-section deb822 files, similar to signify/minisign. We believe that this solves the security nightmare that our OpenPGP integration is, while reducing complexity at the same time. Keys are much shorter, so the bundled sources file above will look much nicer.

19 June 2021

Joachim Breitner: Leaving DFINITY

Last Thursday was my last working day at DFINITY. There are various reasons why I felt that after almost three years the DFINITY Foundation isn't quite the right place for me anymore, and this plan has been in the making for a while. Primarily, there are personal pull factors that strongly suggest that I'll take a break from full-time employment, so I decided to see the launch of the Internet Computer through and then leave. DFINITY has hired some amazing people, and it was a great pleasure to work with them. I learned a lot (some Rust, a lot of Nix, and just how merciless Conway's law is), and I dare say I had the opportunity to do some good work, contributing my part to make the Internet Computer a reality. I am especially proud of the Interface Specification and the specification-driven design principles behind it. It even comes with a model reference implementation and acceptance test suite, and although we didn't quite get to do formalization, those familiar with the DeepSpec project will recognize some influence of their concept of 'deep specifications'. Besides that, there is of course my work on the Motoko programming language, where I built the backend, and the Candid interoperability layer, where I helped with the formalism, formulated a generic soundness criterion for Interface Description Languages in a higher-order setting, and formally verified it in Coq. Fortunately, all of this work is now Free Software or at least Open Source. With so much work poured into this project, I continue to care about it, and you'll see me post on the developer forum and hack on Motoko. As the Internet Computer becomes gradually more open, I hope I can be gradually more involved again. But even without me contributing full-time I am sure that DFINITY and the Internet Computer will do well; when I left there were still plenty of smart, capable and enthusiastic people forging ahead. So what's next? 
So far, I have rushed every professional transition in my life: When starting my PhD, when starting my postdoc, when starting my job at DFINITY, and every time I regretted it. So this time, I will take a proper break and will explore the world a bit (as far as that is possible given the pandemic). I will stresslessly contribute to various open source projects. I also hope to do more public outreach and teaching, writing more blog posts again, recording screencasts and giving talks and lectures. If you want to invite me to your user group/seminar/company/workshop, please let me know! Also, I might be up for small interesting projects in a while. Beyond these, I have no concrete plans and am looking forward to the inspiration I might get from hiking through the Scandinavian wilderness. If you happen to stumble across my tent, please stop for a tea.

15 June 2021

Raphaël Hertzog: Freexian's report about Debian Long Term Support, May 2021

A Debian LTS logo
Like each month, have a look at the work funded by Freexian's Debian LTS offering. Debian project funding In May, we again put aside 2100 EUR to fund Debian projects. No proposals for new projects were received, so we're looking forward to receiving more projects from various Debian teams! Please do not hesitate to submit a proposal if there is a project that could benefit from the funding! Learn more about the rationale behind this initiative in this article. Debian LTS contributors In May, 12 contributors have been paid to work on Debian LTS; their reports are available: Evolution of the situation In May we released 33 DLAs and mostly skipped our public IRC meeting at the end of the month. In June we'll have another team meeting using video, as outlined on our LTS meeting page.
Also, two months ago we announced that Holger would step back from his coordinator role, and today we are announcing that he is back for the time being, until a new coordinator is found.
Finally, we would like to remark once again that we are constantly looking for new contributors. Please contact Holger if you are interested! The security tracker currently lists 41 packages with a known CVE and the dla-needed.txt file has 21 packages needing an update. Thanks to our sponsors Sponsors that joined recently are in bold.

14 June 2021

François Marier: Self-hosting an Ikiwiki blog

8.5 years ago, I moved my blog to Ikiwiki and Branchable. It's now time for me to take the next step and host my blog on my own server. This is how I migrated from Branchable to my own Apache server.

Installing Ikiwiki dependencies Here are all of the extra Debian packages I had to install on my server:
apt install ikiwiki ikiwiki-hosting-common gcc libauthen-passphrase-perl libcgi-formbuilder-perl libcrypt-ssleay-perl libjson-xs-perl librpc-xml-perl python-docutils libxml-feed-perl libsearch-xapian-perl libmailtools-perl highlight-common xapian-omega
apt install --no-install-recommends ikiwiki-hosting-web libgravatar-url-perl libmail-sendmail-perl libcgi-session-perl
apt purge libnet-openid-consumer-perl
Then I enabled the CGI module in Apache:
a2enmod cgi
and un-commented the following in /etc/apache2/mods-available/mime.conf:
AddHandler cgi-script .cgi

Creating a separate user account Since Ikiwiki needs to regenerate my blog whenever a new article is pushed to the git repo or a comment is accepted, I created a restricted user account for it:
adduser blog
adduser blog sshuser
chsh -s /usr/bin/git-shell blog

git setup Thanks to Branchable storing blogs in git repositories, I was able to import my blog using a simple git clone in /home/blog (the srcdir):
git clone --bare git://feedingthecloud.branchable.com/ source.git
Note that the name of the directory (source.git) is important for the ikiwikihosting plugin to work. Then I pulled the .setup file out of the setup branch in that repo and put it in /home/blog/.ikiwiki/FeedingTheCloud.setup. After that, I deleted the setup branch and the origin remote from that clone:
git branch -d setup
git remote rm origin
Following the recommended git configuration, I created a working directory (the repository) for the blog user to modify the blog as needed:
cd /home/blog/
git clone /home/blog/source.git FeedingTheCloud
I added my own ssh public key to /home/blog/.ssh/authorized_keys so that I could push to the srcdir from my laptop. Finally, I generated a new ssh key without a passphrase:
ssh-keygen -t ed25519
and added it as deploy key to the GitHub repo which acts as a read-only mirror of my blog.

Ikiwiki config While I started with the Branchable setup file, I changed the following things in it:
adminemail: webmaster@fmarier.org
srcdir: /home/blog/FeedingTheCloud
destdir: /var/www/blog
url: https://feeding.cloud.geek.nz
cgiurl: https://feeding.cloud.geek.nz/blog.cgi
cgi_wrapper: /var/www/blog/blog.cgi
cgi_wrappermode: 675
add_plugins:
- goodstuff
- lockedit
- comments
- blogspam
- sidebar
- attachment
- favicon
- format
- highlight
- search
- theme
- moderatedcomments
- flattr
- calendar
- headinganchors
- notifyemail
- anonok
- autoindex
- date
- relativedate
- htmlbalance
- pagestats
- sortnaturally
- ikiwikihosting
- gitpush
- emailauth
disable_plugins:
- brokenlinks
- fortune
- more
- openid
- orphans
- passwordauth
- progress
- recentchanges
- repolist
- toggle
- txt
sslcookie: 1
cookiejar:
  file: /home/blog/.ikiwiki/cookies
useragent: ikiwiki
git_wrapper: /home/blog/source.git/hooks/post-update
urlalias:
- http://feeds.cloud.geek.nz/
- http://www.feeding.cloud.geek.nz/
owner: francois@fmarier.org
hostname: feeding.cloud.geek.nz
emailauth_sender: login@fmarier.org
allowed_attachments: admin()
Then I created the destdir:
mkdir /var/www/blog
chown blog:blog /var/www/blog
and generated the initial copy of the blog as the blog user:
ikiwiki --setup .ikiwiki/FeedingTheCloud.setup --wrappers --rebuild
One thing that failed to generate properly was the tag cloud (from the pagestats plugin). I have not been able to figure out why it fails to generate any output when run this way, but if I push to the repo and let the git hook handle the rebuilding of the wiki, the tag cloud is generated correctly. Consequently, fixing this is not high on my list of priorities, but if you happen to know what the problem is, please reach out.
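As a workaround, the hook-driven rebuild can be triggered from the working directory without a content change (a sketch, assuming the srcdir/repository layout described above):

```shell
# An empty commit is enough to fire the post-update hook in source.git,
# which rebuilds the wiki with a working tag cloud:
cd /home/blog/FeedingTheCloud
git commit --allow-empty -m "Trigger rebuild"
git push
```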

Apache config Here's the Apache config I put in /etc/apache2/sites-available/blog.conf:
<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz
    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem
    Header set Strict-Transport-Security: "max-age=63072000; includeSubDomains; preload"
    Include /etc/fmarier-org/blog-common
</VirtualHost>
<VirtualHost *:443>
    ServerName www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz
    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem
    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>
<VirtualHost *:80>
    ServerName feeding.cloud.geek.nz
    ServerAlias www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz
    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>
and the common config I put in /etc/fmarier-org/blog-common:
ServerAdmin webmaster@fmarier.org
DocumentRoot /var/www/blog
LogLevel core:info
CustomLog ${APACHE_LOG_DIR}/blog-access.log combined
ErrorLog ${APACHE_LOG_DIR}/blog-error.log
AddType application/rss+xml .rss
<Location /blog.cgi>
        Options +ExecCGI
</Location>
before enabling all of this using:
a2ensite blog
apache2ctl configtest
systemctl restart apache2.service
The feeds.cloud.geek.nz domain used to be pointing to Feedburner and so I need to maintain it in order to avoid breaking RSS feeds from folks who added my blog to their reader a long time ago.

Server-side improvements Since I'm now in control of the server configuration, I was able to make several improvements to how my blog is served. First of all, I enabled the HTTP/2 and Brotli modules:
a2enmod http2
a2enmod brotli
and enabled Brotli compression by putting the following in /etc/apache2/conf-available/compression.conf:
<IfModule mod_brotli.c>
  <IfDefine !TRANSFER_COMPRESSION>
    Define TRANSFER_COMPRESSION BROTLI_COMPRESS
  </IfDefine>
</IfModule>
<IfModule mod_deflate.c>
  <IfDefine !TRANSFER_COMPRESSION>
    Define TRANSFER_COMPRESSION DEFLATE
  </IfDefine>
</IfModule>
<IfDefine TRANSFER_COMPRESSION>
  <IfModule mod_filter.c>
    AddOutputFilterByType ${TRANSFER_COMPRESSION} text/html text/plain text/xml text/css text/javascript
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/x-javascript application/javascript application/ecmascript
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/rss+xml
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/xml
  </IfModule>
</IfDefine>
and replacing /etc/apache2/mods-available/deflate.conf with the following:
# Moved to /etc/apache2/conf-available/compression.conf as per https://bugs.debian.org/972632
before enabling this new config:
a2enconf compression
Next, I made my blog available as a Tor onion service by putting the following in /etc/apache2/sites-available/blog.conf:
<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz
    ServerAlias xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion
    Header set Onion-Location "http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion%{REQUEST_URI}s"
    Header set alt-svc 'h2="xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion:443"; ma=315360000; persist=1'
    ...
</VirtualHost>
<VirtualHost *:80>
    ServerName xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion
    Include /etc/fmarier-org/blog-common
</VirtualHost>
Then I followed the Mozilla Observatory recommendations and enabled the following security headers:
Header set Content-Security-Policy: "default-src 'none'; report-uri https://fmarier.report-uri.com/r/d/csp/enforce ; style-src 'self' 'unsafe-inline' ; img-src 'self' https://seccdn.libravatar.org/ ; script-src https://feeding.cloud.geek.nz/ikiwiki/ https://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ 'unsafe-inline' 'sha256-pA8FbKo4pYLWPDH2YMPqcPMBzbjH/RYj0HlNAHYoYT0=' 'sha256-Kn5E/7OLXYSq+EKMhEBGJMyU6bREA9E8Av9FjqbpGKk=' 'sha256-/BTNlczeBxXOoPvhwvE1ftmxwg9z+WIBJtpk3qe7Pqo=' ; base-uri 'self'; form-action 'self' ; frame-ancestors 'self'"
Header set X-Frame-Options: "SAMEORIGIN"
Header set Referrer-Policy: "same-origin"
Header set X-Content-Type-Options: "nosniff"
Note that the Mozilla Observatory is mistakenly identifying HTTP onion services as insecure, so you can ignore that failure. I also used the Mozilla TLS config generator to improve the TLS config for my server. Then I added security.txt and gpc.json to the root of my git repo and then added the following aliases to put these files in the right place:
Alias /.well-known/gpc.json /var/www/blog/gpc.json
Alias /.well-known/security.txt /var/www/blog/security.txt
I also followed these instructions to create a sitemap for my blog with the following alias:
Alias /sitemap.xml /var/www/blog/sitemap/index.rss
Finally, I simplified a few error pages to save bandwidth:
ErrorDocument 301 " "
ErrorDocument 302 " "
ErrorDocument 404 "Not Found"

Monitoring 404s Another advantage of running my own web server is that I can monitor the 404s easily using logcheck by putting the following in /etc/logcheck/logcheck.logfiles:
/var/log/apache2/blog-error.log 
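To get a quick summary of which missing paths are actually being requested, the "File does not exist" errors in that log can be tallied with standard tools. A sketch (the default path matches the logcheck entry above):

```shell
# top_404s: tally the most frequently requested missing paths in an
# Apache error log, most common first.
top_404s() {
    grep -o 'File does not exist: .*' "${1:-/var/log/apache2/blog-error.log}" \
        | sed 's/^File does not exist: //' \
        | sort | uniq -c | sort -rn | head
}
```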
Based on that, I added a few redirects to point bots and users to the location of my RSS feed:
Redirect permanent /atom /index.atom
Redirect permanent /comments.rss /comments/index.rss
Redirect permanent /comments.atom /comments/index.atom
Redirect permanent /FeedingTheCloud /index.rss
Redirect permanent /feed /index.rss
Redirect permanent /feed/ /index.rss
Redirect permanent /feeds/posts/default /index.rss
Redirect permanent /rss /index.rss
Redirect permanent /rss/ /index.rss
and to tell them to stop trying to fetch obsolete resources:
Redirect gone /~ff/FeedingTheCloud
Redirect gone /gittip_button.png
Redirect gone /ikiwiki.cgi
I also used these 404s to discover a few old Feedburner URLs that I could redirect to the right place using archive.org:
Redirect permanent /feeds/1572545745827565861/comments/default /posts/watch-all-of-your-logs-using-monkeytail/comments.atom
Redirect permanent /feeds/1582328597404141220/comments/default /posts/news-feeds-rssatom-for-mythtvorg-and/comments.atom
...
Redirect permanent /feeds/8490436852808833136/comments/default /posts/recovering-lost-git-commits/comments.atom
Redirect permanent /feeds/963415010433858516/comments/default /posts/debugging-openwrt-routers-by-shipping/comments.atom
I also put the following robots.txt in the git repo in order to stop a bunch of authentication errors coming from crawlers:
User-agent: *
Disallow: /blog.cgi
Disallow: /ikiwiki.cgi

Future improvements There are a few things I'd like to improve on my current setup. The first one is to remove the ikiwikihosting and gitpush plugins and replace them with a small script which would simply git push to the read-only GitHub mirror. Then I could uninstall the ikiwiki-hosting-common and ikiwiki-hosting-web packages, since that's all I use them for. Next, I would like to have proper support for signed git pushes. At the moment, I have the following in /home/blog/source.git/config:
[receive]
    advertisePushOptions = true
    certNonceSeed = "(random string)"
but I'd like to also reject unsigned pushes. While my blog now has a CSP policy which doesn't rely on unsafe-inline for scripts, it does rely on unsafe-inline for stylesheets. I tried to remove this but the actual calls to allow seemed to be located deep within jQuery and so I gave up. Update: now fixed. Finally, I'd like to figure out a good way to deal with articles which don't currently have comments. At the moment, if you try to subscribe to their comment feed, it returns a 404. For example:
[Sun Jun 06 17:43:12.336350 2021] [core:info] [pid 30591:tid 140253834704640] [client 66.249.66.70:57381] AH00128: File does not exist: /var/www/blog/posts/using-iptables-with-network-manager/comments.atom
This is obviously not ideal since many feed readers will refuse to add a feed which is currently not found even though it could become real in the future. If you know of a way to fix this, please let me know.
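One possible workaround (a sketch I haven't tested, and not necessarily what the author would choose): have Apache fall back to a static, valid-but-empty Atom document whenever a comments.atom file doesn't exist yet, so readers see an empty feed instead of a 404. The /empty-comments.atom path is a made-up placeholder for a file you would create yourself:

```apache
# If the requested comments feed doesn't exist on disk, serve a static
# empty Atom document instead of returning 404.
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^/?posts/.+/comments\.atom$ /empty-comments.atom [L]
```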

12 June 2021

Norbert Preining: Future of Cinnamon in Debian

OK, this is not an easy post. I have been maintaining Cinnamon in Debian for quite some time, since around the time version 4 came out. The soon (hahaha) to be released Bullseye will carry the last release of the 4-track, but version 5 is already waiting. After Bullseye, the future of Cinnamon in Debian currently looks bleak. Since my switch to KDE/Plasma, I haven't used Cinnamon in months. Only occasionally did I test new releases, but I never gave them a real-world test. Having left Gnome3 for its complete lack of usability for pro-users, I escaped to Cinnamon and found a good home there for quite some time: modern technology, but conservative user interface changes. For a long time I hadn't even contemplated using KDE, having been burned during the bad days of KDE3/4, when bloat-as-bloat-can-be was the best description. What a revelation it was that KDE/Plasma is more lightweight, faster, responsive, integrated, customizable, all in all simply great. Since my switch to KDE/Plasma I don't think I have missed anything from the Gnome3 or Cinnamon world for a second. And that means I will most probably NOT package Cinnamon 5, nor do any real packaging work on Cinnamon for Debian in the future. Of course, I will try to keep maintaining the current set of packages for Bullseye, but for the next release, I think it is time that someone new steps in. Cinnamon packaging taught me a lot about how to deal with multiple related packages, which is of great use in the KDE packaging world. If someone steps forward, I will surely be around for support and help, but as long as nobody takes up the banner, it will mean the end of Cinnamon in Debian. Please contact me if you are interested!

9 June 2021

Thorsten Alteholz: My Debian Activities in May 2021

FTP master This month I accepted 85 and rejected 6 packages. The overall number of packages that got accepted was only 88. Yeah, Debian is frozen, but hopefully it will unfreeze soon. Debian LTS This was my eighty-third month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my overall workload was 29.75h. During that time I did LTS and normal security uploads of: I also made some progress with gpac and struggled with dozens of issues there. Last but not least I did some days of frontdesk duties, which for whatever reason were rather time-consuming this month. Debian ELTS This month was the thirty-fifth ELTS month. During my allocated time I uploaded: I also made some progress with python3.4. Last but not least I did some days of frontdesk duties. Other stuff On my neverending golang challenge I again uploaded some packages, either for NEW or as source uploads. Last but not least I adopted gnucobol.

8 June 2021

Norbert Preining: KDE/Plasma 5.22 for Debian

Today, KDE released version 5.22 of the Plasma desktop with the usual long list of updates and improvements. And packages for Debian are ready for consumption! Ah, and yes, KDE Gear 21.04 is also ready! As usual, I am providing packages via my OBS builds. If you have used my packages till now, then you only need to change the plasma521 line to read plasma522. Just for your convenience, if you want the full set of stuff, here are the apt-source entries:
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma522/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2104/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Unstable/ ./
and for testing, the same with Debian_Unstable replaced with Debian_Testing. As usual, don't forget that you need to import my OBS gpg key to make these repos work! The sharp-eyed might also have noticed the apps2104 line: yes, the newly renamed KDE Gear suite of packages is also available in my OBS builds (and in Debian/experimental). Uploads to Debian Currently, the frameworks and most of the KDE Gear (Apps) 21.04 packages are in Debian/experimental. I will upload Plasma 5.22 to experimental as soon as NEW processing of two packages is finished (which might take another few months). After the release of bullseye, all the current versions will be uploaded to unstable as usual. Enjoy the new Plasma!
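The plasma521 to plasma522 change mentioned above is a one-line edit; as a sketch (the sources file path in the usage comment is an assumption):

```shell
# bump_plasma: switch an apt sources file from the Plasma 5.21 repo to
# the 5.22 one, in place.
# Usage: bump_plasma /etc/apt/sources.list.d/obs-kde.list
bump_plasma() {
    sed -i 's/plasma521/plasma522/g' "$1"
}
```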

7 June 2021

Mike Gabriel: UBports: Packaging of Lomiri Operating Environment for Debian (part 05)

Before and during FOSDEM 2020, I agreed with the people (developers, supporters, managers) of the UBports Foundation to package the Unity8 Operating Environment for Debian. Since 27th Feb 2020, Unity8 has been known as Lomiri. Recent Uploads to Debian related to Lomiri (Feb - May 2021) Over the past 4 months I attended 14 of the weekly scheduled UBports development sync sessions and worked on the following bits and pieces regarding Lomiri in Debian: The largest amount of work (and time) went into getting lomiri-ui-toolkit ready for upload. That code component is an absolutely massive beast and deeply intertwined with Qt5 (and unit tests fail with every new warning a new Qt5.x introduces). This bit of work I couldn't do alone (see the "Credits" section below). The next projects / packages ahead are some smaller packages (content-hub, gmenuharness, etc.) before we finally come to lomiri (i.e. the main bit of the Lomiri Operating Environment) itself. Credits Many big thanks go to everyone on the UBports project, but especially to Ratchanan Srirattanamet, who lived inside of lomiri-ui-toolkit for more than two weeks, it seemed. Also, thanks to Florian Leeber for being my point of contact for topics regarding my cooperation with the UBports Foundation. Packaging Status The current packaging status of Lomiri related packages in Debian can be viewed at:
https://qa.debian.org/developer.php?login=team%2Bubports%40tracker.debia...
light+love
Mike Gabriel (aka sunweaver)

6 June 2021

Iustin Pop: Goodbye Travis, hello GitHub Actions

My very cyclical open-source work For some reason, I only manage to do coding at home every few months - mostly 6 months apart, so twice a year (even worse than my blogging frequency :P). As such, I missed the whole discussion about travis-ci (the .org version) going away, etc. So when I finally did some work on corydalis a few weeks ago and had a travis build failure (restoring a large cache simply timed out; either travis or S3 had some hiccup - it cleared by itself a day later), I opened the travis-ci interface to see the scary banner ("travis-ci.org is shutting down") and asked myself what's happening. The deadline was in less than a week, even. Long story short, Travis was a good home for many years, but they were bought and are making significant changes to their OSS support, so it's time to move on. I had anyway wanted to learn GitHub Actions for a while (ahem, free time intervened), so this was a good (forced) opportunity.

Proper composable CI The advantage of the Travis infrastructure was that the build configuration was really simple. It had very few primitives: pre-steps (before_install), install steps, the actual things to do to test, post-steps (after_success) and a few other small helpers (caching, apt packages, etc.). This made it really easy to just pick it up and write a config, plus it had the advantage of letting you test configs from the web UI without needing to push. This simplicity was unfortunately also its significant limiter: the way to do complex things in steps was simply to add more shell commands. GitHub Actions, together with its marketplace, changes this entirely. There are no built-in actions; the language just defines the build/job/step hierarchy, and one glues together whatever steps one wants. This has the disadvantage that even checking out the code needs to be explicitly written in all workflows (so, boilerplate, if you don't need customisation), but it opens up a huge opportunity for composition, since it allows people to publish actions (steps) that you just import, encapsulating all the work. So, after learning how to write a moderately complicated workflow (complicated as in multiple Python versions, some of them needing a different OS version, and multi-OS), it was straightforward to port this to all my projects - just somewhat tedious. I've now shut down all builds on Travis; I just can't find a way to delete my account.
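As an illustration of that build/job/step hierarchy (not the author's actual config), a minimal multi-OS, multi-Python matrix workflow with the explicit checkout boilerplate looks roughly like this; the file name, version list and test commands are placeholders:

```yaml
# .github/workflows/ci.yml -- illustrative sketch only
name: CI
on: [push, pull_request]
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest]
        python-version: ['3.6', '3.9']
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v2        # the explicit checkout every workflow needs
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -r requirements.txt   # project-specific placeholder
      - run: pytest
```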

Better multi-OS, worse (missing) multi-arch In theory, Travis supports Linux, MacOS, FreeBSD and Windows, but I've found that support for non-Linux is not quite as good. Maybe I missed things, but multi-version Python builds on MacOS were not as nicely supported as on Linux; Windows support is quite early and very limited; and I haven't tested FreeBSD. GitHub is more restrictive - Linux, MacOS and Windows - but I found support for MacOS and Windows better for my use cases. If your use case is testing multiple MacOS versions, Travis wins; if it's more varied languages/etc. on the single available MacOS version, GitHub works better. On the multi-arch side, Travis wins hands-down. Four different native architectures, and enabling one more is just adding another key to the arch list. With GitHub, if I understand right, you either have to use docker+emulation, or use self-hosted runners. So here it really matters what is more important to you. Maybe in the future GitHub will support more arches, but right now, Travis wins for this use-case.

Summary For my specific use-case, GitHub Actions is a better fit right now. The marketplace has advantages (I'll explain better in a future post), the actions are a very nice way to encapsulate functionality, and it's still available/free (up to a limit) for open source projects. I don't know what the future of Travis for OSS will be, but all I have heard so far is very concerning. However, I'll still miss a few things. For example, an overall dashboard for all my projects, like this one:
Travis dashboard
I couldn't find any such thing on GitHub, so I just use my set of badges. Then there is cache management. Travis allows you to clear the cache, and it auto-updates the cache. GitHub caches are immutable once built, so you have to:
  • watch if changed dependencies/dependency chains result in things that are no longer cached;
  • if so, need to manually bump the cache key, resulting in a commit for purely administrative purposes.
For languages where you have a clean full chain of dependencies recorded (e.g. node's package-lock.json, stack's stack.yaml.lock), this is trivial to achieve, but it gets complicated if you add OS dependencies, languages which don't record all this, etc. Hmm, maybe I should just embed the year/month in the cache name - cheating, but automated cheating. With all said and done, I think GHA is much less refined, but with more potential. Plus, the pace of innovation on the Travis side was quite slow (likely money problems, hence them being bought, etc.). So one TODO done: learn GitHub Actions, even if a bit unplanned.
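The year/month idea for cache names can be done directly in the workflow. A sketch using actions/cache; the cache path and key names are illustrative, and `::set-output` was the step-output mechanism current at the time of writing:

```yaml
# Month-stamped cache key: forces a fresh cache at most once a month,
# without a commit for purely administrative purposes.
- id: date
  run: echo "::set-output name=ym::$(date +%Y-%m)"
- uses: actions/cache@v2
  with:
    path: ~/.stack
    key: stack-${{ steps.date.outputs.ym }}-${{ hashFiles('stack.yaml.lock') }}
```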

3 June 2021

Jonathan McDowell: Digging into Kubernetes containers

Having built a single node Kubernetes cluster and had a poke at what it's doing in terms of networking, the next thing I want to do is figure out what it's doing in terms of containers. You might argue this should have come before networking, but to me the networking piece is more non-standard than the container piece, so I wanted to understand that first. Let's start with a process listing on the host. ps faxno user,stat,cmd There are a number of processes from the host kernel we don't care about:
kernel processes
    USER STAT CMD
       0 S    [kthreadd]
       0 I<    \_ [rcu_gp]
       0 I<    \_ [rcu_par_gp]
       0 I<    \_ [kworker/0:0H-events_highpri]
       0 I<    \_ [mm_percpu_wq]
       0 S     \_ [rcu_tasks_rude_]
       0 S     \_ [rcu_tasks_trace]
       0 S     \_ [ksoftirqd/0]
       0 I     \_ [rcu_sched]
       0 S     \_ [migration/0]
       0 S     \_ [cpuhp/0]
       0 S     \_ [cpuhp/1]
       0 S     \_ [migration/1]
       0 S     \_ [ksoftirqd/1]
       0 I<    \_ [kworker/1:0H-kblockd]
       0 S     \_ [cpuhp/2]
       0 S     \_ [migration/2]
       0 S     \_ [ksoftirqd/2]
       0 I<    \_ [kworker/2:0H-events_highpri]
       0 S     \_ [cpuhp/3]
       0 S     \_ [migration/3]
       0 S     \_ [ksoftirqd/3]
       0 I<    \_ [kworker/3:0H-kblockd]
       0 S     \_ [kdevtmpfs]
       0 I<    \_ [netns]
       0 S     \_ [kauditd]
       0 S     \_ [khungtaskd]
       0 S     \_ [oom_reaper]
       0 I<    \_ [writeback]
       0 S     \_ [kcompactd0]
       0 SN    \_ [ksmd]
       0 SN    \_ [khugepaged]
       0 I<    \_ [kintegrityd]
       0 I<    \_ [kblockd]
       0 I<    \_ [blkcg_punt_bio]
       0 I<    \_ [edac-poller]
       0 I<    \_ [devfreq_wq]
       0 I<    \_ [kworker/0:1H-kblockd]
       0 S     \_ [kswapd0]
       0 I<    \_ [kthrotld]
       0 I<    \_ [acpi_thermal_pm]
       0 I<    \_ [ipv6_addrconf]
       0 I<    \_ [kstrp]
       0 I<    \_ [zswap-shrink]
       0 I<    \_ [kworker/u9:0-hci0]
       0 I<    \_ [kworker/2:1H-kblockd]
       0 I<    \_ [ata_sff]
       0 I<    \_ [sdhci]
       0 S     \_ [irq/39-mmc0]
       0 I<    \_ [sdhci]
       0 S     \_ [irq/42-mmc1]
       0 S     \_ [scsi_eh_0]
       0 I<    \_ [scsi_tmf_0]
       0 S     \_ [scsi_eh_1]
       0 I<    \_ [scsi_tmf_1]
       0 I<    \_ [kworker/1:1H-kblockd]
       0 I<    \_ [kworker/3:1H-kblockd]
       0 S     \_ [jbd2/sda5-8]
       0 I<    \_ [ext4-rsv-conver]
       0 S     \_ [watchdogd]
       0 S     \_ [scsi_eh_2]
       0 I<    \_ [scsi_tmf_2]
       0 S     \_ [usb-storage]
       0 I<    \_ [cfg80211]
       0 S     \_ [irq/130-mei_me]
       0 I<    \_ [cryptd]
       0 I<    \_ [uas]
       0 S     \_ [irq/131-iwlwifi]
       0 S     \_ [card0-crtc0]
       0 S     \_ [card0-crtc1]
       0 S     \_ [card0-crtc2]
       0 I<    \_ [kworker/u9:2-hci0]
       0 I     \_ [kworker/3:0-events]
       0 I     \_ [kworker/2:0-events]
       0 I     \_ [kworker/1:0-events_power_efficient]
       0 I     \_ [kworker/3:2-events]
       0 I     \_ [kworker/1:1]
       0 I     \_ [kworker/u8:1-events_unbound]
       0 I     \_ [kworker/0:2-events]
       0 I     \_ [kworker/2:2]
       0 I     \_ [kworker/u8:0-events_unbound]
       0 I     \_ [kworker/0:1-events]
       0 I     \_ [kworker/0:0-events]
There are various basic host processes, including my SSH connections, and Docker. I note it's using containerd. We also see kubelet, the Kubernetes node agent.
host processes
    USER STAT CMD
       0 Ss   /sbin/init
       0 Ss   /lib/systemd/systemd-journald
       0 Ss   /lib/systemd/systemd-udevd
     101 Ssl  /lib/systemd/systemd-timesyncd
       0 Ssl  /sbin/dhclient -4 -v -i -pf /run/dhclient.enx00e04c6851de.pid -lf /var/lib/dhcp/dhclient.enx00e04c6851de.leases -I -df /var/lib/dhcp/dhclient6.enx00e04c6851de.leases enx00e04c6851de
       0 Ss   /usr/sbin/cron -f
     104 Ss   /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
       0 Ssl  /usr/sbin/dockerd -H fd://
       0 Ssl  /usr/sbin/rsyslogd -n -iNONE
       0 Ss   /usr/sbin/smartd -n
       0 Ss   /lib/systemd/systemd-logind
       0 Ssl  /usr/bin/containerd
       0 Ss+  /sbin/agetty -o -p -- \u --noclear tty1 linux
       0 Ss   sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
       0 Ss    \_ sshd: root@pts/1
       0 Ss        \_ -bash
       0 R+            \_ ps faxno user,stat,cmd
       0 Ss    \_ sshd: noodles [priv]
    1000 S         \_ sshd: noodles@pts/0
    1000 Ss+           \_ -bash
       0 Ss   /lib/systemd/systemd --user
       0 S     \_ (sd-pam)
    1000 Ss   /lib/systemd/systemd --user
    1000 S     \_ (sd-pam)
       0 Ssl  /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.4.1
And that just leaves a bunch of container related processes:
container processes
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id fd95c597ff3171ff110b7bf440229e76c5108d5d93be75ffeab54869df734413 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id c2ff2c50f0bc052feda2281741c4f37df7905e3b819294ec645148ae13c3fe1b -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 589c1545d9e0cdf8ea391745c54c8f4db49f5f437b1a2e448e7744b2c12f8856 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6f417fd8a8c573a2b8f792af08cdcd7ce663457f0f7218c8d55afa3732e6ee94 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id afa9798c9f663b21df8f38d9634469e6b4db0984124547cd472a7789c61ef752 -address /run/containerd/containerd.sock
       0 Ssl   \_ kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true --port=0
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4b3708b62f4d427690f5979848c59fce522dab6c62a9c53b806ffbaef3f88e62 -address /run/containerd/containerd.sock
       0 Ssl   \_ kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --port=0 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --use-service-account-credentials=true
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 89f35bf7a825eb97db7035d29aa475a3a1c8aaccda0860a46388a3a923cd10bc -address /run/containerd/containerd.sock
       0 Ssl   \_ kube-apiserver --advertise-address=192.168.53.147 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 2dabff6e4f59c96d931d95781d28314065b46d0e6f07f8c65dc52aa465f69456 -address /run/containerd/containerd.sock
       0 Ssl   \_ etcd --advertise-client-urls=https://192.168.53.147:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://192.168.53.147:2380 --initial-cluster=udon=https://192.168.53.147:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.53.147:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.53.147:2380 --name=udon --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 73fae81715b670255b66419a7959798b287be7bbb41e96f8b711fa529aa02f0d -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 26d92a720c560caaa5f8a0217bc98e486b1c032af6c7c5d75df508021d462878 -address /run/containerd/containerd.sock
       0 Ssl   \_ /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=udon
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7104f65b5d92a56a2df93514ed0a78cfd1090ca47b6ce4e0badc43be6c6c538e -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 48d735f7f44e3944851563f03f32c60811f81409e7378641404035dffd8c1eb4 -address /run/containerd/containerd.sock
       0 Ssl   \_ /usr/bin/weave-npc
       0 S<        \_ /usr/sbin/ulogd -v
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 36b418e69ae7076fe5a44d16cef223d8908016474cb65910f2fd54cca470566b -address /run/containerd/containerd.sock
       0 Ss    \_ /bin/sh /home/weave/launch.sh
       0 Sl        \_ /home/weave/weaver --port=6783 --datapath=datapath --name=12:82:8f:ed:c7:bf --http-addr=127.0.0.1:6784 --metrics-addr=0.0.0.0:6782 --docker-api= --no-dns --db-prefix=/weavedb/weave-net --ipalloc-range=192.168.0.0/24 --nickname=udon --ipalloc-init consensus=0 --conn-limit=200 --expect-npc --no-masq-local
       0 Sl        \_ /home/weave/kube-utils -run-reclaim-daemon -node-name=udon -peer-name=12:82:8f:ed:c7:bf -log-level=debug
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 534c0a698478599277482d97a137fab8ef4d62db8a8a5cf011b4bead28246f70 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 9ffd6b668ddfbf3c64c6783bc6f4f6cc9e92bfb16c83fb214c2cbb4044993bf0 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4a30785f91873a7e6a191e86928a789760a054e4fa6dcd7048a059b42cf19edf -address /run/containerd/containerd.sock
       0 Ssl   \_ /coredns -conf /etc/coredns/Corefile
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 649a507d45831aca1de5231b49afc8ff37d90add813e7ecd451d12eedd785b0c -address /run/containerd/containerd.sock
       0 Ssl   \_ /coredns -conf /etc/coredns/Corefile
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 62b369de8d8cece4d33ec9fda4d23a9718379a8df8b30173d68f20bff830fed2 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7cbb177bee18dbdeed21fb90e74378e2081436ad5bf116b36ad5077fe382df30 -address /run/containerd/containerd.sock
       0 Ss    \_ /bin/bash /usr/local/bin/run.sh
       0 S         \_ nginx: master process nginx -g daemon off;
   65534 S             \_ nginx: worker process
       0 Ss   /lib/systemd/systemd --user
       0 S     \_ (sd-pam)
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6669168db70db4e6c741e8a047942af06dd745fae4d594291d1d6e1077b05082 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id d5fa78fa31f11a4c5fb9fd2e853a00f0e60e414a7bce2e0d8fcd1f6ab2b30074 -address /run/containerd/containerd.sock
     101 Ss    \_ /usr/bin/dumb-init -- /nginx-ingress-controller --publish-service=ingress-nginx/ingress-nginx-controller --election-id=ingress-controller-leader --ingress-class=nginx --configmap=ingress-nginx/ingress-nginx-controller --validating-webhook=:8443 --validating-webhook-certificate=/usr/local/certificates/cert --validating-webhook-key=/usr/local/certificates/key
     101 Ssl       \_ /nginx-ingress-controller --publish-service=ingress-nginx/ingress-nginx-controller --election-id=ingress-controller-leader --ingress-class=nginx --configmap=ingress-nginx/ingress-nginx-controller --validating-webhook=:8443 --validating-webhook-certificate=/usr/local/certificates/cert --validating-webhook-key=/usr/local/certificates/key
     101 S             \_ nginx: master process /usr/local/nginx/sbin/nginx -c /etc/nginx/nginx.conf
     101 Sl                \_ nginx: worker process
     101 Sl                \_ nginx: worker process
     101 Sl                \_ nginx: worker process
     101 Sl                \_ nginx: worker process
     101 S                 \_ nginx: cache manager process
There's a lot going on there. Some bits are obvious; we can see the nginx ingress controller, our echoserver (the other nginx process hanging off /usr/local/bin/run.sh), and some things that look related to weave. The rest appears to be Kubernetes-related infrastructure. kube-scheduler, kube-controller-manager, kube-apiserver and kube-proxy all look like core Kubernetes bits. etcd is a distributed, reliable key-value store. coredns is a DNS server, with plugins for Kubernetes and etcd. What does Docker claim is happening?
docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED      STATUS      PORTS     NAMES
d5fa78fa31f1   k8s.gcr.io/ingress-nginx/controller   "/usr/bin/dumb-init  "   3 days ago   Up 3 days             k8s_controller_ingress-nginx-controller-5b74bc9868-bczdr_ingress-nginx_4d7d3d81-a769-4de9-a4fb-04763b7c1605_0
6669168db70d   k8s.gcr.io/pause:3.4.1                "/pause"                 3 days ago   Up 3 days             k8s_POD_ingress-nginx-controller-5b74bc9868-bczdr_ingress-nginx_4d7d3d81-a769-4de9-a4fb-04763b7c1605_0
7cbb177bee18   k8s.gcr.io/echoserver                 "/usr/local/bin/run. "   3 days ago   Up 3 days             k8s_echoserver_hello-node-59bffcc9fd-8hkgb_default_c7111c9e-7131-40e0-876d-be89d5ca1812_0
62b369de8d8c   k8s.gcr.io/pause:3.4.1                "/pause"                 3 days ago   Up 3 days             k8s_POD_hello-node-59bffcc9fd-8hkgb_default_c7111c9e-7131-40e0-876d-be89d5ca1812_0
649a507d4583   296a6d5035e2                          "/coredns -conf /etc "   4 days ago   Up 4 days             k8s_coredns_coredns-558bd4d5db-flrfq_kube-system_f8b2b52e-6673-4966-82b1-3fbe052a0297_0
4a30785f9187   296a6d5035e2                          "/coredns -conf /etc "   4 days ago   Up 4 days             k8s_coredns_coredns-558bd4d5db-4nvrg_kube-system_1976f4d6-647c-45ca-b268-95f071f064d5_0
9ffd6b668ddf   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_coredns-558bd4d5db-flrfq_kube-system_f8b2b52e-6673-4966-82b1-3fbe052a0297_0
534c0a698478   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_coredns-558bd4d5db-4nvrg_kube-system_1976f4d6-647c-45ca-b268-95f071f064d5_0
36b418e69ae7   df29c0a4002c                          "/home/weave/launch. "   4 days ago   Up 4 days             k8s_weave_weave-net-mchmg_kube-system_b9af9615-8cde-4a18-8555-6da1f51b7136_1
48d735f7f44e   weaveworks/weave-npc                  "/usr/bin/launch.sh"     4 days ago   Up 4 days             k8s_weave-npc_weave-net-mchmg_kube-system_b9af9615-8cde-4a18-8555-6da1f51b7136_0
7104f65b5d92   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_weave-net-mchmg_kube-system_b9af9615-8cde-4a18-8555-6da1f51b7136_0
26d92a720c56   4359e752b596                          "/usr/local/bin/kube "   4 days ago   Up 4 days             k8s_kube-proxy_kube-proxy-6d8kg_kube-system_8bf2d7ec-4850-427f-860f-465a9ff84841_0
73fae81715b6   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-proxy-6d8kg_kube-system_8bf2d7ec-4850-427f-860f-465a9ff84841_0
89f35bf7a825   771ffcf9ca63                          "kube-apiserver --ad "   4 days ago   Up 4 days             k8s_kube-apiserver_kube-apiserver-udon_kube-system_1af8c5f362b7b02269f4d244cb0e6fbf_0
afa9798c9f66   a4183b88f6e6                          "kube-scheduler --au "   4 days ago   Up 4 days             k8s_kube-scheduler_kube-scheduler-udon_kube-system_629dc49dfd9f7446eb681f1dcffe6d74_0
2dabff6e4f59   0369cf4303ff                          "etcd --advertise-cl "   4 days ago   Up 4 days             k8s_etcd_etcd-udon_kube-system_c2a3008c1d9895f171cd394e38656ea0_0
4b3708b62f4d   e16544fd47b0                          "kube-controller-man "   4 days ago   Up 4 days             k8s_kube-controller-manager_kube-controller-manager-udon_kube-system_1d1b9018c3c6e7aa2e803c6e9ccd2eab_0
fd95c597ff31   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-scheduler-udon_kube-system_629dc49dfd9f7446eb681f1dcffe6d74_0
589c1545d9e0   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-controller-manager-udon_kube-system_1d1b9018c3c6e7aa2e803c6e9ccd2eab_0
6f417fd8a8c5   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-apiserver-udon_kube-system_1af8c5f362b7b02269f4d244cb0e6fbf_0
c2ff2c50f0bc   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_etcd-udon_kube-system_c2a3008c1d9895f171cd394e38656ea0_0
Ok, that's interesting. Before we dig into it, what does Kubernetes say? (I've trimmed the RESTARTS + AGE columns to make things fit a bit better here; they weren't interesting.)
noodles@udon:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                        READY   STATUS
default         hello-node-59bffcc9fd-8hkgb                 1/1     Running
ingress-nginx   ingress-nginx-admission-create-8jgkt        0/1     Completed
ingress-nginx   ingress-nginx-admission-patch-jdq4t         0/1     Completed
ingress-nginx   ingress-nginx-controller-5b74bc9868-bczdr   1/1     Running
kube-system     coredns-558bd4d5db-4nvrg                    1/1     Running
kube-system     coredns-558bd4d5db-flrfq                    1/1     Running
kube-system     etcd-udon                                   1/1     Running
kube-system     kube-apiserver-udon                         1/1     Running
kube-system     kube-controller-manager-udon                1/1     Running
kube-system     kube-proxy-6d8kg                            1/1     Running
kube-system     kube-scheduler-udon                         1/1     Running
kube-system     weave-net-mchmg                             2/2     Running
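One mechanical note before reconciling the two listings: the Docker instances that make up a single pod are tied together by shared kernel namespaces, and identical /proc/<pid>/ns links are the telltale. A minimal, self-contained sketch of that check (using a shell and a child process, which trivially share namespaces; against a real pod you would compare the PIDs of two of its containers instead):

```shell
# Processes that share a namespace have identical /proc/<pid>/ns/* links;
# a child shell inherits its parent's namespaces, so the links match.
parent=$(readlink /proc/$$/ns/uts)
child=$(sh -c 'readlink /proc/$$/ns/uts')
[ "$parent" = "$child" ] && echo "same uts namespace"
```

The link values look like uts:[4026531838]; matching values mean the processes are in the same namespace.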
So there are a lot more Docker instances running than Kubernetes pods. What's happening there? Well, it turns out that Kubernetes builds pods from multiple different Docker instances. If you think of a traditional container as being comprised of a set of namespaces (process, network, hostname etc.) and a cgroup, then a pod is made up of the namespaces, and each Docker instance within that pod has its own cgroup. Ian Lewis has a much deeper discussion in What are Kubernetes Pods Anyway?, but my takeaway is that a pod is a set of sort-of containers that are coupled. We can see this more clearly if we ask systemd for the cgroup breakdown:
systemd-cgls
Control group /:
-.slice
 user.slice 
   user-0.slice 
     session-29.scope 
        515899 sshd: root@pts/1
        515913 -bash
       3519743 systemd-cgls
       3519744 cat
     user@0.service  
       init.scope 
         515902 /lib/systemd/systemd --user
         515903 (sd-pam)
   user-1000.slice 
     user@1000.service  
       init.scope 
         2564011 /lib/systemd/systemd --user
         2564012 (sd-pam)
     session-110.scope 
       2564007 sshd: noodles [priv]
       2564040 sshd: noodles@pts/0
       2564041 -bash
 init.scope 
   1 /sbin/init
 system.slice 
   containerd.service  
       21383 /usr/bin/containerd-shim-runc-v2 -namespace moby -id fd95c597ff31 
       21408 /usr/bin/containerd-shim-runc-v2 -namespace moby -id c2ff2c50f0bc 
       21432 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 589c1545d9e0 
       21459 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6f417fd8a8c5 
       21582 /usr/bin/containerd-shim-runc-v2 -namespace moby -id afa9798c9f66 
       21607 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4b3708b62f4d 
       21640 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 89f35bf7a825 
       21648 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 2dabff6e4f59 
       22343 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 73fae81715b6 
       22391 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 26d92a720c56 
       26992 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7104f65b5d92 
       27405 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 48d735f7f44e 
       27531 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 36b418e69ae7 
       27941 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 534c0a698478 
       27960 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 9ffd6b668ddf 
       28131 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4a30785f9187 
       28159 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 649a507d4583 
      514667 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 62b369de8d8c 
      514976 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7cbb177bee18 
      698904 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6669168db70d 
      699284 /usr/bin/containerd-shim-runc-v2 -namespace moby -id d5fa78fa31f1 
     2805479 /usr/bin/containerd
   systemd-udevd.service 
     2805502 /lib/systemd/systemd-udevd
   cron.service 
     2805474 /usr/sbin/cron -f
   docker.service  
     528 /usr/sbin/dockerd -H fd://
   kubelet.service 
     2805501 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap 
   systemd-journald.service 
     2805505 /lib/systemd/systemd-journald
   ssh.service 
     2805500 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
   ifup@enx00e04c6851de.service 
     2805675 /sbin/dhclient -4 -v -i -pf /run/dhclient.enx00e04c6851de.pid -lf 
   rsyslog.service 
     2805488 /usr/sbin/rsyslogd -n -iNONE
   smartmontools.service 
     2805499 /usr/sbin/smartd -n
   dbus.service 
     527 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile 
   systemd-timesyncd.service 
     2805513 /lib/systemd/systemd-timesyncd
   system-getty.slice 
     getty@tty1.service 
       536 /sbin/agetty -o -p -- \u --noclear tty1 linux
   systemd-logind.service 
     533 /lib/systemd/systemd-logind
 kubepods.slice 
   kubepods-burstable.slice 
     kubepods-burstable-pod1af8c5f362b7b02269f4d244cb0e6fbf.slice 
       docker-6f417fd8a8c573a2b8f792af08cdcd7ce663457f0f7218c8d55afa3732e6ee94.scope  
         21493 /pause
       docker-89f35bf7a825eb97db7035d29aa475a3a1c8aaccda0860a46388a3a923cd10bc.scope  
         21699 kube-apiserver --advertise-address=192.168.33.147 --allow-privi 
     kubepods-burstable-podf8b2b52e_6673_4966_82b1_3fbe052a0297.slice 
       docker-649a507d45831aca1de5231b49afc8ff37d90add813e7ecd451d12eedd785b0c.scope  
         28187 /coredns -conf /etc/coredns/Corefile
       docker-9ffd6b668ddfbf3c64c6783bc6f4f6cc9e92bfb16c83fb214c2cbb4044993bf0.scope  
         27987 /pause
     kubepods-burstable-podc2a3008c1d9895f171cd394e38656ea0.slice 
       docker-c2ff2c50f0bc052feda2281741c4f37df7905e3b819294ec645148ae13c3fe1b.scope  
         21481 /pause
       docker-2dabff6e4f59c96d931d95781d28314065b46d0e6f07f8c65dc52aa465f69456.scope  
         21701 etcd --advertise-client-urls=https://192.168.33.147:2379 --cert 
     kubepods-burstable-pod629dc49dfd9f7446eb681f1dcffe6d74.slice 
       docker-fd95c597ff3171ff110b7bf440229e76c5108d5d93be75ffeab54869df734413.scope  
         21491 /pause
       docker-afa9798c9f663b21df8f38d9634469e6b4db0984124547cd472a7789c61ef752.scope  
         21680 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/sche 
     kubepods-burstable-podb9af9615_8cde_4a18_8555_6da1f51b7136.slice 
       docker-48d735f7f44e3944851563f03f32c60811f81409e7378641404035dffd8c1eb4.scope  
         27424 /usr/bin/weave-npc
         27458 /usr/sbin/ulogd -v
       docker-36b418e69ae7076fe5a44d16cef223d8908016474cb65910f2fd54cca470566b.scope  
         27549 /bin/sh /home/weave/launch.sh
         27629 /home/weave/weaver --port=6783 --datapath=datapath --name=12:82 
         27825 /home/weave/kube-utils -run-reclaim-daemon -node-name=udon -pee 
       docker-7104f65b5d92a56a2df93514ed0a78cfd1090ca47b6ce4e0badc43be6c6c538e.scope  
         27011 /pause
     kubepods-burstable-pod4d7d3d81_a769_4de9_a4fb_04763b7c1605.slice 
       docker-6669168db70db4e6c741e8a047942af06dd745fae4d594291d1d6e1077b05082.scope  
         698925 /pause
       docker-d5fa78fa31f11a4c5fb9fd2e853a00f0e60e414a7bce2e0d8fcd1f6ab2b30074.scope  
          699303 /usr/bin/dumb-init -- /nginx-ingress-controller --publish-ser 
          699316 /nginx-ingress-controller --publish-service=ingress-nginx/ing 
          699405 nginx: master process /usr/local/nginx/sbin/nginx -c /etc/ngi 
         1075085 nginx: worker process
         1075086 nginx: worker process
         1075087 nginx: worker process
         1075088 nginx: worker process
         1075089 nginx: cache manager process
     kubepods-burstable-pod1976f4d6_647c_45ca_b268_95f071f064d5.slice 
       docker-4a30785f91873a7e6a191e86928a789760a054e4fa6dcd7048a059b42cf19edf.scope  
         28178 /coredns -conf /etc/coredns/Corefile
       docker-534c0a698478599277482d97a137fab8ef4d62db8a8a5cf011b4bead28246f70.scope  
         27995 /pause
     kubepods-burstable-pod1d1b9018c3c6e7aa2e803c6e9ccd2eab.slice 
       docker-589c1545d9e0cdf8ea391745c54c8f4db49f5f437b1a2e448e7744b2c12f8856.scope  
         21489 /pause
       docker-4b3708b62f4d427690f5979848c59fce522dab6c62a9c53b806ffbaef3f88e62.scope  
         21690 kube-controller-manager --authentication-kubeconfig=/etc/kubern 
   kubepods-besteffort.slice 
     kubepods-besteffort-podc7111c9e_7131_40e0_876d_be89d5ca1812.slice 
       docker-62b369de8d8cece4d33ec9fda4d23a9718379a8df8b30173d68f20bff830fed2.scope  
         514688 /pause
       docker-7cbb177bee18dbdeed21fb90e74378e2081436ad5bf116b36ad5077fe382df30.scope  
         514999 /bin/bash /usr/local/bin/run.sh
         515039 nginx: master process nginx -g daemon off;
         515040 nginx: worker process
     kubepods-besteffort-pod8bf2d7ec_4850_427f_860f_465a9ff84841.slice 
       docker-73fae81715b670255b66419a7959798b287be7bbb41e96f8b711fa529aa02f0d.scope  
         22364 /pause
       docker-26d92a720c560caaa5f8a0217bc98e486b1c032af6c7c5d75df508021d462878.scope  
         22412 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.c 
Again, there's a lot going on here, but if you look for the kubepods.slice piece then you can see our pods are divided into two sets, kubepods-burstable.slice and kubepods-besteffort.slice. Under those you can see the individual pods, all of which have at least 2 separate cgroups, one of which is running /pause. Turns out this is a generic Kubernetes image which basically performs the process reaping that an init process would do on a normal system; it just sits and waits for processes to exit and cleans them up. Again, Ian Lewis has more details on the pause container. Finally let's dig into the actual containers. The pause container seems like a good place to start. We can examine the details of where the filesystem is (may differ if you're not using the overlay2 image thingy). The hex string is the container ID listed by docker ps.
# docker inspect --format='{{ .GraphDriver.Data.MergedDir }}' 6669168db70d
/var/lib/docker/overlay2/5a2d76012476349e6b58eb6a279bac400968cefae8537082ea873b2e791ff3c6/merged
# cd /var/lib/docker/overlay2/5a2d76012476349e6b58eb6a279bac400968cefae8537082ea873b2e791ff3c6/merged
# find . | sed -e 's;^./;;'
pause
proc
.dockerenv
etc
etc/resolv.conf
etc/hostname
etc/mtab
etc/hosts
sys
dev
dev/shm
dev/pts
dev/console
# file pause
pause: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 3.2.0, BuildID[sha1]=d35dab7152881e37373d819f6864cd43c0124a65, stripped
This is a nice, minimal container. The pause binary is statically linked, so there are no extra libraries required and it's just a basic set of support devices and files. I doubt the pieces in /etc are even required. Let's try the echoserver next:
# docker inspect --format='{{ .GraphDriver.Data.MergedDir }}' 7cbb177bee18
/var/lib/docker/overlay2/09042bc1aff16a9cba43f1a6a68f7786c4748e989a60833ec7417837c4bfaacb/merged
# cd /var/lib/docker/overlay2/09042bc1aff16a9cba43f1a6a68f7786c4748e989a60833ec7417837c4bfaacb/merged
# find . | wc -l
3358
Wow. That's a lot more stuff. Poking /etc/os-release shows why:
# grep PRETTY etc/os-release
PRETTY_NAME="Ubuntu 16.04.2 LTS"
Aha. It's an Ubuntu-based image. We can cut straight to the chase with the nginx ingress container:
# docker exec d5fa78fa31f1 grep PRETTY /etc/os-release
PRETTY_NAME="Alpine Linux v3.13"
That's a bit more reasonable an image for a container; Alpine Linux is a much smaller distro. I don't feel there's a lot more poking to do here. It's not something I'd expect to do on a normal Kubernetes setup, but I wanted to dig under the hood to make sure it really was just a normal container situation. I think the next steps involve adding a bit more complexity - that means building a cluster with more than a single node, and then running an application that's a bit more complicated. That should help explore two major advantages of running this sort of setup: resiliency when a node dies, and the ability to scale out beyond what a single node can do.

1 June 2021

Paul Wise: FLOSS Activities May 2021

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration
  • Debian wiki: unblock IP addresses, approve accounts

Communication
  • Joined the great IRC migration
  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors The purple-discord, sptag and esprima-python work was sponsored by my employer. All other work was done on a volunteer basis.

30 May 2021

Russell Coker: HP ML110 Gen9

I've just bought an HP ML110 Gen9 as a personal workstation; here are my notes about it and documentation on running Debian on it.

Why a Server? I bought this because the ML350p Gen8 turned out to be too noisy for my taste [1]. I've just been editing my page about Memtest86+ RAM speeds [2]; over the course of 10 years (high end laptop in 2001 to low end server in 2011) RAM speed increased by a factor of 100. RAM speed has been increasing at a lower rate than CPU speed and is becoming an increasing bottleneck on system performance. So while I could get a faster white-box system, the cost of a second-hand server isn't that great and I'm getting a system that's 100* faster than what was adequate for most tasks in 2001. HP makes some nice workstation class machines with ECC RAM (think server without remote management, hot-swap disks, or redundant PSU, but with sound hardware). But they are significantly more expensive on the second hand market than servers. This server cost me $650 and came with 2*480G DC grade SSDs (Intel but with HPE stickers). I hope that more than half of the purchase price will be recovered from selling the SSDs (I will use NVMe). Also 64G of non-ECC RAM costs $370 from my local store. As I want lots of RAM for testing software on VMs it will probably turn out that the server cost me less than the cost of new RAM once I've sold the SSDs!

Monitoring
wget -O /usr/local/hpePublicKey2048_key1.pub https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub
echo "# HP monitoring" >> /etc/apt/sources.list
echo "deb [signed-by=/usr/local/hpePublicKey2048_key1.pub] http://downloads.linux.hpe.com/SDR/downloads/MCP/Debian/ stretch/current-gen9 non-free" >> /etc/apt/sources.list
The above commands will make the management utilities installable on Debian/Buster. If using Bullseye (Testing at the moment) then you need to have Buster repositories in APT for dependencies; HP doesn't seem to have packaged all their utilities for Buster.
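If the Buster repositories are added purely as a dependency source on a newer system, an apt pin can keep them from upgrading anything on their own. This is a sketch of my own, not from the post; the file name and release name are the only assumptions:

```
# /etc/apt/preferences.d/buster-deps-only (sketch)
Package: *
Pin: release n=buster
Pin-Priority: 100
```

Priority 100 means Buster versions are only candidates when no other repository provides the package, so they get pulled in just to satisfy dependencies.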
wget -r -np -A Contents-amd64.bz2 http://downloads.linux.hpe.com/SDR/repo/mcp/debian/dists
To find out which repositories had the programs I need I ran the above recursive wget and then uncompressed them for grep -R (as an aside it would be nice if bzgrep supported -R). I installed the hp-health package which has hpasmcli for viewing and setting many configuration options and hplog for viewing event log data and thermal data (among a few other things). I've added a new monitor to etbemon, hp-temp.monitor, to monitor HP server temperatures. I haven't made a configuration option to change the thresholds for what is considered normal because I don't expect server class systems to be routinely running above the warning temperature. For the linux-temp.monitor script I added a command-line option for the percentage of the high temperature that is an error condition, as well as an option for the number of CPU cores that need to be over-temperature; having one core permanently over the high temperature due to a web browser seems standard for white-box workstations nowadays. The hp-health package depends on libc6-i686 and lib32gcc1 even though none of the programs it contains use lib32gcc1. Depending on lib32gcc1 instead of lib32gcc1 | lib32gcc-s1 means that installing hp-health requires removing mesa-opencl-icd, which probably means that BOINC can't use the GPU among other things. I solved this by editing /var/lib/dpkg/status and changing the package dependencies to what I desired. Note that this is not something for a novice to do; make a backup and make sure you know what you are doing!

Issues The HPE Dynamic Smart Array B140i is a software RAID device. While it's convenient for some users that software RAID gets supported in the UEFI boot process, generally software RAID is a bad idea. Also my system has hot-swap drive caddies but the controller doesn't support hot-swap. So the first thing to do was to configure the array controller to run in AHCI mode and give up on using hot-swap drive caddies for hot-swap.
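As noted above, bzgrep has no -R, but find with -exec gets much the same effect over the downloaded compressed Contents files without uncompressing them first. A self-contained sketch (the file contents here are made up for illustration):

```shell
# bzgrep lacks -R; find -exec approximates a recursive search over
# compressed Contents files. Build a scratch example and search it.
d=$(mktemp -d)
printf 'usr/sbin/hpasmcli  admin/hp-health\n' > "$d/Contents-amd64"
bzip2 "$d/Contents-amd64"
find "$d" -name 'Contents-*.bz2' -exec bzgrep -l hpasmcli {} +
rm -rf "$d"
```

bzgrep -l prints the path of each compressed file that matches, which answers the which-repository-has-this-program question directly.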
I tested all the documented ways of scanning for new devices and nothing other than a reboot made the kernel recognise a new SATA disk. According to specs provided by Dell and HP the ML110 Gen9 makes less noise than the PowerEdge T320; according to my own observations the reverse is the case. I don't know if this is because of Dell being more conservative in their specs than HP or because of how dBA is measured vs my own personal annoyance thresholds for sounds. As the system makes more noise than I'm comfortable with I plan to build a rubber enclosure for the rear of the system to reduce noise; that will be the subject of another post. For Australian readers, Bunnings has some good deals on rubber floor mats that can be used to reduce server noise. The server doesn't have sound hardware; while one could argue that servers don't need sound, there are some server uses for sound hardware, such as using line input as a source of entropy. Also for a manufacturer it might be a benefit to use the same motherboard for workstations and servers. Fortunately a friend gave me a nice set of Logitech USB speakers a few years ago that I hadn't previously had a cause to use, so that will solve the problem for me (I don't need line-in on a workstation).

UEFI and Memtest I decided to try UEFI boot for something new (in the past I'd only used UEFI boot for a server that only had large disks). In the past I've booted all my own systems with BIOS boot because I'm familiar with it and they all have SSDs for booting which are less than 2TB in size (until recently 2TB SSDs weren't affordable for my personal use). The Debian UEFI wiki page is worth reading [3]. The Debian Wiki page about ProLiant servers [4] is worth reading too. Memtest86+ doesn't support EFI booting (it just goes to a black screen) even though Debian/Buster puts in a GRUB entry for it (Debian bug #695246 was filed for this in 2012). Also on my ML110 Memtest86+ doesn't report the RAM speed (a known issue on Memtest86+).
Comments on the net say that Memtest86+ hasn't been maintained for a long time and Memtest86 (the non-free version) has been updated more recently. So far I haven't seen a system with ECC RAM have a memory problem that could be detected by Memtest86+; the memory problems I've seen on ECC systems have been things that prevent booting (RAM not being recognised correctly), that are detected by the BIOS as ECC errors before booting, or that are reported by the kernel as ECC errors at run time (happened years ago and I can't remember the details). Overall I'm not a fan of EFI with the way it currently works in Debian. It seems to add some of the GRUB functionality into the BIOS and then use that to load GRUB. It seems that EFI can do everything you need, and it would be better to just have a single boot loader, not two of them chained.

Power Supply There are a range of PSUs for the ML110; the one I have has the smallest available PSU (350W) and doesn't have a PCIe power cable (the one used for video cards). Here is the HP document which shows the cabling for the various ML110 Gen8 PSUs [5]; I have the 350W PSU. One thing I've considered is whether I could make an adaptor from the drive bay power to the PCIe connector. A quick web search indicates that 4 SAS disks when active can take up to 75W more power than a system with no disks. If that's the case then the 2 spare drive bay connectors which can each handle 4 disks should be able to supply 150W. As a 6 pin PCIe power cable (GPU power cable) is rated at 75W that should be fine in theory (here's a page with the pinouts for PCIe power connectors [6]). My video card is a Radeon R7 260X which apparently takes about 113W all up, so should be taking less than 75W from the PCIe power cable. All I really want is YouTube, Netflix, and text editing at 4K resolution, so I don't need much in terms of 3D power. KDE uses some of the advanced features of modern video cards, but it doesn't compare to 3D gaming.
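The budget above works out, if we also assume the standard 75W that a PCIe slot itself supplies (the slot figure is my assumption, not from the post): a 113W card then needs only about 38W from the 6-pin cable, well under its 75W rating, while the two spare 4-disk bay connectors could in theory supply 150W.

```shell
# Rough power budget from the figures above (75W slot supply assumed):
echo "needed from 6-pin cable: $((113 - 75))W"  # card total minus slot supply
echo "spare drive bay budget: $((2 * 75))W"     # two 4-disk connectors, ~75W each
```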
According to the Wikipedia page for the Radeon RX 500 series [7] the RX560 supports DisplayPort 1.4 and HDMI 2.0 (both of which do 4K@60Hz) and has a TDP of 75W. So a RX560 video card seems like a good option that will work in any system that doesn't have a spare PCIe power cable. I've just ordered one of those for $246, so hopefully that will arrive in a week or so.

PCI Fan The ML110 Gen9 has an optional PCIe fan and baffle to cool PCIe cards (part number 784580-B21). Extra cooling of PCIe cards is a good thing, but $400 list price (and about $50 ebay price) for the fan and baffle is unpleasant. When I boot the system with a PCIe dual-ethernet card and two PCIe NVMe cards it gives a BIOS warning on boot; when I add a video card it refuses to boot without the extra fan. It's nice that the system makes sure it doesn't get into a thermal overload situation, but it would be nicer if they just shipped all necessary fans with it instead of trying to get more money out of customers. I just bought a PCI fan and baffle kit for $60.

Conclusion In spite of the unexpected expense of a new video card and PCI fan, the overall cost of this system is still low, particularly when considering that I'll find another use for the video card, which needs an extra power connector. It is disappointing that HP didn't supply a more capable PSU and fit all the fans to all models; the expectation of a server is that you can just do server stuff, not have to buy extra bits before you can do server stuff. If you want to install Tesla GPUs or something then it's expected that you might need to do something unusual with a server, but the basic stuff should just work. A single processor tower server should be designed to function as a deskside workstation and be able to handle an average video card. Generally it's a nice computer, and I look forward to getting the next deliveries of parts so I can make it work properly.

Russ Allbery: Review: The Silver Chair

Review: The Silver Chair, by C.S. Lewis
Illustrator: Pauline Baynes
Series: Chronicles of Narnia #4
Publisher: Collier
Copyright: 1953
Printing: 1978
ISBN: 0-02-044250-5
Format: Mass market
Pages: 217
The Silver Chair is a sequel to The Voyage of the Dawn Treader and the fourth book of the Chronicles of Narnia in original publication order. (For more about publication order, see the introduction to my review of The Lion, the Witch and the Wardrobe.) Apart from a few references to The Voyage of the Dawn Treader at the start, it stands sufficiently on its own that you could read it without reading the other books, although I have no idea why you'd want to. We have finally arrived at my least favorite of the Narnia books and the one that I sometimes skipped during re-reads. (One of my objections to the new publication order is that it puts The Silver Chair and The Last Battle back-to-back, and I don't think you should do that to yourself as a reader.) I was hoping that there would be previously unnoticed depth to this book that would redeem it as an adult reader. Sadly, no; with one very notable exception, it's just not very good. MAJOR SPOILERS BELOW. The Silver Chair opens on the grounds of the awful school to which Eustace's parents sent him: Experiment House. That means it opens (and closes) with a more extended version of Lewis's rant about schools. I won't get into this in detail since it's mostly a framing device, but Lewis is remarkably vicious and petty. His snide contempt for putting girls and boys in the same school did not age well, nor did his emphasis at the end of the book that the incompetent head of the school is a woman. I also raised an eyebrow at holding up ordinary British schools as a model of preventing bullying. Thankfully, as Lewis says at the start, this is not a school story. This is prelude to Jill meeting Eustace and the two of them escaping the bullies via a magical door into Narnia. Unfortunately, that's the second place The Silver Chair gets off on the wrong foot. 
Jill and Eustace end up in what the reader of the series will recognize as Aslan's country and almost walk off the vast cliff at the end of the world, last seen from the bottom in The Voyage of the Dawn Treader. Eustace freaks out, Jill (who has a much better head for heights) goes intentionally close to the cliff in a momentary impulse of arrogance before realizing how high it is, Eustace tries to pull her back, and somehow Eustace falls over the edge. I do not have a good head for heights, and I wonder how much of it is due to this memorable scene. I certainly blame Lewis for my belief that pulling someone else back from the edge of a cliff can result in you being pushed off, something that on adult reflection makes very little sense but which is seared into my lizard brain. But worse, this sets the tone for the rest of the story: everything is constantly going wrong because Eustace and Jill either have normal human failings that are disproportionately punished or don't successfully follow esoteric and unreasonably opaque instructions from Aslan. Eustace is safe, of course; Aslan blows him to Narnia and then gives Jill instructions before sending her afterwards. (I suspect the whole business with the cliff was an authorial cheat to set up Jill's interaction with Aslan without Eustace there to explain anything.) She and Eustace have been summoned to Narnia to find the lost Prince, and she has to memorize four Signs that will lead her on the right path. Gah, the Signs. If you were the sort of kid that I was, you immediately went back and re-read the Signs several times to memorize them like Jill was told to. The rest of this book was then an exercise in anxious frustration. First, Eustace is an ass to Jill and refuses to even listen to the first Sign. They kind of follow the second but only with heavy foreshadowing that Jill isn't memorizing the Signs every day like she's supposed to. They mostly botch the third and have to backtrack to follow it. 
Meanwhile, the narrator is constantly reminding you that the kids (and Jill in particular) are screwing up their instructions. On re-reading, it's clear they're not doing that poorly given how obscure the Signs are, but the ominous foreshadowing is enough to leave a reader a nervous wreck. Worse, Eustace and Jill are just miserable to each other through the whole book. They constantly bicker and snipe, Eustace doesn't want to listen to her and blames her for everything, and the hard traveling makes it all worse. Lewis does know how to tell a satisfying redemption arc; one of the things I have always liked about Edmund's story is that he learns his lesson and becomes my favorite character in the subsequent stories. But, sadly, Eustace's redemption arc is another matter. He's totally different here than he was at the start of The Voyage of the Dawn Treader (to the degree that if he didn't have the same name in both books, I wouldn't recognize him as the same person), but rather than a better person he seems to have become a different sort of ass. There's no sign here of the humility and appreciation for friendship that he supposedly learned from his time as a dragon. On top of that, the story isn't very interesting. Rilian, the lost Prince, is a damp squib who talks in the irritating archaic accent that Lewis insists on using for all Narnian royalty. His story feels like Lewis lifted it from medieval Arthurian literature; most of it could be dropped into a collection of stories of knights of the Round Table without seeming out of place. When you have a country full of talking animals and weirdly fascinating bits of theology, it's disappointing to get a garden-variety story about an evil enchantress in which everyone is noble and tragic and extremely stupid. Thankfully, The Silver Chair has one important redeeming quality: Puddleglum. Puddleglum is a Marsh-wiggle, a bipedal amphibious sort who lives alone in the northern marshes. 
He's recruited by the owls to help the kids with their mission when they fail to get King Caspian's help after blowing the first Sign. Puddleglum is an absolute delight: endlessly pessimistic, certain the worst possible thing will happen at any moment, but also weirdly cheerful about it. I love Eeyore characters in general, but Puddleglum is even better because he gives the kids' endless bickering exactly the respect that it deserves.
"But we all need to be very careful about our tempers, seeing all the hard times we shall have to go through together. Won't do to quarrel, you know. At any rate, don't begin it too soon. I know these expeditions usually end that way; knifing one another, I shouldn't wonder, before all's done. But the longer we can keep off it "
It's even more obvious on re-reading that Puddleglum is the only effective member of the party. Jill has only a couple of moments where she gets the three of them past some obstacle. Eustace is completely useless; I can't remember a single helpful thing he does in the entire book. Puddleglum and his pessimistic determination, on the other hand, is right about nearly everything at each step. And he's the one who takes decisive action to break the Lady of the Green Kirtle's spell near the end. I was expecting a bit of sexism and (mostly in upcoming books) racism when re-reading these books as an adult given when they were written and who Lewis was, but what has caught me by surprise is the colonialism. Lewis is weirdly insistent on importing humans from England to fill all the important roles in stories, even stories that are entirely about Narnians. I know this is the inherent weakness of portal fantasy, but it bothers me how little Lewis believes in Narnians solving their own problems. The Silver Chair makes this blatantly obvious: if Aslan had just told Puddleglum the same information he told Jill and sent a Badger or a Beaver or a Mouse along with him, all the evidence in the book says the whole affair would have been sorted out with much less fuss and anxiety. Jill and Eustace are far more of a hindrance than a help, which makes for frustrating reading when they're supposedly the protagonists. The best part of this book is the underground bits, once they finally get through the first three Signs and stumble into the Lady's kingdom far below the surface. Rilian is a great disappointment, but the fight against the Lady's mind-altering magic leads to one of the great quotes of the series, on par with Reepicheep's speech in The Voyage of the Dawn Treader.
"Suppose we have only dreamed, or made up, all those things trees and grass and sun and moon and stars and Aslan himself. Suppose we have. Then all I can say is that, in that case, the made-up things seem a good deal more important than the real ones. Suppose this black pit of a kingdom of yours is the only world. Well, it strikes me as a pretty poor one. And that's a funny thing, when you come to think of it. We're just babies making up a game, if you're right. But four babies playing a game can make a play-world which licks your real world hollow. That's why I'm going to stand by the play world. I'm on Aslan's side even if there isn't any Aslan to lead it. I'm going to live as like a Narnian as I can even if there isn't any Narnia. So, thanking you kindly for our supper, if these two gentlemen and the young lady are ready, we're leaving your court at once and setting out in the dark to spend our lives looking for Overland. Not that our lives will be very long, I should think; but that's small loss if the world's as dull a place as you say."
This is Puddleglum, of course. And yes, I know that this is apologetics and Lewis is talking about Christianity and making the case for faith without proof, but put that aside for the moment, because this is still powerful life philosophy. It's a cynic's litany against cynicism. It's a pessimist's defense of hope. Suppose we have only dreamed all those things like justice and fairness and equality, community and consensus and collaboration, universal basic income and effective environmentalism. The dreary magic of the realists and the pragmatists says that such things are baby's games, silly fantasies. But you can still choose to live like you believe in them. In Alasdair Gray's reworking of a line from Dennis Lee, "work as if you live in the early days of a better nation." That's one moment that I'll always remember from this book. The other is when, after they kill the Lady of the Green Kirtle and her magic starts to fade, they have to escape from the underground caverns while surrounded by the Earthmen who served her and who they believe are hostile. It's a tense moment that turns into a delightful celebration when they realize that the Earthmen were just as much prisoners as the Prince was. They were forced from a far deeper land below, full of living metals and salamanders who speak from rivers of fire. It's the one moment in this book that I thought captured the magical strangeness of Narnia, that sense that there are wonderful things just out of sight that don't follow the normal patterns of medieval-ish fantasy. Other than a few great lines from Puddleglum and some moments in Aslan's country, the first 60% of this book is a loss and remarkably frustrating to read. The last 40% isn't bad, although I wish Rilian had any discernible character other than generic Arthurian knight. I don't know what Eustace is doing in this book at all other than providing a way for Jill to get into Narnia, and I wish Lewis had realized Puddleglum could be the protagonist.
But as frustrating as The Silver Chair can be, I am still glad I re-read it. Puddleglum is one of the truly memorable characters of children's literature, and it's a shame he's buried in a weak mid-series book. Followed, in the original publication order, by The Horse and His Boy. Rating: 6 out of 10

28 May 2021

Raphaël Hertzog: Freexian's report about Debian Long Term Support, April 2021

A Debian LTS logo
Like each month, have a look at the work funded by Freexian's Debian LTS offering. Debian project funding In April, we put aside 5775 EUR to fund Debian projects. There were no proposals for new projects received, thus we're looking forward to receiving more projects from various Debian teams! Please do not hesitate to submit a proposal if there is a project that could benefit from the funding! Debian LTS contributors In April, 11 contributors were paid to work on Debian LTS; their reports are available: Evolution of the situation In April we released 33 DLAs and held an LTS team meeting using video conferencing. The security tracker currently lists 53 packages with a known CVE and the dla-needed.txt file has 26 packages needing an update. We are pleased to welcome VyOS as a new gold sponsor! Thanks to our sponsors Sponsors that joined recently are in bold.

27 May 2021

Michael Prokop: What to expect from Debian/bullseye #newinbullseye

Bullseye Banner, Copyright 2020 Juliette Taka Debian v11 with codename bullseye is supposed to be released as the new stable release soon-ish (let's hope for June 2021! :)). Similar to what we had with #newinbuster and previous releases, now it's time for #newinbullseye! I was the driving force at several of my customers to be well prepared for bullseye before its freeze, and since then we're on a good track there overall. In my opinion, Debian's release team did (and still does) a great job; I'm very happy about how unblock requests (not only mine but also ones I kept an eye on) were handled so far. As usual with major upgrades, there are some things to be aware of, and hereby I'm starting my public notes on bullseye that might be worthwhile for other folks as well. My focus is primarily on server systems, looking at things from a sysadmin perspective. Further readings Of course start with taking a look at the official Debian release notes; make sure to especially go through What's new in Debian 11 + Issues to be aware of for bullseye. Chris published notes on upgrading to Debian bullseye, and anarcat published upgrade notes for bullseye as well. Package versions As a starting point, let's look at some selected packages and their versions in buster vs. bullseye as of 2021-05-27 (mainly having amd64 in mind):
Package buster/v10 bullseye/v11
ansible 2.7.7 2.10.8
apache 2.4.38 2.4.46
apt 1.8.2.2 2.2.3
bash 5.0 5.1
ceph 12.2.11 14.2.20
docker 18.09.1 20.10.5
dovecot 2.3.4 2.3.13
dpkg 1.19.7 1.20.9
emacs 26.1 27.1
gcc 8.3.0 10.2.1
git 2.20.1 2.30.2
golang 1.11 1.15
libc 2.28 2.31
linux kernel 4.19 5.10
llvm 7.0 11.0
lxc 3.0.3 4.0.6
mariadb 10.3.27 10.5.10
nginx 1.14.2 1.18.0
nodejs 10.24.0 12.21.0
openjdk 11.0.9.1 11.0.11+9 + 17~19
openssh 7.9p1 8.4p1
openssl 1.1.1d 1.1.1k
perl 5.28.1 5.32.1
php 7.3 7.4+76
postfix 3.4.14 3.5.6
postgres 11 13
puppet 5.5.10 5.5.22
python2 2.7.16 2.7.18
python3 3.7.3 3.9.2
qemu/kvm 3.1 5.2
ruby 2.5.1 2.7+2
rust 1.41.1 1.48.0
samba 4.9.5 4.13.5
systemd 241 247.3
unattended-upgrades 1.11.2 2.8
util-linux 2.33.1 2.36.1
vagrant 2.2.3 2.2.14
vim 8.1.0875 8.2.2434
zsh 5.7.1 5.8
Linux Kernel The bullseye release will ship a Linux kernel based on v5.10 (v5.10.28 as of 2021-05-27, with v5.10.38 pending in unstable/sid), whereas buster shipped kernel 4.19. As usual there are plenty of changes in the kernel area and this might warrant a separate blog entry, but to highlight some issues: One surprising change might be that the scrollback buffer (Shift + PageUp) is gone from the Linux console. Make sure to always use screen/tmux or handle output through a pager of your choice if you need all of it and you're in the console. The kernel provides BTF support (via CONFIG_DEBUG_INFO_BTF, see #973870), which means it's no longer necessary to install LLVM, Clang, etc. (requiring >100MB of disk space); see Gregg's excellent blog post regarding the underlying rationale. Sadly the libbpf-tools packaging didn't make it into bullseye (#978727), but if you want to use your own self-made Debian packages, my notes might be useful. With kernel version 5.4, SUBDIRS support was removed from kbuild, so if an out-of-tree kernel module (like a *-dkms package) fails to compile on bullseye, make sure to use a recent version of it which uses M= or KBUILD_EXTMOD= instead. Unprivileged user namespaces are enabled by default (see #898446 + #987777), so programs can create more restricted sandboxes without the need to run as root or via a setuid-root helper. If you prefer to keep this feature restricted (or if tools like web browsers, WebKitGTK, Flatpak, don't work), use sysctl -w kernel.unprivileged_userns_clone=0. The /boot/System.map file(s) no longer provide the actual data; you need to switch to the dbg package if you rely on that information:
% cat /boot/System.map-5.10.0-6-amd64 
ffffffffffffffff B The real System.map is in the linux-image-<version>-dbg package
Be aware though, that the *-dbg package requires ~5GB of additional disk space. Systemd systemd v247 made it into bullseye (updated from v241). Same as for the kernel, this might warrant a separate blog entry, but to mention some highlights: Systemd in bullseye activates its persistent journal functionality by default (storing its files in /var/log/journal/, see #717388). systemd-timesyncd is no longer part of the systemd binary package itself, but available as a standalone package. This allows usage of ntp, chrony, openntpd, without having systemd-timesyncd installed (which prevents race conditions like #889290, which was biting me more than once). journalctl gained new options:
--cursor-file=FILE      Show entries after cursor in FILE and update FILE
--facility=FACILITY...  Show entries with the specified facilities
--image=IMAGE           Operate on files in filesystem image
--namespace=NAMESPACE   Show journal data from specified namespace
--relinquish-var        Stop logging to disk, log to temporary file system
--smart-relinquish-var  Similar, but NOP if log directory is on root mount
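As a hedged sketch of the new --cursor-file option (the facility and cursor path below are made-up examples, and the snippet is guarded so it degrades gracefully on systems without journald):

```shell
# Illustrative only: process log entries that arrived since the last run.
# journalctl persists the read position in the cursor file between runs,
# so repeated invocations never show the same entry twice.
CURSOR=/tmp/shipper.cursor
if command -v journalctl >/dev/null 2>&1; then
    journalctl --facility=daemon --cursor-file="$CURSOR" --no-pager \
        || echo "journalctl failed (no journal access on this system?)"
else
    echo "journalctl not installed; skipping"
fi
```

Running this from cron or a timer gives a cheap incremental log shipper without any extra state handling.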
systemctl gained new options:
clean UNIT...                       Clean runtime, cache, state, logs or configuration of unit
freeze PATTERN...                   Freeze execution of unit processes
thaw PATTERN...                     Resume execution of a frozen unit
log-level [LEVEL]                   Get/set logging threshold for manager
log-target [TARGET]                 Get/set logging target for manager
service-watchdogs [BOOL]            Get/set service watchdog state
--with-dependencies                 Show unit dependencies with 'status', 'cat', 'list-units', and 'list-unit-files'
 -T --show-transaction              When enqueuing a unit job, show full transaction
 --what=RESOURCES                   Which types of resources to remove
--boot-loader-menu=TIME             Boot into boot loader menu on next boot
--boot-loader-entry=NAME            Boot into a specific boot loader entry on next boot
--timestamp=FORMAT                  Change format of printed timestamps
If you use systemctl edit to adjust overrides, then you'll now also get the existing configuration file listed as a comment, which I consider very helpful. The MACAddressPolicy behavior with systemd naming schema v241 changed for virtual devices (I plan to write about this in a separate blog post). There are plenty of new manual pages: systemd also gained new unit configurations related to security hardening: Another new unit configuration is SystemCallLog=, which supports listing the system calls to be logged. This is very useful for auditing, or temporarily when constructing system call filters. The cgroupv2 change is also documented in the release notes, but to explicitly mention it here as well, quoting from /usr/share/doc/systemd/NEWS.Debian.gz:
systemd now defaults to the unified cgroup hierarchy (i.e. cgroupv2).
This change reflects the fact that cgroups2 support has matured
substantially in both systemd and in the kernel.
All major container tools nowadays should support cgroupv2.
If you run into problems with cgroupv2, you can switch back to the previous,
hybrid setup by adding systemd.unified_cgroup_hierarchy=false to the
kernel command line.
You can read more about the benefits of cgroupv2 at
https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html
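To check which setup a running system actually uses, you can inspect the filesystem type mounted on /sys/fs/cgroup; a minimal sketch (the mapping reflects the systemd defaults described above):

```shell
# Report which cgroup hierarchy is mounted on /sys/fs/cgroup. With the
# bullseye default (unified hierarchy) the filesystem type is "cgroup2fs";
# with systemd.unified_cgroup_hierarchy=false it is "tmpfs" instead.
cgroup_mode() {
    case "$1" in
        cgroup2fs) echo "unified (cgroupv2)" ;;
        tmpfs)     echo "hybrid/legacy (cgroupv1)" ;;
        *)         echo "unknown" ;;
    esac
}
cgroup_mode "$(stat -fc %T /sys/fs/cgroup 2>/dev/null)"
```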
Note that cgroup-tools (lssubsys + lscgroup etc.) don't work in the cgroup2/unified hierarchy yet (see #959022 for the details). Configuration management puppet's upstream doesn't provide packages for bullseye yet (see PA-3624 + MODULES-11060), and sadly neither v6 nor v7 made it into bullseye, so when using the packages from Debian you're still stuck with v5.5 (also see #950182). ansible is also available, and while it looked like only version 2.9.16 would make it into bullseye (see #984557 + #986213), actually version 2.10.8 made it in. chef was removed from Debian and is not available in bullseye (due to trademark issues). Prometheus stack Prometheus server was updated from v2.7.1 to v2.24.1, and the prometheus service applies some systemd hardening by default now. Also all the usual exporters are still there, but bullseye also gained some new ones: Virtualization docker (v20.10.5), ganeti (v3.0.1), libvirt (v7.0.0), lxc (v4.0.6), openstack, qemu/kvm (v5.2), xen (v4.14.1), are all still around, though what's new and noteworthy is that podman version 3.0.1 (a tool for managing OCI containers and pods) made it into bullseye. If you're using the docker packages from upstream, be aware that they still don't seem to understand Debian package version handling. The docker* packages will not be automatically considered for upgrade, as 5:20.10.6~3-0~debian-buster is considered newer than 5:20.10.6~3-0~debian-bullseye:
% apt-cache policy docker-ce
  docker-ce:
    Installed: 5:20.10.6~3-0~debian-buster
    Candidate: 5:20.10.6~3-0~debian-buster
    Version table:
   *** 5:20.10.6~3-0~debian-buster 100
          100 /var/lib/dpkg/status
       5:20.10.6~3-0~debian-bullseye 500
          500 https://download.docker.com/linux/debian bullseye/stable amd64 Packages
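One way to work around this is an apt preferences pin that forces the bullseye build; the package glob and version pattern below are assumptions, so double-check them against the apt-cache policy output on your system (alternatively, a one-off apt install with the explicit ~debian-bullseye version string also does the trick):

```
Package: docker-ce*
Pin: version 5:20.10.*~3-0~debian-bullseye
Pin-Priority: 1001
```

Dropped into /etc/apt/preferences.d/, a priority above 1000 allows apt to pick the bullseye build even though it compares as a downgrade.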
Vagrant is available in version 2.2.14; the package from upstream works perfectly fine on bullseye as well. If you're relying on VirtualBox, be aware that upstream doesn't provide packages for bullseye yet, but the package from Debian/unstable (v6.1.22 as of 2021-05-27) works fine on bullseye (VirtualBox hasn't been shipped with stable releases for quite some time due to lack of cooperation from upstream on security support for older releases, see #794466). If you rely on the virtualbox-guest-additions-iso and its shared-folders support, you might be glad to hear that v6.1.22 made it into bullseye (see #988783), properly supporting more recent kernel versions like those present in bullseye. debuginfod There's a new service debuginfod.debian.net (see debian-devel-announce and Debian Wiki), which makes the debugging experience way smoother. You no longer need to download the debugging Debian packages (*-dbgsym/*-dbg), but can instead fetch them on demand, by exporting the following variables (before invoking gdb or alike):
% export DEBUGINFOD_PROGRESS=1    # for optional download progress reporting
% export DEBUGINFOD_URLS="https://debuginfod.debian.net"
BTW: if you can't rely on debuginfod (for whatever reason), I'd like to point your attention towards find-dbgsym-packages from the debian-goodies package. Vim Sadly Vim 8.2 once again makes a change towards bad defaults (hello, mouse behavior!): when incsearch is set, it now also applies to :substitute. This makes it veeeeeeeeeery annoying when running something like :%s/\s\+$// to get rid of trailing whitespace characters, because if there are no matches it jumps to the beginning of the file and then back, sigh. To get the old behavior back, you can use this:
au CmdLineEnter : let s:incs = &incsearch | set noincsearch
au CmdLineLeave : let &incsearch = s:incs
rsync rsync was updated from v3.1.3 to v3.2.3. It provides various checksum enhancements (see option --checksum-choice). We got new capabilities (hardlink-specials, atimes, optional protect-args, stop-at, no crtimes) and the addition of the zstd and lz4 compression algorithms. And we got new options: OpenSSH OpenSSH was updated from v7.9p1 to v8.4p1, so if you're interested in all the changes, check out the release notes between those versions (8.0, 8.1, 8.2, 8.3 + 8.4). Let's highlight some notable new features: Misc unsorted
