Search Results: "thep"

18 April 2024

Thomas Koch: Rebuild search with trust

Posted on January 20, 2024
Finally there is a thing people can agree on: apparently, Google Search is not good anymore. And I'm not the only one thinking about decentralization to fix it: "Honey, I federated the search engine - finding stuff online post-big tech" was a lightning talk at the recent Chaos Communication Congress. The speaker, however, did not mention that there have already been many attempts at building distributed search engines. So why do I think that such an attempt could finally succeed? My definition of success is:
A mildly technical computer user (able to install software) has access to a search engine that provides them with superior search results compared to Google for at least a few predefined areas of interest.
The exact algorithm used by Google Search to rank websites is a secret even to most Googlers. Still, it is clear that it relies heavily on big data: billions of queries, a comprehensive web index and user behaviour data. None of this is available to us. A distributed search engine, however, can instead rely on user input. Every admin of a node seeds the node's ranking with their personal selection of trusted sites. They connect their node with the nodes of people they trust. This results in a web of (transitive) trust, much like PGP. For comparison, imagine you are searching for something in a world without computers: you ask the people around you, and they probably forward your question to their peers. I already had a look at YaCy. It is active, somewhat usable and has a friendly maintainer. Unfortunately I consider the codebase to show its age: it takes a lot of time for a newcomer to find their way around, and it contains a lot of cruft. Nevertheless, YaCy is a good example that decentralized search software can be built even by a small team or just one person. I myself started working on a software in Haskell and keep my notes here: Populus:DezInV. Since I'm learning Haskell along the way, there is nothing there to see yet. Additionally I took a yak-shaving break to learn Nix. By the way: DuckDuckGo is not the alternative. And while I would encourage you to also try Yandex for a second opinion, I don't consider this a solution either.
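To make the idea concrete, here is a toy sketch (my illustration, not anything from Populus:DezInV, whose code does not exist yet) of how a node might merge its admin's hand-picked seed ranking with rankings received from directly trusted peer nodes, damping the peers' scores so that transitive trust never outranks the admin's own choices:

# Toy model: scores are in [0,1]; a peer's recommendation is damped
# before merging, so trust decays with distance, much like the PGP
# web-of-trust analogy above. All names and weights here are made up.
from collections import defaultdict

DAMPING = 0.5

def merge_trust(own_seeds, peer_rankings):
    # own_seeds: {site: score} hand-picked by this node's admin
    # peer_rankings: {peer: {site: score}} fetched from trusted nodes
    scores = defaultdict(float, own_seeds)
    for ranking in peer_rankings.values():
        for site, score in ranking.items():
            # keep the best damped recommendation seen so far,
            # but never lower the admin's own seed scores
            scores[site] = max(scores[site], DAMPING * score)
    return dict(scores)

mine = {"debian.org": 1.0}
peers = {"alice": {"lwn.net": 0.9}, "bob": {"lwn.net": 0.8}}
print(merge_trust(mine, peers))  # {'debian.org': 1.0, 'lwn.net': 0.45}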

31 March 2024

Russell Coker: Links March 2024

Bruce Schneier wrote an interesting blog post about his workshop on reimagining democracy and the unusual way he structured it [1]. It would be fun to have a security conference run like that! Matthias wrote an informative blog post about Wayland, "Wayland really breaks things... Just for now", which links to a blog debate about the utility of Wayland [2]. Wayland seems pretty good to me. Cory Doctorow wrote an insightful article about the AI bubble comparing it to previous bubbles [3]. Charles Stross wrote an insightful analysis of the implications if the UK brought back military conscription [4]. Looks like the era of large armies is over. Charles Stross wrote an informative blog post about the Worldcon in China, covering issues of vote rigging for location, government censorship vs awards, and business opportunities [5]. The Paris Review has an interesting article about speaking to the CIA's Creative Writing Group [6]. It doesn't explain why they have a creative writing group that has some sort of semi-official sanction. LongNow has an insightful article about the threats to biodiversity in food crops and the threat that poses to humans [7]. Bruce Schneier and Albert Fox Cahn wrote an interesting article about the impacts of chatbots on human discourse [8]. If it makes people speak more precisely then that would be great for all Autistic people!

29 July 2023

Shirish Agarwal: Manipur, Data Leakage, Aadhar, and IRCv3

Manipur: Lots of news from Manipur. It seems the killings haven't stopped. In fact, there was a huge public rally in support of the rapists and murderers, as reported by the Imphal Free Press. The ruling Govt., being BJP both at the Center and in the State, continues to remain mum. Both Internet shutdowns have been criticized, seemingly with no effect on the Government. Their own MLA was attacked, but they have chosen to be silent about that too. The opposition demanded that the PM come to both houses and speak, but he has chosen to remain silent. Meanwhile quite a few bills were passed without any discussion. If it were not for the viral videos, nobody would have come to know of anything. Internet shutdowns impact women disproportionately, as more videos of assaults show. Of course, that gentleman has been arrested under Section 66A, as I shared in the earlier blog post. In any case, in the last few years this Government has chosen to pass most of its bills without any discussion. Some of the bills I will share below. The attitude of this Govt. can be seen in this cartoon
The above picture shows the M.P. Rahul Gandhi, disqualified because he asked what the relationship is between Adani and Modi. The other is Mr. Modi, the Prime Minister, who refuses to enter and address the Parliament. Prem Panicker shares how chillingly we have come to this stage, where even after rapes we are silent

Data Leakage: According to most BJP followers this is not a bug but a feature of this Government. Sucheta Dalal of Moneylife shared how data leakage has been happening at the highest levels of the Government. The leakage is happening at the ministerial level, because unless the minister or his subordinate passes a certain startup, others cannot come to know. As shared in the article, while the official approval may take 3-4 days, within hours other entities start congratulating. That means they know that the person or persons have been approved. While reading this story, the first thought that immediately crossed my mind was data theft and how easily that would have been done. There was a time when people would be shocked by articles such as the above and demand action, but sadly even if people know and want to do something, they feel powerless to do anything

PAN Linking and Aadhar: Last month the GOI made linking PAN to Aadhar mandatory. This goes against the judgement given by the Supreme Court in September 2018. Around the same time, Moneylife had reported on how the info on Aadhar cards is available and the consequences of that. But to date nothing has happened except the GOI shrugging. In the last month, 13 crore+ PAN users, including me, were affected by it. I had tried to actually delink the two, but none of the banks co-operated. Aadhar actually has a number of downsides; most people know about the AEPS fraud that has been committed time and time again. I have shared in previous blog posts the issue with biometric data, as well as master biometric data that can be and is being used for fraud. The GOI is either ignorant or doesn't give a fig as to what happens to you, citizen of India. I could go on and on, but it would result in nothing constructive, so I will stop now

IRCv3: I had been enthused when I heard about IRCv3. While it was founded in 2016, it sort of came into its own around 2020. I did try Matrix, or rather riot-web, which went through a number of names before finally settling on Element. While I do have the latest build, 1.11.36, Element just hasn't been workable for me. It is too outsized and occupies much more screen real estate than other IMs (Instant Messengers), and I cannot resize it properly as I can for, say, qBittorrent or any other app. I had filed a couple of bugs on it, but because it apparently only affects me, nothing happened afterwards. But that is not the whole story at all. Because of DebConf happening in India, and that too in Kochi, I decided to try out other tools to see how IRC is doing. The Debian wiki page shares a lot about IRC clients and is also helpful in sharing stats from popcon (popularity-contest, thanks to whoever did that), and it helped me pick two of the most popular clients, Pidgin and Hexchat, both of which show higher numbers. This might simply be because both get pulled in when you install the desktop version, or they might be popular in themselves; I have no idea one way or the other. But still, I wanted to see what sort of experience I could expect from both of them in 2023. One of the other things I noticed is that Pidgin is not a participating organization in IRCv3 while Hexchat is. Before venturing in, I also decided to take a look at oftc.net. I came to know that for some time now OFTC has been using web verification. I didn't see much of a difference between hCaptcha and gCaptcha other than the fact that the images looked more like oil paintings than anything else. While I could easily figure out the odd man (or men) out, I wonder how a person with low or no vision would pass that. Also, much of our world is contextual; figuring out who the odd one is, or which ones are, could be tricky. I do not have answers other than to say more work needs to be done by OFTC in that area. I did get a link that I verified. But I am getting ahead of the story. Another thing I understood is that for some reason OFTC is also not participating in IRCv3; I have no clue why not :(

Account Registration in Pidgin and Hexchat: This is the biggest pain point in both. I failed to register via either Pidgin or Hexchat; I couldn't find a way in either client to register my handle. I have had on/off relationships with IRC over the years, the biggest issue being, IIRC, that if you stop using your handle for a month or two, others can use it. IIRC, every couple of months or so OFTC releases the dormant ones. Matrix/Vector has done quite a lot in that regard, but that's a different thing altogether, so for the moment I will keep that aside. So, how to register on the network? This is where webchat.oftc.net comes in. You get a quaint 1970s IRC window (probably emulated) where you call NickServ to help you; it is one of the half a dozen bots that help run IRC. So the first thing you need to do is /msg nickserv help - asking NickServ what services it has - and NickServ shares the services it offers. The one you are looking for is register: /msg nickserv register. Both commands tell you what you need to do, as can be seen here
Let's say you are XYZ and your e-mail address is xyz@xyz.com (just a throwaway ID I am using to show how the process is done). Also assume your password is something like 1234xyz;0x. I have shared about APG (Advanced Password Generator) before, so you could use that to generate all sorts of passwords for yourself. So the next step would be /msg nickserv register 1234xyz;0x xyz@xyz.com The thing to remember is that you need to be sure the email is valid and in your control, as it will generate a link with an hCaptcha. Interestingly, their accessibility signup fails or errors out: I just entered my email and it errored out. Anyway, back to it. Even after completing the puzzle, even with a valid username and password, neither Pidgin nor Hexchat would let me in, and neither client was helpful in figuring out what was going wrong. At this stage, I decided to see if the IRCv3 specs would help in any way and came across this. One would have thought that this is one of the more urgent things that need to be fixed, but for reasons unknown it's still in draft mode. Maybe the participants are not in consensus, no idea. Unfortunately, it seems that the participants of IRCv3 have chosen a sort of closed working model, as the channel is restricted. The only notes of any consequence are being shared by Ilmari Lauhakangas from Finland. Apparently, Ilmari is also a LibreOffice hacker. It is possible that there is or has been a lot of drama and that's why things are the way they are; either way, it doesn't tell me when this will be fixed, if ever. For people who are on mobiles and whatnot, without Element, it would be 10x harder. Update: Saw this discussion on GitHub. Don't see a way out. It seems I will be unable to be part of DebConf Kochi 2023. Best of luck to all the participants, and please share as much as possible of what happens during the event.

19 July 2023

Shirish Agarwal: RISC-V, Chips Act, Burning of Books, Manipur

RISC-V Motherboard, SBC: While I didn't want to be, a part of me is hyped about this motherboard. It will probably be launched somewhere around November. There are obvious issues with it, the first being that, unlike regular motherboards, you can't upgrade it as you normally would: you can't upgrade the memory, and you can't upgrade the CPU (although new versions of instructions could be uploaded, similar to BIOS updates). As the hardware is integrated (the quad-core SiFive Performance P550 core complex), it would really depend. If the final pricing is around INR 4-5k then it may sell handsomely, provided there are people to push it and provide support around it. Add a 500 GB or 1 TB SSD and a cheap display unit and you could use it anywhere, although, as the name suggests, it's more for tinkering. Another board that could perhaps be of more immediate use would be from Beagleboard. They launched one a couple of days back called the BeagleV-Ahead. Again, cost is going to be a concern. Just a year before the pandemic the BeagleBone Black (BB) used to cost in the sub-4k range; today it costs 8k+ for the end user, more than twice the price. How much Brexit is to blame for this and how much Indian customs, we will never know. The RS Group that is behind that shop is headquartered in the UK. As said before, we do not know the price of either board, as it will probably take a few months for the V-Ahead to worm its way into the Indian market, maybe another 6 months or so. Even so, with the limited info on both boards, I am tilting more towards the HiFive one. We should come to know about the boards in, say, 3-5 months' time.

CHIPS Act: I had shared about the CHIPS Act a few times here as well as on SM. Two articles tell how the CHIPS Act is more of a political tool, an industrial defence policy, rather than just business as most people tend to think.

Cancellation of Books, Book Burning etc.: Almost 2,400 years ago, Plato wrote the Republic, and perhaps the most famous passage within it is the Allegory of the Cave. That allegory is used again and again in myriad ways, mostly in science fiction though, and mostly in utopian and dystopian movies, web series etc. I did share how books are being canceled in the States, also a bit here. But the most damning thing has happened throughout history: huge quantities of books burned, almost all for politics. Part of it has been neglect as well, as this Time article shares. What we have lost and continue to lose is just priceless. Every book has a grain of truth in it, some more, some less, but all equally enjoyable. Most harmful is the neglect of books, and that is more true today than at any other time in history. Kids today have a wide variety of tools to keep themselves happy or occupied, from anime to VR to gaming, and the list goes on. In that scenario, how can the humble book compete? People think of the Kindle, but most e-readers like the Kindle are nothing but obsolescence by design. I have tried out the Kindle a few times but find it a bit on the flimsy side. Books are much better, IMHO, or call me old-school. While there are many advantages, one of the things that I like about books is that you can easily put yourself in the place of either the protagonist or the antagonist, or somewhere in the middle, and think of the possible scenarios wherever you are in a particular book. I could go on, but it would be a blog post or two in itself. Till later. Happy Reading.

Update on Manipur: Extremely horrifying visuals, articles and statements continue to emanate from Manipur. Today, 19th July 2023, just a couple of hours back, a video surfaced showing two Kuki women stripped naked, with Meitei men touching their private parts. Later on, we came to know that this was in response to disinformation spread by the Meiteis about a few women being raped, although no documentary evidence of that ever surfaced, no names, nothing. While I don't want to share the video, I will share the statement issued by the Kuki-Zo tribal community on it. The Print gives a bit more context to what has been going on.
Update, a few hours later: The Print also shared more context, from about six days ago. The reason we saw the video only now is that for the last 2.5 months Manipur has been under an Internet shutdown, so the videos got uploaded only now. There was a huge backlash from the Twitter community, and the GOI ordered the Manipur Police to issue this Press Release yesterday night, or just a few hours before, with yesterday's time-stamp.
IndianExpress shared an article that states that while an FIR had been registered immediately, there have been no arrests so far, and this when you can see the faces of all the accused. Not one of them tried to hide their face behind a mask or anything. So, if the police wanted, they could have easily identified who they are. They know which community the accused belong to; they even know where they came from. If they wanted to, they could have easily used mobile data and triangulation to find the accused and their helpers. So, it does seem to be an attempt to whitewash and protect a certain community while letting it prey on the other. Another piece of news that did come in: because of the furious reaction on Twitter, YouTube has constantly been taking down the video, as some people, more so from the majoritarian community, are getting a sort of high from it and making lewd remarks. Twitter has been somewhat quick when people make lewd remarks against the two women. Quite a bit of the above seems like a cover-up. Lastly, apparently the GOI has agreed to a conversation about it in the Lok Sabha, but without any voting or passing any resolutions as of right now. I will update as and when things change. Update: Smriti Irani, the Women and Child Development Minister, gave the weakest statement possible
As can be noticed, she said sexual assault rather than rape. The women were under police custody for safety when they were whisked away by the mob; no mention of that. She spoke to the Chief Minister, who has been publicly known as one of the provocateurs or instigators of the whole thing. The CM had publicly called the Kukis and Nagas foreigners, although both claim to have been residing there for thousands of years and apparently have documentary evidence of the same. It is also not clear who is doing the condemning here. No word of support for the women, no offer of intervention. Why is she the Minister of Women and Child Development if she can't use harsh words or give support to women who have gone through, and are going through, horrific things? Update: CM Biren Singh's statement after the video surfaced
This tweet contradicts the statements made by Mr. Singh a couple of months ago. At that point in time, Mr. Singh had said that the NIA, State Intelligence Departments etc. were giving him minute-to-minute reports on the ground situation. The Police itself has suo-moto (on its own) powers to investigate and apprehend criminals for any crime. In fact, the Police can call anybody in for questioning in relation to any crime and question them for up to 48 hours before charging them. In fact, many cases have been lodged where innocent persons have been framed, or have served much more time in jail than the crime they are alleged to have committed would warrant. For e.g., just a few days ago there was a media report of a boy who has been in jail for 3 years; his alleged crime, stealing a mere INR 200/- to feed himself. The court doesn't have time to listen to him yet. And there are millions like him. The Quint eloquently shares the tale of how both the State and the Centre have been explicitly complicit in the incidents ravaging Manipur. In fact, what has been shared in the article is very true as far as greed for land is concerned. Just a couple of weeks back there were a ton of floods emanating from Uttarakhand and elsewhere. Just before the flooding began, what the CM was doing can be seen here. Apart from the newspapers I have shared and the online resources, most of the mainstream media has been silent on the above. In fact, they were silent on the Manipur issue until the said video came into the limelight. Just now, in the Lok Sabha, everybody is present except the Prime Minister and the Home Minister. The PM did say that the law will take its own course, but that's about it. Again no support for the women concerned. Update: The CJI (Chief Justice of India) has taken suo-moto cognizance and has warned both the State and Centre to move quickly, otherwise the court will take the matter into its own hands.
Update: Within 2 hours of the CJI taking suo-moto cognizance, they arrested one of the main accused, Heera Das
The above tells you why the ban on the Internet was put in place to begin with: they wanted to cover it all up. Of all the celebs, only one could find a bit of spine, a bit of backbone, to speak about it; all the rest stayed mum
Just imagine: one of the women is around my age, while the younger one could have been a daughter if I had married on time, or a younger sister for sure. If I ever came face to face with them, I just wouldn't be able to look them in the eye. Even their whole whataboutery is built on a sham. In their view, the Kukis are from Burma or of Burmese descent, all of which could be easily tested by DNA. But let's leave that for a second and take their own argument that the Kukis are Burmese: their idea of Akhand Bharat stretches all the way to Burma (now called Myanmar). They want all the land but have no idea what to do with the citizens living on it. Even after the video, the whataboutery isn't stopping, which shows how much hatred there is, and they don't realize that they too will be victims of the same venom one day or the other. Update: The opposition was told there would be a debate on Manipur. The whole day went by, no debate. That's the shamelessness of this Govt. Update 20th July 19:25: The Center may or may not act against the perpetrators, but they will act against Twitter, which showed the crime. Talk about shooting the messenger
We are now in the last stage. In 2014, we were at stage 6.

9 July 2023

Russell Coker: Matrix

Introduction: In 2020 I first set up a Matrix [1] server. Matrix is a full-featured instant messaging protocol which requires a less stringent definition of "instant": messages being delayed for minutes aren't that uncommon in my experience. Matrix is a federated service where the servers all store copies of the room data, so when you connect your client to its home server it gets all the messages that were published while you were offline. It is widely regarded as being IRC but without the need to be connected all the time. One of its noteworthy features is support for end-to-end encryption (so the server can't access cleartext messages from users) as a core feature. Matrix was designed for bridging with other protocols, the most well known of which is IRC. The most common Matrix server software is Synapse, which is written in Python and uses a PostgreSQL database as its backend [2]. My tests have shown that a lightly loaded Synapse server with less than a dozen users and only one or two active users will have noticeable performance problems if the PostgreSQL database is stored on SATA hard drives. This seems like the type of software that wouldn't have been developed before SSDs became commonly affordable. The matrix-synapse package is in Debian/Unstable and in the backports repositories for Bullseye and Buster. As Matrix is still being very actively developed you want to have a recent version of all related software, so Debian/Buster isn't a good platform for running it; Bullseye or Bookworm are the preferred platforms. Configuring Synapse isn't really hard, but there are some potential problems. The first thing to do is to choose the DNS name: you can never change it without dropping the database (a fresh install of all software, with no documented way of keeping user configuration), so you don't want to get it wrong. Generally you will want the Matrix addresses at the top level of the domain you choose. When setting up a Matrix server for my local LUG I chose the top level of their domain, luv.asn.au, as the DNS name for the server. If you don't want to run a server then there are many open servers offering free accounts.

Server Configuration: Part of doing this configuration required creating the URL https://luv.asn.au/.well-known/matrix/client with the following contents so clients know where to connect. Note that you should not set up Jitsi sections without first discussing it with the people who run the Jitsi server in question.
{
  "m.homeserver": {
    "base_url": "https://luv.asn.au"
  },
  "jitsi": {
    "preferredDomain": "jitsi.perthchat.org"
  },
  "im.vector.riot.jitsi": {
    "preferredDomain": "jitsi.perthchat.org"
  }
}
Also needed is the URL https://luv.asn.au/.well-known/matrix/server, for other servers to know where to connect:
{
  "m.server": "luv.asn.au:8448"
}
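A quick way to sanity-check both files once they are in place (my suggestion, assuming curl is installed) is to fetch them and confirm the JSON comes back exactly as written:

curl https://luv.asn.au/.well-known/matrix/client
curl https://luv.asn.au/.well-known/matrix/server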
If the base_url or the m.server points to a name that isn't configured then you need to add it to the web server configuration. See section 3.1 of the documentation about well-known Matrix client fields [3]. The SE Linux specific parts of the configuration are to run the following commands, as the Bookworm and Bullseye SE Linux policies have support for Synapse:
setsebool -P httpd_setrlimit 1
setsebool -P httpd_can_network_relay 1
setsebool -P matrix_postgresql_connect 1
To configure Apache you have to enable proxy mode and SSL with the command a2enmod proxy ssl proxy_http, add the line Listen 8448 to /etc/apache2/ports.conf, and restart Apache. The command chmod 700 /etc/matrix-synapse should probably be run to improve security; there's no reason for less restrictive permissions on that directory. In the /etc/matrix-synapse/homeserver.yaml file the macaroon_secret_key is a random key for generating tokens. To use the matrix.org server as a trusted key server and not receive warnings, put the following line in the config file:
suppress_key_server_warning: true
A line like the following is needed to configure the baseurl:
public_baseurl: https://luv.asn.au:8448/
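Collected in one place, the Apache-side steps described above would look something like this (a sketch only; run as root, paths are Debian defaults):

a2enmod proxy ssl proxy_http
echo "Listen 8448" >> /etc/apache2/ports.conf
systemctl restart apache2
chmod 700 /etc/matrix-synapse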
To have Synapse directly accept port 8448 connections you have to change bind_addresses in the first section of listeners to the global listen IPv6 and IPv4 addresses. The registration_shared_secret is a password for adding users. When you have set that you can write a shell script to add new users, such as:
#!/bin/bash
# usage: matrix_new_user USER PASS
# quote the arguments so passwords containing ';' or spaces survive the shell
synapse_register_new_matrix_user -u "$1" -p "$2" -a -k THEPASSWORD
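For reference, the bind_addresses change described above might look roughly like this in homeserver.yaml (a sketch only; the exact surrounding fields vary between Synapse versions):

listeners:
  - port: 8448
    type: http
    tls: true
    bind_addresses: ['::', '0.0.0.0']
    resources:
      - names: [client, federation]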
You need to set tls_certificate_path and tls_private_key_path to appropriate values, usually something like the following:
tls_certificate_path: "/etc/letsencrypt/live/www.luv.asn.au-0001/fullchain.pem"
tls_private_key_path: "/etc/letsencrypt/live/www.luv.asn.au-0001/privkey.pem"
For the database section you need something like the following, matching your PostgreSQL setup:
database:
  name: "psycopg2"
  args:
    user: WWWWWW
    password: XXXXXXX
    database: YYYYYYY
    host: ZZZZZZ
    cp_min: 5
    cp_max: 10
You need to run psql commands like the following to set it up:
create role WWWWWW login password 'XXXXXXX';
create database YYYYYYY with owner WWWWWW ENCODING 'UTF8' LOCALE 'C' TEMPLATE 'template0';
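Before starting Synapse it may be worth confirming that the credentials work (same placeholder names as above; this assumes password authentication is permitted in pg_hba.conf):

psql -h ZZZZZZ -U WWWWWW -d YYYYYYY -c 'SELECT 1;'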
For the Apache configuration you need something like the following for the port 8448 web server:
<VirtualHost *:8448>
  SSLEngine on
...
  ServerName luv.asn.au
  AllowEncodedSlashes NoDecode
  ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
  ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>
Also you must add the ProxyPass section to the port 443 configuration (the server that is probably doing other, more directly user-visible things) for most (all?) end-user clients:
  ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
This web page can be used to test listing rooms via federation without logging in [4]. If it gives the error "Can't find this server or its room list" then you must set allow_public_rooms_without_auth and allow_public_rooms_over_federation to true in /etc/matrix-synapse/homeserver.yaml. The Matrix Federation Tester site [5] is good for testing new servers and for tests after network changes.

Clients: The Element (formerly known as Riot) client is the most common [6]. The following APT repository will allow you to install Element via apt install element-desktop on Debian/Buster.
deb https://packages.riot.im/debian/ default main
The Debian backports repository for Buster has the latest version of Quaternion; apt install quaternion should install that for you. Quaternion doesn't support end-to-end encryption (E2EE) and also doesn't seem to have good support for some other features like being invited to a room. My current favourite client is SchildiChat on Android [7], which has a 24*7 notification message to reduce the incidence of Android killing it. Eventually I want to move to a PinePhone or Librem 5 for all my phone use, so I need to find a full-featured Linux client that works on a small screen. Comparing to Jabber: I plan to keep using Jabber for alerts because it really does instant messaging; it can reliably get a message to me within a matter of seconds. Also there is a selection of command-line clients for Jabber to allow sending messages from servers. When I first investigated Matrix there was no program suitable for sending messages from a script, and the libraries for the protocol made it unreasonably difficult to write one. Now there is a Matrix client written in shell script [8] which might do that (see the curl sketch at the end of this post). But the delay in receiving messages is still a problem. Also the Matrix clients I've tried so far have UIs that are more suited to serious chat than to quickly reading a notification message.

Bridges: Here is a list of bridges between Matrix and other protocols [9]. You can run bridges yourself for many different messaging protocols including Slack, Discord, and Messenger. There are also bridges run for public use for most IRC channels. Here is a list of integrations with other services [10]; this is for interacting with things other than IM systems, such as RSS feeds, polls, and other things. This also has some frameworks for writing bots.

More Information: The Debian wiki page about Matrix is good [11]. The view.matrix.org site allows searching for public rooms [12].
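On the scripting point above: a one-off message send is reasonably easy with plain curl against the standard Matrix client-server API. This is a sketch only; ROOM_ID and ACCESS_TOKEN are placeholders, and older servers use /r0/ in place of /v3/.

TXN=$(date +%s)   # transaction IDs just need to be unique per access token
curl -X PUT \
  -H 'Authorization: Bearer ACCESS_TOKEN' \
  -d '{"msgtype": "m.text", "body": "backup finished"}' \
  "https://luv.asn.au/_matrix/client/v3/rooms/ROOM_ID/send/m.room.message/$TXN"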

6 June 2023

Shirish Agarwal: Odisha Train Crash and Coverup, Demonetization 2.0 & NHFS-6 Survey

Just a few days back we came to know about the horrific train crash that happened in Odisha (Orissa). There are some things that are known and some things that can be inferred by observation. Sadly, it seems the incident is going to be covered up. Some of the facts that have not been contested in the public domain are that there were three lines: one loop line, on which the goods train was standing, and an up and a down line. Apparently, the signalling system and the inter-locking system had issues, as highlighted by an official about a month back. That letter, thankfully, is in the public domain and I have downloaded it as well. It's a letter that runs to 4 pages. The RW is incensed that the letter got leaked and is in the public domain. They are blaming everyone and espousing conspiracy theories rather than taking the minister to task. Incidentally, the Minister currently holds three ministries: the Ministry of Communication, the Ministry of Electronics and Information Technology (MEIT), and the Railways Ministry. Each ministry is important in itself and has revenues of more than 6 lakh crore rupees. How he is able to do justice to all three ministries is beyond me. The other thing is that funds both for safety and for relaying of tracks have been either not sanctioned or left unutilized. In fact, the CAG and the Railway brass had shared how derailments have increased and vacancies remain unfulfilled, but they were given no importance. In fact, not talking about safety in the recently held Chintan Shivir (brainstorming session) tells you how serious the Govt. is about safety. In fact, most of the programme was on high-speed rail, which is a white elephant. I have shared a whitepaper done by the RW in the U.S. that tells how high-speed rail doesn't make economic sense, and that is an economy 20+ times the size of the Indian economy. Even the Chinese are stopping with HSR as it doesn't make economic sense. Incidentally, air fares again went up 200% yesterday. Somebody shared paying in the region of 20k+ for an air ticket from their place to Bangalore. Coming back to the story itself: the goods train was on the loop line. Some say it was a little bit on the outer side, some say otherwise, but it is established that it was on the loop line. This is standard behavior on and around railway stations around the world, and whether it was on the inner or outer doesn't make much of a difference to what happened next. The first train that collided with the goods train was the 12864 (SMVB-HWH) Yashwantpur-Howrah Express, which derailed onto the next track, where from the opposite direction the 12841 (Shalimar-Bangalore) Coromandel Express was coming. Now they have said that around 300 people died, and that seems to be part of the cover-up. Both trains are long trains, having 23-odd coaches each. Even if you have reserved tickets you have 80-odd people in a coach, and usually in most of these trains it is at least double that. (A lot of money goes to the TC and above; corruption.) The railway fares have gone up enormously, but that's a question for perhaps another time. So at the very least, we could be looking at more than 1,000 people having died. The numbers are being under-reported so that nobody has to take responsibility. The Railways itself has said that it is unable to identify 80% of the people who died. This means that 80%, or at least a majority of them, were unreserved ticket holders.
There have been disturbing images of how bodies have been flung onto tractors and whatnot, to be either buried or cremated without a thought. We are in peak summer season, so bodies will start to rot within 24-48 hours. No arrangements were made to cool the bodies and record identifying marks or other information. The whole thing was done in a very callous manner, not giving dignity even to those who died through no fault of their own. The dissent note also suggests that a cover-up is in the picture. Apparently, India doesn't have, nor does it feel the need for, something like the NTSB, which the U.S. used when it hauled up both the plane manufacturer (Boeing) and the FAA when the 737 MAX went down due to improper data collection and sharing of data with pilots. And with no accountability being fixed on the Minister or any of the senior staff, some junior staff person may be fired; perhaps the same official who actually told them about the signal failures almost 3 months back. There were and are also some reports that some jugaadu/temporary fixes were applied to the signalling and inter-locking just before the incident happened. I do not know, nor can I confirm, one way or the other whether the above happened. I can however point out that if such a thing happened, then usually a traffic block is announced and all traffic on those lines is stopped. This has been the practice for decades; traveling between Mumbai and Pune multiple times over the years, I am aware of traffic blocks. If some repair work was going on and couldn't be completed within the time-frame, then that may well have contributed to the accident. There is also a bit of muddying of the waters where it is being said that one of the trains was 4 hours late; which one, the stories conflict. On top of the whole thing, they have handed the case to the CBI to investigate and are hinting at sabotage. They also tried to paint a religious structure as a mosque; it later turned out to be a temple. The RW says it was done by Muslims as it was a Friday, not taking into account, as shared before, that most railway maintenance work is usually done between Friday and Monday. This is a practice followed not just in India but the world over. There has also been a move over the last decade to remove wooden sleepers and lay concrete sleepers, which, unlike the wooden ones, do not expand and contract as much and last much longer. Funds had been earmarked (although lower than in the last few years) but not yet spent. As we know, an accident happens when all the holes in the cheese line up. Fukushima is a great example of that: no sea wall even though Japan is no stranger to tsunamis; external power at the same level as the plant (10 meters above sea level); no training for cascading-failure scenarios, which is what happened. The Days mini-series shares some, but not all, of the faults that happened at Fukushima and the Govt. response to them. There is a difference though: the Japanese Prime Minister resigned on moral grounds. Here, neither the PM nor the Minister will be resigning, on moral grounds or otherwise :(. Zero accountability, and that was partly a natural disaster; here it's man-made. In fact, both the Minister and the Prime Minister arrived with their entourages and did a PR blitzkrieg showing how concerned they are. Within 50 hours, the lines were cleared. The part-time Railway Minister shared that he knows the root cause, and then a few hours later gave the case to the CBI. All are saying: wait for the inquiry report.
To date, no accident under this Govt. has produced an investigation report. And even if one did, I am sure it would be a whitewash, as in the case of Adani, which I shared in the previous blog post. Incidentally, it is reported that Adani paid off some of its debt, but when questioned as to where they got the money, complete silence on that part :(. As can be seen, cover-up after cover-up. FWIW, the Coromandel Express is known as the migrant train and so has a huge number of passengers; the other one it collided with is known as the sick train, as a huge number of cancer patients use it to travel to Chennai and back

Demonetization 2.0: A few days back, India announced demonetization 2.0. Surprised? Don't be. Apparently, the INR 2k/- note is being used for corruption and Mr. Modi is unhappy about it. He actually never liked the INR 2k/- note but was told that it was needed; who told him, we don't know to date. At that time the RBI Governor was Mr. Urjit Patel, who had not spoken of an INR 2k/- note; he had said that a redesigned INR 1k/- note would come onto the market. That has yet to happen. What has happened is that, just as with the INR 500/- and INR 1k/- notes before, the RBI will no longer honor the INR 2k/- note. Obviously, this has made our neighbors angry, namely Nepal, Sri Lanka, Bhutan etc., who do some trading with us. Two Deccan Herald columns shed light on it. Apparently, India wants the rupee to be a world reserve currency but doesn't want to play by the rules everyone else follows. It was pointed out that both the U.S. and Singapore have retired currency notes, but they honor that promise even today. The Singapore example, being a bit closer (as it's in Asia), is perhaps a bit more relevant than the U.S. one. Singapore retired the SGD $10,000 as of 2014, but even in 2022 it remains legal tender. They also retired the SGD $1,000 in 2020, but it still remains legal tender.

So let's have a fictitious example to illustrate what Singapore has done. Let's say I go to Singapore, rent a flat, and find a $1,000 note in that house somewhere. Both practically and theoretically, I could go down to any of the banks and get the amount transferred to my wallet or bank account, and nobody would question it, because they have promised the same. Interestingly, the Singapore Dollar has been pretty resilient against the USD for quite a number of years vis-a-vis other Asian currencies. Most of the INR 2k/- notes were also found and exchanged in Gujarat within just a few days (the PM and HM's state). I am sure you can see the mental gymnastics that the RW indulge in :(. What is sadder is that most of the people who try to defend it can't make sense one way or the other and start to name-call and get personal, as they have nothing else

Disability questions dropped in NHFS-6: I just came to know today that in the upcoming National Family Health Survey-6 the disability questions are being dropped. Why is this important? To put it simply, if you don't have numbers, you won't and can't make policies for those people. India is one of the worst countries to live in if you are disabled. The easiest illustration is that most railway platforms are not level with the train coaches. Just as Mick Lynch shares in the UK, the same is pretty much true for India too. Meanwhile in Europe, they do make an effort to keep things level, so even disabled people have some dignity. If your public transport is sorted, then people will want much more, and you will be obligated to provide for them, as they are citizens. Here, we have had many reports of women being sexually molested while being transferred from platform to coach, irrespective of their age. The main takeaway is that if you do not record their voice, you won't make policies for them. They won't go away, but you will make life hell for them. One thing to keep in mind is that most people assume the disabled have been disabled from birth. This may or may not be true. For e.g., in the above triple railway accident, there are bound to be newly disabled people who were healthy before the accident. The most common accident is the road accident, some involving pedestrians and vehicles or both; Ministry of Road Transport data says 4,00,000 people sustained injuries in 2021 alone in road mishaps. And this is in a country where even accidents are highly under-reported, for more than one reason. The biggest reason, especially for 2- and 4-wheelers, is the increased premium they would have to pay after an accident, so people usually settle with the other party and pay off the Traffic Inspector. Sadly, I haven't read a new book, although there are a few books I'm looking forward to. People living in India and neighboring countries, please be careful, as more heat waves are expected. Till later.

21 September 2021

Russell Coker: Links September 2021

Matthew Garrett wrote an interesting and insightful blog post about the license of software developed or co-developed by machine-learning systems [1]. One of his main points is that people in the FOSS community should aim for less copyright protection. The USENIX ATC 21/OSDI 21 Joint Keynote Address titled "It's Time for Operating Systems to Rediscover Hardware" has some insightful points to make [2]. Timothy Roscoe makes some incendiary points but backs them up with evidence. Is Linux really an OS? I recommend that everyone who's interested in OS design watch this lecture. Cory Doctorow wrote an interesting set of 6 articles about Disneyland, ride pricing, and crowd control [3]. He proposes some interesting ideas for reforming Disneyland. Benjamin Bratton wrote an insightful article about how philosophy failed in the pandemic [4]. He focuses on the Italian philosopher Giorgio Agamben, who has a history of writing stupid articles that match QAnon talking points but with better language skills. Ars Technica has an interesting article about penetration testers extracting an encryption key from the bus used by the TPM on a laptop [5]. It's not a likely attack in the real world, as most networks can be broken more easily by other methods, but it's still interesting to learn about how the technology works. The Portalist has an article about David Brin's Startide Rising series of novels and his thoughts on the concept of Uplift (which he denies inventing) [6]. Jacobin has an insightful article titled "You're Not Lazy, But Your Boss Wants You to Think You Are" [7]. Making people identify as lazy is bad for them and bad for getting them to do work, but this is the first time I've seen it described as a facet of abusive capitalism. Jacobin has an insightful article about free public transport [8]. Apparently there are already many regions that have free public transport (Tallinn, the capital of Estonia, being one example). Fare-free public transport allows bus drivers to concentrate on driving instead of taking fares, removes the need for ticket inspectors, and generally provides a better service. It allows passengers to board buses and trams faster, thus reducing traffic congestion, and encourages more people to use public transport instead of driving, reducing road maintenance costs. Interesting research from Israel about bypassing facial ID [9]. Apparently they can make a set of 9 images that can pass for over 40% of the population. I didn't expect facial recognition to be an effective form of authentication, but I didn't expect it to be that bad. Edward Snowden wrote an insightful blog post about types of conspiracies [10]. Kevin Rudd wrote an informative article about Sky News in Australia [11]. We need to have a Royal Commission now, before we have our own 6th Jan event. Steve from Big Mess o' Wires wrote an informative blog post about USB-C and 4K 60Hz video [12]. Basically you can't have a single USB-C hub do 4K 60Hz video and be a USB 3.x hub unless you have compression software running on your PC (slow and only works on Windows), or have DisplayPort 1.4 or Thunderbolt (both not well supported). None of the options are well documented on online store pages, so lots of people will get unpleasant surprises when their deliveries arrive. Computers suck. Steinar H. Gunderson wrote an informative blog post about GaN technology for smaller power supplies [13]. A 65W USB-C PSU that fits the usual wall-wart form factor is an interesting development.

21 June 2021

Shirish Agarwal: Accessibility, Freenode and American imperialism.

Accessibility: This is perhaps one of the strangest and yet also the most straightforward ways to start the blog post. For the past weeks/months, I have had a strange experience. I have been using a Logitech wireless keyboard and mouse for almost a decade. Between us we have a single desktop computer, so my mother and I take turns on the desktop. At times the system sits idle, and after 30 minutes it goes into low-power/sleep mode. Then, when you want to come back, you obviously have to give your login credentials. At times, the keyboard refuses to input any data on the login screen. Interestingly, the mouse still functions, and much more interestingly, both the mouse and the keyboard use the same transceiver to send data. I had changed batteries to ensure it was not a power issue, but still no input :(. My mother uses the power switch (I taught her to hold it for a few moments and then let it go), but for myself I tried another thing. Using the mouse, I logged off the session, thinking that perhaps some race condition or something in the session was not letting keystrokes be inputted, and a new session might resolve it. But this was not to be. Luckily, on the screen you do have the option to reboot or power off. I did a reboot and, lo and behold, the system was able to input characters again. And this has happened time and again. I tried to find GOK and failed to remember that GOK had been retired. I looked up the accessibility page on the Debian wiki. Very interesting, very detailed, but sadly it did not and does not provide the backup I needed. I tried out florence but found that the app is buggy. Moreover, the instructions provided on the lightdm screen do not work: I do not get the on-screen keyboard when I follow them. Just to be clear, this is all on Debian testing, which is going to be Debian stable soonish. I even tried the same with xvkbd, but to no avail. I do use MATE as my desktop environment, so maybe the instructions need some refinement?

$ cat /etc/lightdm/lightdm-gtk-greeter.conf | grep keyboard
# a11y-states = states of accessibility features: name - save state on exit, -name -
# disabled at start (default value for unlisted), +name - enabled at start. Allowed names: contrast, font, keyboard, reader.
keyboard=xvkbd no-gnome focus &
# keyboard-position = x y[;width height] ( 50%,center -0;50% 25% by default) Works only for onboard
#keyboard=

Interestingly, Debian provides two more on-screen keyboards: matchbox, as well as onboard, which comes from Ubuntu. While I have both of them installed, I find xvkbd to be enough for my work; the only issue seems to be that I cannot get it from the accessibility drop-down box at the login screen. Just to make sure that I have not switched to the GNOME display manager, I ran

$ sudo dpkg-reconfigure gdm3

Only to find out that I am indeed running lightdm. So I am a bit confused as to why it doesn't come up as an option when I have the login window/login manager running. FWIW, I do run Metacity as the window manager, as it plays nice with almost all the various desktop environments I have. So this is where I'm stuck. If I do get any help, I will probably also add those instructions to the wiki page, so it would be convenient for the next person who comes along with the same issue. I also need to figure out some way to know whether there is some race condition or something happening; I have no clue how I would go about that without a whole lot of noise. I am sure there are others who may have more of an idea. FWIW, I did search unix.stackexchange as well as reddit/debian to see if I could find any meaningful posts, but came up empty.
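As an aside (my addition, not from the original post), on Debian the configured display manager can also be checked directly, which is quicker than dpkg-reconfigure:

$ cat /etc/X11/default-display-manager
/usr/sbin/lightdm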

Freenode: I had not been using IRC for quite some time. The reasons have been the multiple issues with Riot (now Element) taking up all the space on my desktop. I was alerted to the whole thing about a week after it went down; somebody messaged me via DM. I *think* I had put up a thread or a mini-thread about IRC in response to somebody praising Telegram/WhatsApp or one of those apps, and that probably triggered the DM. It took me a couple of minutes to catch up on this. I was angry and depressed, seeing the behavior of the new overlords of Freenode. I did see that a lot of channels moved over to Libera. It was also interesting to see that some communities were thinking of moving to some other obscure platform, which again could be held hostage to the same thing. One could argue one way or the other, but that would be tiresome, and the fact is any network needs a lot of help to be grown and nurtured, whether online or offline. I also saw that Libera was using Solanum, an IRCv3-compliant server. Having done this initial investigation, it was time to move to an IRC client. The Libera documentation is pretty helpful in telling you which IRC clients work well with their network. So I first tried Hexchat: I installed it and tried to add the Libera server credentials, but it didn't work. I did see that they had fixed the bug in sid/unstable and it is now in testing, but at the time the fix was only in sid, and I wanted something that just ran the damn thing. I chanced upon Quassel. I had played around with Quassel quite a number of times before, so I knew I could use it. Hence, I installed it and was able to use it on the first try. I did use the encrypted server and just had to tweak some settings, with some help from their documentation, before I could use it. Although, I have to say that even Quassel upstream needs to get its documentation in order: it is all over the place, and they haven't put any effort into streamlining it so that finding things becomes easier. But that can be said of many projects upstream. There is one thing, though, that all of these IRC clients lack: a password manager. Until that is fixed it will suck, because you need another secure place to put your password(s). You either put it on your desktop somewhere (insecure) or store it in the cloud somewhere (somewhat secure, but again you need to remember that password); whatever you do is extra work. I am sure there will be a day when authenticating with NickServ is an automated task and people can just get on talking on channels and figuring out how to be part of the various communities. As can be seen, even now there is a bit of a learning curve, both for newbies and for people who know a bit about systems, to get it working. Now, I know there are a lot of things that need to be fixed on the anonymity and security side, if I put on that sort of hat. E.g., wouldn't it be cool if either the IRC client or one of its add-ons gave out throwaway usernames with complex passwords? This would make things easier for those who are paranoid about security, and many are and would be. As an example, consider Fuchs: if this person is working in a professional capacity, and people were to come to know their real identity and perceive, rightly or wrongly, the role of that person, it would affect their career. Now, should it? I am sure a lot of people would be divided on the issue.
Personally, as far as I am concerned, I would say no, because whether right or wrong, whatever they were doing, they were doing on their own time, not on company time, so it doesn't concern the company at all. If we were to let companies police behavior outside work time, individuals would be in a lot of trouble. Although, I have to say that is a trend that has been seen in companies firing people on either the left or the right. A recent example that comes to mind is Emily Wilder, who was fired by the Associated Press. Interestingly, she was interviewed by Democracy Now, and it did come out that she is a Jew. As can be seen and understood, there is a lot of nuance to her story, and not just in the way she was fired. It doesn't leave a good taste in the mouth, but then getting fired never does. On a few forums, people shared stories of people getting fired from their jobs because they were dancing (cops). Again, it all depends; for me, hats off to anybody who feels like dancing or whatever, because there are just so many depressing stories all around.

Banned and FOE: On a few forums I was banned because I was talking about Brexit and American imperialism, both of which seem to ruffle a few feathers in quite a few places. For instance, many people, for obvious reasons, do not like this video

Now, I'm sorry I have not been able to give Invidious links for the past few months. The reason is that Invidious itself went through some changes, and the changes are both good and bad. For e.g., now you need to share your Google ID with a third party, which at least to my mind is not a good idea. But that is probably another story altogether and will need its own place. Coming back to the video itself, it was made by Anthony Hazard and the title is "The Atlantic slave trade: What too few textbooks told you". I saw this video quite a few years ago and still find it hard to swallow that tens of millions of Africans were brought as slaves to the Americas. To be fair, it does start with the Spanish settlement of the land which would become the U.S.; the settlers brought slaves with them, and they even enslaved the American natives, i.e. the people of the different tribes which made up America at that point. One point to note is that the U.S. got its independence on July 4, 1776, so all the people before that are called European settlers for want of a better word. Some or many of these European settlers were convicts sent from the UK. But, as shared in the article, that discussion will only happen when the U.S. itself is mature and open enough for it. Going back to the original point though, these European or American settlers bought a lot of slaves from Africa. The video also shows some of the cruelty the Europeans and Americans inflicted on the slaves, men and women, in different ways. The most revelatory part, which I also forget many a time, is that because so many people were taken from Africa, many of them men, it led to imbalances in African societies, not just in marriage but in economics in general. It also produced race theories that tried to paint the Africans as an inferior race, because how else would Christianity work when its own good book says all men are born equal? That does in part explain why the African countries are still so far behind their European or American counterparts. But Africa can still be proud, as they are richer than us, yup, India. Sadly, I don't think America is ready to have that conversation anytime soon, or maybe ever. And if it were to do so, it would have to outdo any truth and reconciliation commission the world has seen; a mere apology or two would not cut it. The problems of America sadly are not limited to just Africans but also the natives of the land, e.g. the Lakota people. In 1868, the U.S. signed a treaty saying the land would belong to the Lakota people forever, but then the gold rush happened. In 2007, when the Lakota stated their proposal for independence, the U.S. denied it through force. So much for the paper it was written on. Now, from what I have come to know over the years, the American natives are called First Nations. Time and time again the American Govt. has tried to be, or been, foul towards them. Some of the examples include the Yucca Mountain nuclear waste repository. The same is and was the case with the Keystone pipeline, which is now dead. Now one could say that this is America's internal matter, and I would fully agree, but when they speak of the internal matters of other countries, then we should have the same freedom. But this is not restricted to just internal matters, sadly. Since the 1950s, i.e. the advent of the Cold War, America's foreign policy has made regime changes all around the world. Sharing some of the examples from the Cold War:

Iran 1953
Guatemala 1954
Democratic Republic of the Congo 1960
Republic of Ghana 1966
Iraq 1968
Chile 1973
Argentina 1976
Afghanistan 1978-1980s
Grenada 1983
Nicaragua 1981-1990
1. Destabilization through CIA assets
2. Arming the Contras
El Salvador 1980-92
Philippines 1986
Even after the Cold War ended, the situation was analogous, meaning they continued with their old behavior. After the end of the Cold War

Guatemala 1993
Serbia 2000
Iraq 2003-
Afghanistan 2001 ongoing
There is a helpful Wikipedia article titled History of the CIA which lists most of the covert regime changes done by the U.S.; the above is merely a subset. Now, are all the behaviors above those of a civilized nation? And if one cares to notice, one would notice that all the above countries which had a regime change had either oil or precious metals. So the U.S. is and was what it accuses China of being: a profiteer. But this isn't just a U.S.-China story; it is more about the American abuse of its power. My own country, India, paid IMF loans till 1991, and we paid through the nose. There were economic sanctions against India. But then, this is again not just about the U.S. and India. Even with Europe, or more precisely Norway, which didn't want to side with America because their intelligence showed that no WMD were present in Iraq, the relationship still has issues.

Pandemic and the World So I do find this whole blaming of China by the U.S. quite theatrical and full of double and triple standards. Very early during the debates, it came to light that the Spanish Flu may actually have originated in Kansas, U.S.

What was also interesting, as I found in the Pentagon Papers (which came out well before the Watergate scandal), is that the U.S. had realized that China would be more of a competitor than Russia, and this was in the 1960s itself. This shows the level of intelligence that the Americans had. From what I can recollect of whatever I have read of that era, China was still mostly an agrarian economy. So how the U.S. was able to deduce that China would surpass other economies is beyond me even now. They surely must have known something that even we today do not.

One of the other interesting observations and understandings I got while researching is that every year we reportedly transfer an average of 7,500 diseases from animals to humans, and that should be a scary figure. I think more than anything else, loss of habitat and the use of animals for food, clothing and medicine is probably the reason we are getting such diseases. I am also sure there probably are and have been a similar number of disease transfers from humans to animals, but for well-known biases and whatnot those studies are either not done or under-funded. There are and have been reports of something like 850,000 undiscovered viruses which various mammals and birds carry. I also found that the origins of most such pandemics are hard to identify: for SARS-1 it took about 15 years, for Ebola we don't know to date where it came from, and even HIV still leaves questions for us. Hell, even why hearing goes away is a mystery to us. In all of this, we want to say China is culpable. And while China may or may not be culpable (only time will tell), this is surely the opportunity for all countries to spend on and build capacity in public health. Countries which take lessons from it and improve their public healthcare models will hopefully not suffer as those who are suffering now will continue to. To those who feel that habitat loss of animals is untrue, I would suggest they see Sherni, which depicts the human/animal conflict in all its brutality. I am going to warn in advance that the ending is not nice, but what can you expect from a country in which forest cover has constantly declined and the Govt. itself is only interested in headline management

The only positive story I can share from India is that the Modi Govt. has finally said it will provide free vaccination for everybody, although the pace is nothing to write home about. One additional thing they relaxed: instead of registering on CoWIN or any other portal, people can simply walk in using their identity papers. Still, given the pace of vaccinations, it is going to take anywhere between 13-18 months or more, depending on the availability of vaccines.

Looking forward to any and all replies. On another note, I would like to have a virtual keyboard, preferably xvkbd, as that is good enough for my use-case.

15 April 2021

Martin Michlmayr: ledger2beancount 2.6 released

I released version 2.6 of ledger2beancount, a ledger to beancount converter. Here are the changes in 2.6: Thanks to Alexander Baier, Daniele Nicolodi, and GitHub users bratekarate, faaafo and mefromthepast for various bug reports and other input. Thanks to Dennis Lee for adding a Dockerfile and to Vinod Kurup for fixing a bug. Thanks to Stefano Zacchiroli for testing. You can get ledger2beancount from GitHub.
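If you haven't used the tool before, conversion is a single command that reads a ledger file and writes beancount to standard output; something along these lines should work (the file names here are just illustrative):

    $ ledger2beancount example.ledger > example.beancount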

27 November 2020

Shirish Agarwal: Farmer Protests and RCEP

Farmer Protests While I was hoping to write about RCEP exclusively, just today farmer protests have erupted against three farm laws which were passed by our Govt. about a month ago without consulting anybody. The bills benefit only big business houses at the cost of farmers. This has been amply demonstrated by an open letter to one of the biggest business houses, which will benefit the most. Now while that is the national experience, let me share some experience from the State I come from, Maharashtra. About 4-5 years back, Maharashtra delisted fruit and vegetables from the APMC market. But to date the APMC market keeps working; the reasons are many. What the change did, however, was force a shift to sugarcane, a water-guzzling crop, much more than before. This lowered the water table in Maharashtra, pushed farmers deeper into the debt trap, and later many of them committed suicide.

Now let us see why the Punjab farmers are so agitated that they are walking all the way to Delhi; they are right now somewhere on the Haryana-Delhi border. The reason is that even their experiments with contract farming have not been good. This is why they are struggling to reach Delhi, to make their collective voices heard and get the farm bills rolled back. Farmers in Gujarat were even sued, although the suits were withdrawn because of elections; the intentions, though, are clear. The same has happened in Uttar Pradesh, again over sugarcane, and that too by the Bajaj Company. At the end of the day, the laws made by the Govt. leave our farmers at the mercy of big corporations. It is preposterous to believe that farmers, with their small land holdings, will be able to stand up to the corporations. Add to that, they cannot go to court: it is the SDM (Sub-Divisional Magistrate) who decides on such matters and has the last word. If this is allowed, in a couple of years there will be only a few farmers or corporations with large land-holdings, and they would be easily co-opted by the Government in power. Just in: a gentleman who turned off a water cannon being fired at farmers has been charged with murder. Currently, the Government procures rice in vast quantities in the states of Punjab and Haryana, so the farmers there are assured at least some basic income
Procurement of Rice by Various States
Recently there was also an article in the Indian Express which shares the farmers' apprehensions and acknowledges that it's a complex problem with no easy solutions; the solution can only be dialogue between the two parties. This was also shared by Vivek Kaul, who is far more knowledgeable than me on the subject and has written a long read on it.

The Canada Way Recently, while sparring on the Internet, I came to know of the Canada way. There, the Government incorporates the farmer as a corporation and then supports them. The Canada way seems to largely work because the Canadian Government owns the majority of the lands in question. And yes, Indians have benefited from it, but that is also due to (a) the currency differential between the Canadian dollar and the Indian rupee and (b) the 99-year land lease. There may be other advantages that the Canadian Government bestows, and that is possibly the reason most Punjabi farmers go to Canada and the UK to farm. While looking into it, I also came across the situation in the United States, where things seem to be becoming even grimmer.

RCEP RCEP stands for Regional Comprehensive Economic Partnership. We were supposed to be part of this partnership. Why didn't we join? For two reasons. First, our judicial infrastructure is the worst: it took 8 years to decide a retrospective tax case (Vodafone), and that too finally outside India, and even that decision is by no means the end. Second, all those who have joined RCEP have lower duties and tariffs than India, which means they are much more competitive than India. While there is fear that China may take over their assets, as it has done with a few countries around the world, the opportunity for those countries was too good to pass up even with the dangers. But then, even India has taken loans from the Asian Infrastructure Investment Bank (AIIB), where China is the biggest shareholder, so it doesn't make sense to be insecure on that front. And again, it is up to India or any other sovereign country to decide whether to take loans from some country, some multilateral organization, or any other source, and on what terms.

What China has done and is doing is similar to what the IMF (used primarily by the United States) did in the past. The only difference is that then it was the United States, now it is China. America co-opted governments and acquired assets; China is doing the same. The tactics are more or less identical. There has also been a somewhat interesting paper which discusses how the RCEP may unfold in different circumstances. In short, it finds that the partners will benefit, some more than others. It also compares the RCEP to the CPTPP (the Comprehensive and Progressive Agreement for Trans-Pacific Partnership). The study is a bit academic in nature, as the United States has walked out and president-elect Joe Biden hasn't made any moves, and is unlikely to, given the deep domestic divide and resentment about multilateral trade partnerships within the United States. This news and understanding was quite shocking to me, as it shows that the United States, which in the past was supposed to be a beacon of capitalism and seemed to enjoy it, now looks like a mere opportunist. There is also the truth that there are only so many things on which Biden can, and would need to, spend his political capital.
Statista chart of differences between Republicans and Democrats
As can be seen, the economy, at least for the Democrats, is pretty far down the list this time around. Biden has a host of battles and will have to choose which to fight and which to ignore. In the end, we are left to our own devices. At the moment, India does not know when its economy will recover
PTI News, Nov 27, 2020
There has been another worrying bit of news: now all newspapers will need some sort of permission or certification from the Govt. of India for any news of the world. This harks back to the 1970s and 1980s era

20 November 2020

Shirish Agarwal: Rights, Press freedom and India

In some ways it is sad and interesting to see how personal liberty is viewed in India, and how those having the greatest fame and power can get a kind of justice that the rest cannot.

Arnab Goswami This particular gentleman is a class apart. He is the editor-in-chief as well as part-owner of Republic TV, a right-leaning channel which demonizes minorities, women, and whatever else is antithetical to the Central Govt. of India. As a result, there has been a spate of cases against him in the past few months. But surprisingly, in each of them he got a hearing the day after the suit was filed. This is unique in Indian legal history, so much so that a popular legal site which publishes ongoing cases put up a post sharing how he was getting prompt hearings. That post itself needs to be updated, as there have since been 3 more hearings done back to back for him. This is unusual, as there are so many cases pending for the SC's attention, some arguably more important than this gentleman's. So many precedents have been set which will send a wrong message. The biggest one: even though a trial is taking place in the sessions court (below the High Court), the SC can interject on matters. What this will do to the morale of both lawyers and judges of the various sessions courts is a matter of speculation, and yet, as shared, it is unprecedented. The saddest part was when Justice Chandrachud said
Justice Chandrachud: "If you don't like a channel then don't watch it." (11th November 2020)
This is basically giving free rein to hate speech. How can the SC say that? And this is the same Supreme Court which could not take two tweets from Shri Prashant Bhushan when he made remarks against the judiciary.

J&K pleas in Supreme Court pending since August 2019 (Abrogation of 370) After the abrogation of Article 370, the citizens of Jammu and Kashmir, a population of 13.6 million people including 4 million Hindus, have been stuck with reduced rights and their land being taken away due to new laws. Many of the Hindus, who regionally are a minority, now rue the fact that they supported the abrogation. Imagine: a whole state whose prayers have not been heard by the Supreme Court, and the people need to move a prayer stating the same.

100 journalists and activists languishing in jail without even a hearing 55 journalists alone have been threatened, booked, and jailed for reporting on the pandemic. Their fault? They were bringing to light the irregularities and corruption of the early pandemic months. Activists such as Sudha Bharadwaj, who gave up her American citizenship and settled here to fight for tribals, has been in jail for 2 years without trial. There are many like her. There are several more petitions lying with the Supreme Court, e.g. Varavara Rao's: not a single hearing in the last couple of years, even though he has taken part in so many national movements, including during the Emergency, and was part-responsible for the creation of Telangana state out of Andhra Pradesh.

Then there is Devangana Kalita, who works for gender rights. Similar to Sudha Bharadwaj, she had an opportunity to go to the UK and settle there. She did her master's and came back. And now she is in jail for the very things she studied. While she took part in anti-CAA sit-ins, none of her speeches were incendiary, but she is still locked up under the UAPA (Unlawful Activities (Prevention) Act). I could go on and on, but for the moment these should suffice.

Petitions over the hate speech which resulted in riots in Delhi are pending; the (controversial) Citizenship Amendment Act has had no hearings to date. All of this is best explained in a newspaper article which articulates perhaps all that I wanted to articulate, and more. It is and was amazing to see how in certain cases Article 32 is valid and in many it is not. Also, a fair reading of Justice Bobde's article tells you a lot about how the SC is functioning. I would like to point out that barandbench, along with livelawindia, makes it easier for non-lawyers and the public to know how arguments are made in court and what evidence is taken, as well as giving some clue about judicial orders and judgements. Both of these resources provide an invaluable service and, more often than not, free of charge.

Student Suicide and High Cost of Education
For quite some time now, the cost of education has been shooting up. While I have visited this topic earlier as well, recently a young girl committed suicide because she was unable to pay the fees as well as the additional costs due to the pandemic. Further investigations show that this is the case with many students who are unable to buy laptops. One could think it is limited to one college, but that would be wrong; it is the case almost across all of India, and it will continue for months and years. People do know that the pandemic is going to last a significant time, and it will be a long time before the R value becomes zero. Even the promising vaccine from Pfizer needs constant refrigeration, which is next to impossible in India. It is going to make things very costly.

Last Nail on Indian Media Just today, the last nail has been driven into Indian media. Thankfully, Freedom Gazette India did a much better job of explaining it, so I am just pasting that
Information and Broadcasting Ministry bringing OTT services as well as news within its ambit.
With this, series like Scam 1992: The Harshad Mehta Story, Bad Boy Billionaires: India, Test Case, Delhi Crime, Laakhon Mein Ek, and other such investigative journalism would be still-births. Many of these web series also told tales of women's empowerment, while at the same time showing some of the hard choices that women have to contend with. Even Western media may be censored where the political discourse is not to the Government's liking. There have been so many accounts from Mr. Ravish Kumar, the winner of the Ramon Magsaysay Award, of how during his shows the electricity was cut in many places. I too have been a victim: when the BJP governed in Maharashtra, almost all Puneites experienced it. The power would go out for 30 or 45 minutes at exactly that time.

There is another aspect to it. The U.S. elections showed how independent media was able to counter Mr. Trump's various falsehoods and give rise to alternative ideas, which lifted the team of Bernie Sanders, Joe Biden and Kamala Harris, Biden now being the President-elect and Kamala Harris the Vice-President-elect. Although the journey to the White House seems as tough as before, let's see what happens. Hopefully 2021 will bring in some good news.

Update On 27th November 2020, Martin, who runs the planet, got an e-mail/notice from a Mr. Nikhil Sethi, who runs the wikibio.com property. Mr. Sethi asked me to remove the link on Devangana Kalita pointing from my blog post to his site, citing his use of the nofollow link. On inquiring further, the gentleman stated that it is an "Updated mandate" (his exact quote) from the Google algorithm. To understand the issue further, I went to SERP, as they are one of the better-known sources on the subject, and I also looked it up on Google. I found that the gentleman was BSing the whole time. The page basically talks about the weightage and authoritativeness of a page/site, which are known and yet highly contested ideas. In any case, the point for me was that, for whatever reason (could be fear, could be something else entirely), Mr. Sethi did not want me to link to the content; hence, I have complied above. I could have dragged it out, but I do not wish Mr. Sethi any ill will or further harm unduly and unintentionally caused by me, and so I have taken down the link.

31 October 2020

Chris Lamb: Free software activities in October 2020

Here is my monthly update covering what I have been doing in the free software world during October 2020 (previous month): For Lintian, the static analysis tool for Debian packages, I uploaded versions 2.97.0, 2.98.0, 2.99.0 & 2.100.0, updated the declares-possibly-conflicting-debhelper-compat-versions tag as we may specify the Debhelper compatibility level in debian/rules or debian/control (#972464), and dropped a reference to a missing manual page [...].

Reproducible Builds One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes. The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter. This month, I: I also updated the main Reproducible Builds website and documentation:
Lastly, I made the following changes to diffoscope, including preparing and uploading version 161 to Debian: trydiffoscope is the web-based version of diffoscope. This month, I made the following changes:

Debian Debian LTS This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project. You can find out more about the project via the following video:

Uploads Bugs filed

27 September 2020

Iain R. Learmonth: Multicast IPTV

For almost a decade, I've been very slowly making progress on a multicast IPTV system. Recently I've made a significant leap forward in this project, and I wanted to write a little on the topic so I'll have something to look at when I pick this up next. I was aspiring to have a usable system by the end of today, but for a couple of reasons, it wasn't possible. When I started thinking about this project, it was still common to watch broadcast television. Over time the design of this system has been changing as new technologies have become available. Multicast IP is probably the only constant, although I'm now looking at IPv6 rather than IPv4. Initially, I'd been looking at DVB-T PCI cards, but USB devices have since become common and are available cheaply. There are also DVB-T hats available for the Raspberry Pi. I'm now looking at a combination of Raspberry Pi hats and USB devices, with one of each on a couple of Pis.
Two Raspberry Pis with DVB hats installed, TV antenna sockets showing
The Raspberry Pi devices will run DVBlast, an open-source DVB demultiplexer and streaming server. Each of the tuners will be tuned to a different transponder, giving me the ability to stream any combination of available channels simultaneously. This is everything that would be needed to watch TV on PCs on the home network with VLC. I've not yet worked out if Kodi will accept multicast streams as a TV source, but I do know that Tvheadend will. Tvheadend can also act as a PVR to record programmes for later playback, so it is useful even if the multicast streams can be viewed directly. So how far did I get? I have built two Raspberry Pis in cases with the DVB-T hats on. They need to sit in the lounge, as that's where the antenna comes down from the roof. There's no wired network connection in the lounge. I planned to use an OpenBSD box as a gateway, bridging the wireless network to a wired network. Two problems quickly emerged. The first was that the wireless card I had purchased only supported 2.4GHz, not 5GHz, and I have enough noise from neighbours that the throughput rate and packet loss are unacceptable. The second problem was that I had forgotten the problems with bridging wireless networks. To create a bridge, you need to be able to spoof the MAC addresses of wired devices on the wireless interface, but this can only be done when the wireless interface is in access point mode. So when I come back to this, I will have to look at routing rather than bridging to work around the MAC address issue, and I'll also be on the lookout for a cheap OpenBSD-supported mini-PCIe wireless card that can do 5GHz.
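As a note to future me, the DVBlast side of this should be pleasantly small. Something along the lines of the sketch below ought to do it, though the frequency, multicast groups, and service IDs here are purely illustrative and would need to match the local multiplexes:

    # dvblast.conf: one service per line, in the form
    # <multicast group>:<port> <always-on flag> <service ID>
    239.255.0.1:5004 1 4164
    239.255.0.2:5004 1 4228

    # Tune the adapter to one transponder and start streaming
    $ dvblast -f 490000000 -c dvblast.conf

A PC on the network should then be able to view the first service with something like vlc udp://@239.255.0.1:5004.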

2 August 2020

Enrico Zini: Toxic positivity links

That which we do not bring to consciousness appears in our lives as fate. Carl Jung
Emotional support of others can take the form of surface-level consolation. But compassion means being willing to listen and feel, even when it's uncomfortable.
Ultimately, the driving force behind the power of positive thinking meme is the word power. But what about those whose bodies are not powerful? What about those who are vulnerable? What about those who are tired, isolated, and struggling? What about those who are ill? What about those who lack
I have often been dismissive or unhelpful when someone close to me was dealing with painful circumstances, having learned to accentuate the positive. In the more recent past, I have recognized these behavioral patterns as part of what some mental health professionals term, toxic positivity.
Toxic positivity is the overgeneralization of a happy, optimistic state resulting in the denial & invalidation of the authentic human emotional experience.

30 August 2017

Daniel Silverstone: STM32 USB and Rust - Packet Memory Area

In this, our next exciting installment of STM32 and Rust for USB device drivers, we're going to look at what the STM32 calls the 'packet memory area'. If you've been reading along with the course, including reading up on the datasheet content, then you'll be aware that as well as the STM32's normal SRAM, there's a 512 byte SRAM dedicated to the USB peripheral. This SRAM is called the 'packet memory area' and is shared between the main bus and the USB peripheral core. Its purpose is, simply, to store packets in transit: both those IN to the host (stored, queued for transmission) and those OUT from the host (stored, queued for the application to extract and consume). It's time to actually put hand to keyboard on some Rust code, and the PMA is the perfect starting point, since it involves two basic structures. Packets are the obvious first structure, and they are contiguous sets of bytes which, for the purpose of our work, we shall assume are one to sixty-four bytes long. The second is what the STM32 datasheet refers to as the BTABLE, or Buffer Descriptor Table. Let's consider the BTABLE first.

The Buffer Descriptor Table The BTABLE is arranged in quads of 16bit words. For "normal" endpoints this is a pair of descriptors, each consisting of two words, one for transmission, and one for reception. The STM32 also has a concept of double buffered endpoints, but we're not going to consider those in our proof-of-concept work. The STM32 allows for up to eight endpoints (EP0 through EP7) in internal register naming, though they support endpoints numbered from zero to fifteen in the sense of the endpoint address numbering. As such there're eight descriptors each four 16bit words long (eight bytes) making for a buffer descriptor table which is 64 bytes in size at most.
Buffer Descriptor Table
Byte offset in PMA Field name Description
(EPn * 8) + 0 USB_ADDRn_TX The address (inside the PMA) of the TX buffer for EPn
(EPn * 8) + 2 USB_COUNTn_TX The number of bytes present in the TX buffer for EPn
(EPn * 8) + 4 USB_ADDRn_RX The address (inside the PMA) of the RX buffer for EPn
(EPn * 8) + 6 USB_COUNTn_RX The number of bytes of space available for the RX buffer for EPn (and once received, the number of bytes received)
The TX entries are trivial to comprehend. To transmit a packet, part of the process involves writing the packet into the PMA, putting the address into the appropriate USB_ADDRn_TX entry, and the length into the corresponding USB_COUNTn_TX entry, before marking the endpoint as ready to transmit. To receive a packet though is slightly more complex. The application must allocate some space in the PMA, setting the address into the USB_ADDRn_RX entry of the BTABLE before filling out the top half of the USB_COUNTn_RX entry. For ease of bit sizing, the STM32 only supports space allocations of two to sixty-two bytes in steps of two bytes at a time, or thirty-two to five-hundred-twelve bytes in steps of thirty-two bytes at a time. Once the packet is received, the USB peripheral will fill out the lower bits of the USB_COUNTn_RX entry with the actual number of bytes filled out in the buffer.

Packets themselves Since packets are, typically, a maximum of 64 bytes long (for USB 2.0) and are simply sequences of bytes with no useful structure to them (as far as the USB peripheral itself is concerned) the PMA simply requires that they be present and contiguous in PMA memory space. Addresses of packets are relative to the base of the PMA and are byte-addressed, however they cannot start on an odd byte, so essentially they are 16bit addressed. Since the BTABLE can be anywhere within the PMA, as can the packets, the application will have to do some memory management (either statically, or dynamically) to manage the packets in the PMA.

Accessing the PMA The PMA is accessed in 16bit word sections. It's not possible to access single bytes of the PMA, nor is it conveniently structured as far as the CPU is concerned. Instead the PMA's 16bit words are spread on 32bit word boundaries as far as the CPU knows. This is done for convenience and simplicity of hardware, but it means that we need to ensure our library code knows how to deal with this. First up, to convert an address in the PMA into something which the CPU can use we need to know where in the CPU's address space the PMA is. Fortunately this is fixed at 0x4000_6000. Secondly we need to know what address in the PMA we wish to access, so we can determine which 16bit word that is, and thus what the address is as far as the CPU is concerned. If we assume we only ever want to access 16bit entries, we can just multiply the PMA offset by two before adding it to the PMA base address. So, to access the 16bit word at byte-offset 8 in the PMA, we'd look for the 16bit word at 0x4000_6000 + (0x08 * 2) => 0x4000_6010.

Bundling the PMA into something we can use I said we'd do some Rust, and so we shall
    // Thanks to the work by Jorge Aparicio, we have a convenient wrapper
    // for peripherals which means we can declare a PMA peripheral:
    pub const PMA: Peripheral<PMA> = unsafe { Peripheral::new(0x4000_6000) };

    // The PMA struct type which the peripheral will return a ref to
    pub struct PMA {
        pma_area: PMA_Area,
    }

    // And the way we turn that ref into something we can put a useful impl on
    impl Deref for PMA {
        type Target = PMA_Area;
        fn deref(&self) -> &PMA_Area {
            &self.pma_area
        }
    }

    // This is the actual representation of the peripheral, we use the C repr
    // in order to ensure it ends up packed nicely together
    #[repr(C)]
    pub struct PMA_Area {
        // The PMA consists of 256 u16 words separated by u16 gaps, so lets
        // represent that as 512 u16 words which we'll only use every other of.
        words: [VolatileCell<u16>; 512],
    }
That block of code gives us three important things. Firstly a peripheral object which we will be able to (later) manage nicely as part of the set of peripherals which RTFM will look after for us. Secondly we get a convenient packed array of u16s which will be considered volatile (the compiler won't optimise around the ordering of writes etc). Finally we get a struct on which we can hang an implementation to give our PMA more complex functionality. A useful first pair of functions would be to simply let us get and put u16s in and out of that word array, since we're only using every other word
    impl PMA_Area {
        pub fn get_u16(&self, offset: usize) -> u16 {
            assert!((offset & 0x01) == 0);
            self.words[offset].get()
        }

        pub fn set_u16(&self, offset: usize, val: u16) {
            assert!((offset & 0x01) == 0);
            self.words[offset].set(val);
        }
    }
These two functions take an offset in the PMA and return the u16 word at that offset. They only work on u16 boundaries and as such they assert that the bottom bit of the offset is unset. In a release build, that will go away, but during debugging this might be essential. Since we're only using 16bit boundaries, this means that the first word in the PMA will be at offset zero, and the second at offset two, then four, then six, etc. Since we allocated our words array to expect to use every other entry, this automatically converts into the addresses we desire. If we pop (and please don't worry about the unsafe stuff for now):
    unsafe { (&*usb::pma::PMA.get()).set_u16(4, 64); }
into our main function somewhere, and then build and objdump our test binary we can see the following set of instructions added:
 80001e4:   f246 0008   movw    r0, #24584  ; 0x6008
 80001e8:   2140        movs    r1, #64 ; 0x40
 80001ea:   f2c4 0000   movt    r0, #16384  ; 0x4000
 80001ee:   8001        strh    r1, [r0, #0]
This boils down to a u16 write of 0x0040 (64) to the address 0x4000_6008, which is the third 32 bit word in the CPU's view of the PMA memory space (where offset 4 is the third 16bit word), which is exactly what we'd expect to see. We can, from here, build up some functions for manipulating a BTABLE, though the most useful ones for us to take a look at are the RX counter functions:
    pub fn get_rxcount(&self, ep: usize) -> u16 {
        self.get_u16(BTABLE + (ep * 8) + 6) & 0x3ff
    }

    pub fn set_rxcount(&self, ep: usize, val: u16) {
        assert!(val <= 1024);
        let rval: u16 = {
            if val > 62 {
                assert!((val & 0x1f) == 0);
                (((val >> 5) - 1) << 10) | 0x8000
            } else {
                assert!((val & 1) == 0);
                (val >> 1) << 10
            }
        };
        self.set_u16(BTABLE + (ep * 8) + 6, rval)
    }
The getter is fairly clean and clear, we need the BTABLE base in the PMA, add the address of the USB_COUNTn_RX entry to that, retrieve the u16 and then mask off the bottom ten bits since that's the size of the relevant field. The setter is a little more complex, since it has to deal with the two possible cases, this isn't pretty and we might be able to write some better peripheral structs in the future, but for now, if the length we're setting is 62 or less, and is divisible by two, then we put a zero in the top bit, and the number of 2-byte lumps in at bits 14:10, and if it's 64 or more, we mask off the bottom to check it's divisible by 32, and then put the count (minus one) of those blocks in, instead, and set the top bit to mark it as such. Fortunately, when we set constants, Rust's compiler manages to optimise all this very quickly. For a BTABLE at the bottom of the PMA, and an initialisation statement of:
    unsafe { (&*usb::pma::PMA.get()).set_rxcount(1, 64); }
then we end up with the simple instruction sequence:
80001e4:    f246 001c   movw    r0, #24604  ; 0x601c
80001e8:    f44f 4104   mov.w   r1, #33792  ; 0x8400
80001ec:    f2c4 0000   movt    r0, #16384  ; 0x4000
80001f0:    8001        strh    r1, [r0, #0]
We can decompose that into a C-like *((u16*)0x4000601c) = 0x8400, and from there we can see that it's writing to the u16 at 0x1c bytes into the CPU's view of the PMA, which is 14 bytes into the PMA itself. Since we know we set the BTABLE at the start of the PMA, it's 14 bytes into the BTABLE, which is firmly in the EP1 entries. Specifically it's USB_COUNT1_RX, which is what we were hoping for. To confirm this, check out page 651 of the datasheet. The value set was 0x8400, which we can decompose into 0x8000 and 0x0400. The first is the top bit and tells us that BL_SIZE is one, and thus the blocks are 32 bytes long. Next, the 0x0400: if we shift it right ten places, we get the value 1 for the field NUM_BLOCK, and since with BL_SIZE set the allocation is (NUM_BLOCK + 1) * 32 bytes, that gives the 64 bytes we asked it to set as the size of the RX buffer. It has done exactly what we hoped it would, but the compiler managed to optimise it into a single 16 bit store of a constant value to a constant location. Nice and efficient. Finally, let's look at what happens if we want to write a packet into the PMA. For now, let's assume packets come as slices of u16s because that'll make our life a little simpler:
    pub fn write_buffer(&self, base: usize, buf: &[u16]) {
        for (ofs, v) in buf.iter().enumerate() {
            self.set_u16(base + (ofs * 2), *v);
        }
    }
Yes, even though we're deep in no_std territory, we can still get an iterator over the slice, and enumerate it, getting a nice iterator of (index, value) though in this case, the value is a ref to the content of the slice, so we end up with *v to deref it. I am sure I could get that automatically happening but for now it's there. Amazingly, despite using iterators, enumerators, high level for loops, function calls, etc, if we pop:
    unsafe { (&*usb::pma::PMA.get()).write_buffer(0, &[0x1000, 0x2000, 0x3000]); }
into our main function and compile it, we end up with the instruction sequence:
80001e4:    f246 0000   movw    r0, #24576  ; 0x6000
80001e8:    f44f 5180   mov.w   r1, #4096   ; 0x1000
80001ec:    f2c4 0000   movt    r0, #16384  ; 0x4000
80001f0:    8001        strh    r1, [r0, #0]
80001f2:    f44f 5100   mov.w   r1, #8192   ; 0x2000
80001f6:    8081        strh    r1, [r0, #4]
80001f8:    f44f 5140   mov.w   r1, #12288  ; 0x3000
80001fc:    8101        strh    r1, [r0, #8]
which, as you can see, ends up being three sequential halfword stores directly to the right locations in the CPU's view of the PMA. You have to love seriously aggressive compile-time optimisation :-) Hopefully, by next time, we'll have layered some more pleasant routines on our PMA code, and begun a foray into the setup necessary before we can begin handling interrupts and start turning up on a USB port.

22 February 2017

Antoine Beaupré: The case against password hashers

In previous articles, we have looked at how to generate passwords and did a review of various password managers. There is, however, a third way of managing passwords other than remembering them or encrypting them in a "vault", which is what I call "password hashing". A password hasher generates site-specific passwords from a single master password using a cryptographic hash function. It thus allows a user to have a unique and secure password for every site they use while requiring no storage; they need only to remember a single password. You may know these as "deterministic or stateless password managers" but I find the "password manager" phrase to be confusing because a hasher doesn't actually store any passwords. I do not think password hashers represent a good security tradeoff so I generally do not recommend their use, unless you really do not have access to reliable storage that you can access readily. In this article, I use the word "password" for a random string used to unlock things, but "token" to represent a generated random string that the user doesn't need to remember. The input to a password hasher is a password with some site-specific context and the output from a password hasher is a token.

What is a password hasher? A password hasher uses the master password and a label (generally the host name) to generate the site-specific password. To change the generated password, the user can modify the label, for example by appending a number. Some password hashers also have different settings to generate tokens of different lengths or compositions (symbols or not, etc.) to accommodate different site-specific password policies. The whole concept of password hashers relies on the concept of one-way cryptographic hash functions or key derivation functions that take an arbitrary input string (say a password) and generate a unique token, from which it is impossible to guess the original input string. Password hashers are generally written as JavaScript bookmarklets or browser plugins and have been around for over a decade. The biggest advantage of password hashers is that you only need to remember a single password. You do not need to carry around a password manager vault: there's no "state" (other than site-specific settings, which can be easily guessed). A password hasher named Master Password makes a compelling case against traditional password managers in its documentation:
It's as though the implicit assumptions are that everybody backs all of their stuff up to at least two different devices and backups in the cloud in at least two separate countries. Well, people don't always have perfect backups. In fact, they usually don't have any.
It goes on to argue that, when you lose your password: "You lose everything. You lose your own identity." The stateless nature of password hashers also means you do not need to use cloud services to synchronize your passwords, as there is (generally, more on that later) no state to carry around. This means, for example, that the list of accounts that you have access to is only stored in your head, and not in some online database that could be hacked without your knowledge. The downside of this is, of course, that attackers do not actually need to have access to your password hasher to start cracking it: they can try to guess your master key without ever stealing anything from you other than a single token you used to log into some random web site. Password hashers also necessarily generate unique passwords for every site you use them on. While you can also do this with password managers, it is not an enforced decision. With hashers, you get distinct and strong passwords for every site with no effort.
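To make the mechanism concrete, here is a toy sketch of the idea in Rust. It is purely illustrative and assumes the sha2 crate; the site_token function and its token format are my own invention, and real hashers such as LessPass and Master Password use a proper key derivation function rather than a bare hash:

    use sha2::{Digest, Sha256};

    // Toy password hasher: derive a site-specific token from a master
    // password and a label (generally the host name). Changing the label,
    // e.g. by appending a number, yields an unrelated token.
    // Illustrative only: real hashers use PBKDF2 or scrypt, not a bare hash.
    fn site_token(master: &str, label: &str) -> String {
        let mut hasher = Sha256::new();
        hasher.update(master.as_bytes());
        hasher.update(b"/");
        hasher.update(label.as_bytes());
        // Hex-encode the digest and keep the first 16 characters as the token.
        hasher.finalize()
            .iter()
            .map(|b| format!("{:02x}", b))
            .collect::<String>()[..16]
            .to_string()
    }

    fn main() {
        // Same master password, different labels: completely different tokens.
        println!("{}", site_token("correct horse", "lwn.net"));
        println!("{}", site_token("correct horse", "lwn.net2"));
    }

Note that every token is a pure function of the master password and the label: that is what makes the scheme stateless, and, as we will see below, it is also the root of its biggest weakness.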

The problem with password hashers If hashers are so great, why would you use a password manager? Programs like LessPass and Master Password seem to have strong crypto that is well implemented, so why isn't everyone using those tools? Password hashing, as a general concept, actually has serious problems: since the hashing outputs are constantly compromised (they are sent in password forms to various possibly hostile sites), it's theoretically possible to derive the master password and then break all the generated tokens in one shot. The use of stronger key derivation functions (like PBKDF2, scrypt, or HMAC) or seeds (like a profile-specific secret) makes those attacks much harder, especially if the seed is long enough to make brute-force attacks infeasible. (Unfortunately, in the case of Password Hasher Plus, the seed is derived from Math.random() calls, which are not considered cryptographically secure.) Basically, as stated by Julian Morrison in this discussion:
A password is now ciphertext, not a block of line noise. Every time you transmit it, you are giving away potential clues of use to an attacker. [...] You only have one password for all the sites, really, underneath, and it's your secret key. If it's broken, it's now a skeleton-key [...]
Newer implementations like LessPass and Master Password fix this by using reasonable key derivation algorithms (PBKDF2 and scrypt, respectively) that are more resistant to offline cracking attacks, but who knows how long those will hold? To give a concrete example, if you would like to use the new winner of the password hashing competition (Argon2) in your password manager, you can patch the program (or wait for an update) and re-encrypt your database. With a password hasher, it's not so easy: changing the algorithm means logging in to every site you visited and changing the password. As someone who used a password hasher for a few years, I can tell you this is really impractical: you quickly end up with hundreds of passwords. The LessPass developers tried to facilitate this, but they ended up mostly giving up. Which brings us to the question of state. A lot of those tools claim to work "without a server" or as being "stateless" and while those claims are partly true, hashers are way more usable (and more secure, with profile secrets) when they do keep some sort of state. For example, Password Hasher Plus records, in your browser profile, which site you visited and which settings were used on each site, which makes it easier to comply with weird password policies. But then that state needs to be backed up and synchronized across multiple devices, which led LessPass to offer a service (which you can also self-host) to keep those settings online. At this point, a key benefit of the password hasher approach (not keeping state) just disappears and you might as well use a password manager. Another issue with password hashers is choosing the right one from the start, because changing software generally means changing the algorithm, and therefore changing passwords everywhere. If there were a well-established program recognized as a solid cryptographic solution by the community, I would feel more confident. But what I have seen is that there are a lot of different implementations, each with its own warts and flaws; because changing is so painful, I can't actually use any of those alternatives. All of the password hashers I have reviewed have severe security versus usability tradeoffs. For example, LessPass has what seems to be a sound cryptographic implementation, but using it requires you to click on the icon, fill in the fields, click generate, and then copy the password into the field, which means at least four or five actions per password. The venerable Password Hasher is much easier to use, but it makes you type the master password directly in the site's password form, so hostile sites can simply use JavaScript to sniff the master password while it is typed. While there are workarounds implemented in Password Hasher Plus (the profile-specific secret), both tools are more or less abandoned now. The Password Hasher homepage, linked from the extension page, is now a 404. Password Hasher Plus hasn't seen a release in over a year and there is no space for collaborating on the software; the homepage is simply the author's Google+ page with no information on the project. I couldn't actually find the source online and had to download the Chrome extension by hand to review the source code. Software abandonment is a serious issue for every project out there, but I would argue that it is especially severe for password hashers. Furthermore, I have had difficulty using password hashers in unified login environments like Wikipedia's or StackExchange's single-sign-on systems.
Because they allow you to log in with the same password on multiple sites, you need to choose (and remember) what label you used when signing in. Did I sign in on stackoverflow.com? Or was it stackexchange.com? Also, as mentioned in the previous article about password managers, web-based password managers have serious security flaws. Since more than a few password hashers are implemented using bookmarklets, they bring all of those serious vulnerabilities with them, which can range from account name to master password disclosures. Finally, some of the password hashers use dubious crypto primitives that were valid and interesting a decade ago, but are really showing their age now. Stanford's pwdhash uses MD5, which is considered "cryptographically broken and unsuitable for further use". We have seen partial key recovery attacks against MD5 already and while those do not allow an attacker to recover the full master password yet (especially not with HMAC-MD5), I would not recommend anyone use MD5 in anything at this point, especially if changing that algorithm later is hard. Some hashers (like Password Hasher and Password Plus) use a single round of SHA-1 to derive a token from a password; WPA2 (standardized in 2004) uses 4096 iterations of HMAC-SHA1. A recent US National Institute of Standards and Technology (NIST) report also recommends "at least 10,000 iterations of the hash function".
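To see what those iteration counts buy, here is a hypothetical key-stretching sketch, again assuming the Rust sha2 crate. This is not PBKDF2, just the general shape of iterated hashing: every additional round multiplies the cost of each guess an offline attacker has to make, which is why a single round of SHA-1 compares so poorly with 4096 or 10,000 iterations:

    use sha2::{Digest, Sha256};

    // Illustrative key stretching: hash repeatedly to raise the per-guess
    // cost for an offline attacker. One round lets an attacker test
    // billions of candidate passwords per second; thousands of rounds
    // divide that rate accordingly. Real tools should use a vetted KDF
    // such as PBKDF2, scrypt, or Argon2 instead of this sketch.
    fn stretch(password: &[u8], salt: &[u8], rounds: u32) -> [u8; 32] {
        let mut state: [u8; 32] = Sha256::digest([salt, password].concat()).into();
        for _ in 0..rounds {
            state = Sha256::digest(state).into();
        }
        state
    }

    fn main() {
        let token = stretch(b"correct horse", b"example.com", 10_000);
        println!("{:02x?}", token);
    }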

Conclusion Forced to suggest a password hasher, I would probably point to LessPass or Master Password, depending on the platform of the person asking. But, for now, I have determined that the security drawbacks of password hashers are not acceptable and I do not recommend them. It makes my password management recommendation shorter anyway: "remember a few carefully generated passwords and shove everything else in a password manager". [Many thanks to Daniel Kahn Gillmor for the thorough reviews provided for the password articles.]
Note: this article first appeared in the Linux Weekly News. Also, details of my research into password hashers are available in the password hashers history article.

15 February 2017

Antoine Beaupré: A look at password managers

As we noted in an earlier article, passwords are a liability and we'd prefer to get rid of them, but the current reality is that we do use a plethora of passwords in our daily lives. This problem is especially acute for technology professionals, particularly system administrators, who have to manage a lot of different machines. But it also affects regular users who still use a large number of passwords, from their online bank to their favorite social-networking site. Despite the remarkable memory capacity of the human brain, humans are actually terrible at recalling even short sets of arbitrary characters with the precision needed for passwords. Therefore humans reuse passwords, make them trivial or guessable, write them down on little paper notes and stick them on their screens, or just reset them by email every time. Our memory is undeniably failing us and we need help, which is where password managers come in. Password managers allow users to store an arbitrary number of passwords and just remember a single password to unlock them all. But there is a large variety of password managers out there, so which one should we be using? At my previous job, an inventory was done of about 40 different free-software password managers in different stages of development and of varying quality. So, obviously, this article will not be exhaustive, but instead focus on a smaller set of some well-known options that may be interesting to readers.

KeePass: the popular alternative The most commonly used password-manager design pattern is to store passwords in a file that is encrypted and password-protected. The most popular free-software password manager of this kind is probably KeePass. An important feature of KeePass is the ability to auto-type passwords in forms, most notably in web browsers. This feature makes KeePass really easy to use, especially considering it also supports global key bindings to access passwords. KeePass databases are designed for simultaneous access by multiple users, for example, using a shared network drive. KeePass has a graphical interface written in C#, so it uses the Mono framework on Linux. A separate project, called KeePassX, is a clean-room implementation written in C++ using the Qt framework. Both support the AES and Twofish encryption algorithms, although KeePass recently added support for the ChaCha20 cipher. AES key derivation is used to generate the actual encryption key for the database, but the latest release of KeePass also added using Argon2, which was the winner of the July 2015 password-hashing competition. Both programs are more or less equivalent, although the original KeePass seems to have more features in general. The KeePassX project has recently been forked into another project now called KeePassXC that implements a set of new features that are present in KeePass but missing from KeePassX like:
  • auto-type on Linux, Mac OS, and Windows
  • database merging which allows multi-user support
  • using the web site's favicon in the interface
So far, the maintainers of KeePassXC seem to be open to re-merging the project "if the original maintainer of KeePassX in the future will be more active and will accept our merge and changes". I can confirm that, at the time of writing, the original KeePassX project now has 79 pending pull requests and only one pull request was merged since the last release, which was 2.0.3 in September 2016. While KeePass and derivatives allow multiple users to access the same database through the merging process, they do not support multi-party access to a single database. This may be a limiting factor for larger organizations, where you may need, for example, a different password set for different technical support team levels. The solution in this case is to use separate databases for each team, with each team using a different shared secret.

Pass: the standard password manager? I am currently using password-store, or pass, as a password manager. It aims to be "the standard Unix password manager". Pass is a GnuPG-based password manager that features a surprising number of features given its small size:
  • copy-paste support
  • Git integration
  • multi-user/group support
  • pluggable extensions (in the upcoming 1.7 release)
The command-line interface is simple to use and intuitive. The following will, for example, create a pass repository and a 20-character password for your LWN account, and copy it to the clipboard:
    $ pass init
    $ pass generate -c lwn 20
The main issue with pass is that it doesn't encrypt the name of those entries: if someone were to compromise my machine, they could easily see which sites I have access to simply by listing the passwords stored in ~/.password-store. This is a deliberate design decision by the upstream project, as stated by a mailing list participant, Allan Odgaard:
Using a single file per item has the advantage of shell completion, using version control, browse, move and rename the items in a file browser, edit them in a regular editor (that does GPG, or manually run GPG first), etc.
Odgaard goes on to point out that there are alternatives that do encrypt the entire database (including the site names) if users really need that feature. Furthermore, there is a tomb plugin for pass that encrypts the password store in a LUKS container (called a "tomb"), although it requires explicitly opening and closing the container, which makes it only marginally better than using full disk encryption system-wide. One could also argue that password file names do not hold secret information, only the site name and username, perhaps, and that doesn't require secrecy. I do believe those should be kept secret, however, as they could be used to discover (or prove) which sites you have access to and then used to perform other attacks. One could draw a parallel with the SSH known_hosts file, which used to be plain text but is now hashed so that hosts are more difficult to discover. Also, sharing a database for multi-user support will require some sort of file-sharing mechanism. Given the integrated Git support, this will likely involve setting up a private Git repository for your team, something which may not be accessible to the average Linux user. Nothing keeps you, however, from sharing the ~/.password-store directory through another file sharing mechanism like (say) Syncthing or Dropbox. You can use multiple distinct databases easily using the PASSWORD_STORE_DIR environment variable. For example, you could have a shell alias to use a different repository for your work passwords with:
    alias work-pass="PASSWORD_STORE_DIR=~/work-passwords pass"
Group support comes from a clever use of the GnuPG multiple-recipient encryption support. You simply have to specify multiple OpenPGP identities when initializing the repository, which also works in subdirectories:
    $ pass init -p Ateam me@example.com joelle@example.com
    mkdir: created directory '/home/me/.password-store/Ateam'
    Password store initialized for me@example.com, joelle@example.com
    [master 0e3dbe7] Set GPG id to me@example.com, joelle@example.com.
     1 file changed, 2 insertions(+)
     create mode 100644 Ateam/.gpg-id
The above will configure pass to encrypt the passwords in the Ateam directory for me@example.com and joelle@example.com. Pass depends on GnuPG to do the right thing when encrypting files and how those identities are treated is entirely delegated to GnuPG's default configuration. This could lead to problems if arbitrary keys can be injected into your key ring, which could confuse GnuPG. I would therefore recommend using full key fingerprints instead of user identifiers. Regarding the actual encryption algorithms used, in my tests, GnuPG 1.4.18 and 2.1.18 seemed to default to 256-bit AES for encryption, but that has not always been the case. The chosen encryption algorithm actually depends on the recipient's key preferences, which may vary wildly: older keys and versions may use anything from 128-bit AES to CAST5 or Triple DES. To figure out which algorithm GnuPG chose, you may want to try this pipeline:
    $ echo test | gpg -e -r you@example.com | gpg -d -v
    [...]
    gpg: encrypted with 2048-bit RSA key, ID XXXXXXX, created XXXXX
      "You Person You <you@example.com>"
    gpg: AES256 encrypted data
    gpg: original file name=''
    test
As you can see, pass is primarily a command-line application, which may make it less accessible to regular users. The community has produced different graphical interfaces that are either using pass directly or operate on the storage with their own GnuPG integration. I personally use pass in combination with Rofi to get quick access to my passwords, but less savvy users may want to try the QtPass interface, which should be more user-friendly. QtPass doesn't actually depend on pass and can use GnuPG directly to interact with the pass database; it is available for Linux, BSD, OS X, and Windows.

Browser password managers Most users are probably already using a password manager through their web browser's "remember password" functionality. For example, Chromium will ask if you want it to remember passwords and encrypt them with your operating system's facilities. On Windows, this encrypts the passwords with your login password; on GNOME, it stores them in the gnome-keyring storage. If you synchronize your Chromium settings with your Google account, Chromium will store those passwords on Google's servers, encrypted with a key that is stored in the Google Account itself. So your passwords are then only as safe as your Google account. Note that this was covered here in 2010, although back then Chromium didn't synchronize with the Google cloud or encrypt with the system-level key rings; that facility was only added in 2013. In Firefox, there's an optional, profile-specific master password that unlocks all passwords. In this case, the issue is that browsers are generally always open, so the vault is always unlocked. And this is for users who actually do pick a master password; users are often completely unaware that they should set one. The unlocking mechanism is a typical convenience-security trade-off: either users need to constantly input their master passwords to log in or they don't, and the passwords are available in the clear. In this case, Chromium's approach of actually asking users to unlock their vault seems preferable, even though the developers refused to implement the feature for years. Overall, I would recommend against using a browser-based password manager. Even if it is not used for critical sites, you will end up with hundreds of such passwords that are vulnerable while the browser is running (in the case of Firefox) or at the whim of Google (in the case of Chromium). Furthermore, the "auto-fill" feature that is often coupled with browser-based password managers is vulnerable to serious attacks, as discussed below. Finally, because browser-based managers generally lack a proper password generator, users may fail to use properly generated passwords, which can then be easily broken. A password generator has been requested for Firefox, according to this feature request opened in 2007, and there is a password generator in Chrome, but it is disabled by default and hidden in the mysterious chrome://flags URL.

Other notable password managers Another alternative password manager, briefly mentioned in the previous article, is the minimalistic Assword password manager that, despite its questionable name, is also interesting. Its main advantage over pass is that it uses a single encrypted JSON file for storage, and therefore doesn't leak the name of the entries by default. In addition to copy/paste, Assword also supports automatically entering passphrases in fields using the xdo library. Like pass, it uses GnuPG to encrypt passphrases. According to Assword maintainer Daniel Kahn Gillmor in an email, the main issue with Assword is "interaction between generated passwords and insane password policies". He gave the example of the Time-Warner Cable registration form that requires, among other things, "letters and numbers, between 8 and 16 characters and not repeat the same characters 3 times in a row". Another well-known password manager is the commercial LastPass service, which released a free-software command-line client called lastpass-cli about three years ago. Unfortunately, the server software of the lastpass.com service is still proprietary. And given that LastPass has had at least two serious security breaches since that release, one could legitimately question whether this is a viable solution for storing important secrets. In general, web-based password managers expose a whole new attack surface that is not present in regular password managers. A 2014 study by University of California researchers showed that each of the five password managers they examined was vulnerable to at least one of the attacks they studied. LastPass was, in particular, vulnerable to a cross-site request forgery (CSRF) attack that allowed an attacker to bypass account authentication and access the encrypted database.

Problems with password managers When you share a password database within a team, how do you revoke a team member's access? While you can, for example, re-encrypt a pass database with new keys (thereby removing or adding certain accesses) or change the password on a KeePass database, a hostile party could have made a backup of the database before the revocation. Indeed, in the case of pass, older entries are still in the Git history. Access revocation is therefore a problem with all shared password managers, as it may actually mean going through every password and changing them online. This fundamental problem with shared secrets can be better addressed with a tool like Vault or SFLvault. Those tools aim to provide teams with easy ways to store dynamic tokens like API keys or service passwords and share them not only with other humans, but also make them accessible to machines. The general idea of those projects is to store secrets in a central server and send them directly to relevant services without human intervention. This way, passwords are not actually shared anymore, which is similar in spirit to the approach taken by centralized authentication systems like Kerberos. If you are looking at password management for teams, those projects may be worth a look. Furthermore, some password managers that support auto-typing were found to be vulnerable to HTML injection attacks: if some third-party ad or content is able to hijack the parent DOM content, it can masquerade as a login form and fool auto-typing software, as demonstrated by this paper presented at USENIX 2014. Fortunately, KeePass was not vulnerable according to the security researchers, but LastPass was, again, vulnerable.
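In pass, for example, revoking someone amounts to re-running the init command with only the remaining keys, which re-encrypts the affected entries; a sketch, reusing the hypothetical Ateam store from above:
    $ pass init -p Ateam me@example.com
All entries under Ateam are re-encrypted for me@example.com alone, but joelle@example.com could still read any backup or older Git revision of the files, which is exactly why the passwords themselves must also be changed.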

Future of password managers? All of the solutions discussed here assume you have a trusted computer you regularly have access to, which is a usage pattern that seems to be disappearing for a majority of the population. You could consider your phone to be that trusted device, yet a phone can be lost or stolen more easily than a traditional workstation or even a laptop. And while KeePass has Android and iOS ports, those do not resolve the question of how to share the password storage among those devices or how to back them up. Password managers are fundamentally file-based, and the "file" concept seems to be quickly disappearing, faster than we technologists sometimes like to admit. Looking at some relatives' use of computers, I notice it is less about "files" than about images, videos, recipes, and various abstract objects that are stored in the "cloud". They do not use local storage so much anymore. In that environment, password managers lose their primary advantage, which is a local, somewhat offline file storage that is not directly accessible to attackers. Certain password managers are therefore designed specifically for the cloud, like LastPass or the browsers' profile-synchronization features, but they do not necessarily address the inherent problems of cloud storage, and they open up huge privacy and security issues that we absolutely need to address. This is where the "password hasher" design comes in. Also known as "stateless" or "deterministic" password managers, password hashers are emerging as a convenient solution that could possibly replace traditional password managers as users switch from generic computing platforms to cloud-based infrastructure. We will cover password hashers and the major security challenges they pose in a future article.
Note: this article first appeared in the Linux Weekly News.

22 October 2016

Matthieu Caneill: Debugging 101

While teaching a class on concurrent programming this semester, I realized during the labs that most of the students couldn't properly debug their code. They are at the end of a two-year curriculum and know many different programming languages and frameworks, but when it comes to tracking down a bug in their own code, they often lack the basics. Instead of debugging for them, I tried to give them general directions that they could apply to the next bugs. I will try here to summarize the very first, basic things to know about debugging. Because, remember, writing software is 90% debugging, and 10% introducing new bugs (that is not from me, but I could not find the original quote). So here is my take on Debugging 101. Use the right tools Many good tools exist to assist you in writing correct software, and it would put you behind in terms of productivity not to use them. Editors that catch syntax errors as you write them, for example, will help you a lot. And there are many features out there in editors, compilers, and debuggers that will prevent you from introducing trivial bugs. Your editor should be your friend: explore its features and customization options, and find an efficient workflow with them, one that you like and can improve over time. The best way to fix bugs is not to have them in the first place, obviously. Test early, test often I've seen students write code for an hour before running make, which would then fail so hard that hundreds of lines of errors and warnings were printed. There are two main reasons doing this is a bad idea: the error output becomes overwhelming, and you can no longer tell which of the many lines you just wrote introduced which failure. I recommend testing your code (compilation and execution) every few lines you write. When something breaks, chances are it will come from the last line(s) you wrote. Compiler errors will be shorter, and will point you to the same place in the code. Once you get more confident using a particular language or framework, you can write more lines at once without testing. That's a slow process, but that's okay. If you set up the right keybinding for compiling and executing from within your editor, it shouldn't be painful to test early and often. Read the logs Spot the places where your program/compiler/debugger writes text, and read that output carefully. It can be your terminal (quite often), a file in your current directory, a file in /var/log/, a web page on a local server, anything. Learn where different software writes logs on your system, and integrate reading them into your workflow. Often, the logs will be your only information about the bug. Often, they will tell you where the bug lies. Sometimes, they will even give you hints on how to fix it. You may have to filter out a lot of garbage to find the relevant information about your bug. Learn to spot keywords like error or warning. In long stack traces, spot the lines concerning your own files, because more often than not your code is to blame rather than deeper library code. grep the logs with relevant keywords. If you have the option, colorize the output. Use tail -f to follow a file getting updated. There are so many ways to grasp logs, so find what works best for you and never forget to use it!
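To make that advice concrete, here is a minimal log-watching pipeline; a sketch that assumes a syslog-style log file and GNU grep, so adjust the path and keywords to your own system:
    $ tail -f /var/log/syslog | grep --color=auto -i -E 'error|warning'
tail -f keeps printing lines as they are appended to the file, while grep filters and highlights the interesting keywords, so you can leave the terminal open while reproducing the bug.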
Print foobar That one doesn't concern compilation errors (unless it's a Makefile error, in which case that file is your code anyway). When the program's logs and output fail to tell you where an error occurred (oh hi, Segmentation fault!), and before having to dive into a memory debugger or a system trace tool, spot the portion of your program that causes the bug and add some print statements there. You can either print("foo") and print("bar"), just to know whether or not your program reaches a certain place in your code, or print(some_faulty_var) to get more insight into your program's state. It will give you precious information.
std::cerr << "foo" << std::endl;
my_db.connect(); // is this broken?
std::cerr << "bar" << std::endl;
In the example above, you can be sure it is the connection to the database my_db that is broken if you get foo but not bar on your standard error. (That is a hypothetical example; if you know something can break, such as a database connection, then you should always enclose it in a try/catch block.) Isolate and reproduce the bug This point is linked to the previous one. You may or may not have isolated the line(s) causing the bug, but maybe the issue is not always raised. It can depend on many other things: the program or function parameters, the network status, the amount of memory available, the decisions of the OS scheduler, the user's rights on the system or on some files, etc. More generally, any assumption you made about an external dependency can turn out to be wrong (even if it's right 99% of the time). Depending on the context, try to isolate the set of conditions that trigger the bug. It can be as simple as "when there is no internet connection", or as complicated as "when the CPU load of some external machine is too high, it's a leap year, and the input contains illegal UTF-8 characters" (ok, that one is fucked up; but it surely happens!). You need to be able to reproduce the bug reliably, in order to be sure later that you have indeed fixed it. Of course, when the bug is triggered on every run, it can be frustrating that your program never works, but it will generally be easier to fix. RTFM Always read the documentation before reaching out for help. Be it man, a book, a website, or a wiki, you will find precious information there to assist you in using a language or a specific library. It can be quite intimidating at first, but documentation is often organized the same way: you're likely to find a search tool, an API reference, a tutorial, and many examples. Compare your code against them. Check the FAQ; maybe your bug and its solution are already referenced there. You'll rapidly get used to the way documentation is organized, and you'll become more and more efficient at finding what you need instantly. Always keep the doc window open! Google and Stack Overflow are your friends Let's be honest: many of the bugs you'll encounter have been encountered before. Learn to write efficient queries for search engines, and use the knowledge you can find on question-and-answer sites like Stack Overflow. Read the answers and comments. Be wise, though, and never blindly copy and paste code from there. That can be as bad as introducing malicious security issues into your code, and you won't learn anything. Oh, and don't copy and paste anyway: you have to be sure you understand every single line, so you might as well write them by hand; it's also better for memorizing the issue. Take notes Once you have identified and solved a particular bug, I advise writing it down. No need for shiny interfaces: keep a list of your bugs along with their solutions in one or many text files, organized by language or framework, that you can easily grep. It can seem slightly cumbersome to do so, but it has proved (at least to me) to be very valuable. I can often recall that I have encountered some buggy situation in the past, but I don't always remember the solution. Instead of losing all the debugging time again, I search my bug/solution list first, and when it's a hit I'm more than happy I kept it.
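If those notes live in plain text, retrieving an old solution is a single search away; a sketch, assuming a hypothetical ~/notes/bugs directory:
    $ grep -ri 'segmentation fault' ~/notes/bugs/
Any plain-text search tool would do; the point is simply that the notes are greppable at all.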
Further reading Remember, this was only Debugging 101: the very first steps for debugging code on your own, instead of getting frustrated and staring helplessly at your screen without knowing where to begin. As you write more software, you'll get used to more efficient workflows, and you'll discover tools that exist to help you write bug-free code and spot complex bugs efficiently: memory debuggers, system trace tools, and other aids that belong more to a software engineering course than to a Debugging 101 blog post. But it's good to know as early as possible that they exist, and if you read the manuals there's no reason you can't rock with them! Don't hesitate to comment on this, and provide your own debugging 101 tips! I'll be happy to update the article with valuable feedback. Happy debugging!

14 October 2016

Antoine Beaupré: Managing good bug reports

Bug reporting is an art form that is too often neglected in software projects. Bug reports allow contributors to participate without deep technical knowledge and at the same time provide a crucial space for developers to be made aware of issues with their software that they could not have foreseen or found themselves, for lack of resources, variety or imagination.

Prior art Unfortunately, there are rarely good guidelines for submitting bug reports. Historically, people have pointed towards How to report bugs effectively or How to ask questions the smart way. While those guides can be useful for motivated people and may seem attractive references for project managers, they suffer from serious issues:
  • they are written by technical people, for non-technical people
  • as a result, they have a deeply condescending attitude such as calling people "stupid" or various animal names like "mongoose"
  • they are also very technical themselves: one starts with a copyright notice and a changelog, the other uses magic words like "Core dumps" and $Id$
  • they are too long: sgtatham's is about 3600 words long, esr's is even longer at about 11800 words. Those texts will take an average reader about 20 to 60 minutes to read, according to research
Individual projects have their own guides as well. Linux has the REPORTING_BUGS file, a much shorter 1200 words that can be read in under 5 minutes, provided that you can understand the topic at hand. Interestingly, that guide refers to both esr's and sgtatham's guidelines, which means that, in the degenerate case where the user hasn't had the "privilege" of reading esr's prose already, they will have an extra hour and a half of reading to do to have honestly followed the guidelines before reporting the bug. I often find good documentation in the Tails project. Their bug reporting guidelines are easily accessible and quick to read, although they still might be too technical. It could be argued that you need to get technical at some point to get that information out, of course. In the Monkeysign project, I have started a bug reporting guide that doesn't yet address all those issues. I am considering writing a new guide, but I figured I would look at other people's work and get feedback before writing my own standard.

What's the point? Why have those documents been written? Are people really expected to read them before seeking help? It seems to me unlikely that someone would:
  1. be motivated enough to do something about a broken part of their computer
  2. figure out they can do something about it
  3. read a fifteen-thousand-word novel about how to report a bug...
  4. just to finally write a 20-line bug report that has no warranty of support attached to it
And if I were a paying customer, I wouldn't want to be forced to waste my time reading that prose either: it's your job to help me fix your broken things, not the reverse. As someone doing consulting these days, I totally understand: it's not you, the user, it's us, the developers, who have a problem. We have been socialized through computers, and it makes us weird and obtuse, but that's no excuse, and we need to clean up our act. Furthermore, it's surprising how often we get (and make!) bug reports that are difficult to use. The Monkeysign project is very "technical" and I expected the bug reports I would get to be well written, with ways to reproduce and so on, but it happened that I received bug reports that were all over the place, had no way of reproducing the issue, or were simply incomplete. Those three bug reports were filed by people that I know to be very technically capable: one is a fellow Debian developer, the second had filed a good bug report 5 days before, and the third one is a contributor that had sent good patches before. In all three cases, they knew what they were doing. Those three people have probably read the guidelines mentioned above at some point in the past. They may even have read the Monkeysign bug reporting guidelines as well. I can only explain those bug reports by a lack of time: people thought the issue was obvious, that it would get fixed rapidly because, obviously, something is broken. We need a better way.

The takeaway What are those guides trying to tell us?
  1. ask questions in the right place
  2. search for similar questions and issues before reporting the bug
  3. try to make the developers reproduce the issues
  4. failing that, try to describe the issue as well as you can
  5. write clearly, be specific and verbose yet concise
There are obviously contradictions in there, like sgtatham telling us to be verbose and esr telling us, basically, not to be. There is definitely a tension there, and there are many, many more details about how great bug reports can be if done properly. I tend towards the side of terseness in our descriptions: people who know how to be concise will be, and people who don't will most likely not learn by reading a 12,000-word novel that, in itself, didn't manage to be parsimonious. But I am willing to allow for verbosity in bug reports: I prefer too many details to missing a key bit of information.

Issue trackers Step 1 is our job: we should send people to the right place and give them the right tools. Monkeysign used to manage bugs with bugs-everywhere, and this turned out to be a terrible idea: you had to understand both git and bugs-everywhere to file any bug report. As a result, exactly zero bug reports were filed by non-developers during the whole time BE was used, although some bugs were filed in the Debian Bugtracker. So have a good bug tracker. A mailing list or email address is not a good bug tracker: you lose track of old issues, and it's hard for newcomers to search the archives. It does have the advantage of providing a unified interface for the support forum and bug tracking, however. Redmine, Gitlab, Github, and others are all decent-enough bug trackers. The key point is that the issue tracker should be publicly available, and users should be able to register easily to file new issues. You should also be able to mass-edit tickets, and users should be able to discover the tracker's features easily. I am sorry to say that the Debian BTS somewhat falls short on those two features. Step 2 is a shared responsibility: there should be an easy way to search for issues, and we should help the user look for similar issues. Stack Exchange sites do an excellent job at this, by automatically searching for similar questions while you write yours, in an attempt to weed out duplicates. Duplicates still happen, but they can then be clearly marked and linked with a distinct mechanism. Most bug trackers do not offer such high-level functionality, but they should, so I feel the fault lies more on "our" end than on the user's end.

Reproducing the environment Steps 3 and 4 are more or less the user's responsibility. We can detail in our documentation how to clearly share the environment where we reproduced the bug, for example, but in the end, the user decides if they want to share that information or not. In Monkeysign, I have finally implemented joeyh's suggestion of shipping the test suite with the program. I can now tell people to run the test suite in their environment to see if this is a regression that is specific to their environment - so a known bug, in a way - or a novel bug for which I can look at writing a new unit test. I also include far more information about the environment in the --version output, an idea I brought forward in the borg project to ease debugging. That way, people can just send the output of monkeysign --test and monkeysign --version, and I have a very good overview of what is happening on their end. Of course, Monkeysign also supports the usual --verbose and --debug flags that users should enable when reproducing issues. Another idea is to report bugs directly from the application. We have all seen Firefox and other software ship automatic bug reporting tools, but somehow those seem unsatisfactory for a user: we get no feedback about where the report goes or whether it's followed up on. It is useful for larger projects to get statistical data, but not so useful for users in the short term. Monkeysign tries to handle exceptions in the code in a graceful way, but could do better. We use a small library to handle exceptions, but that library has since been improved to directly file bugs against the Github project. This assumes the user is logged into Github, but it is nice to pre-populate bug reports with the relevant information up front.
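In practice, the reporting ritual then boils down to running two commands and pasting their output into the report:
    $ monkeysign --version
    $ monkeysign --test
Adding --verbose or --debug while actually reproducing the problem completes the picture.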

Issue templates In the meantime, to make sure people provide enough information, I have now moved a lot of the bug reporting guidelines to a separate issue template. That issue template is available through the issue creation form now, although it is not enabled by default, a weird limitation of Gitlab. Issue templates are available in Gitlab and Github. Issue templates somewhat force users into a straitjacket: there is already something to structure their bug report. Those could be distinct form elements that have to be filled in, but I like the flexibility of the template, and the possibility for users to escape the formalism and just plead for help in their own way.
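As an illustration only (this is not Monkeysign's actual template), a Gitlab issue template is simply a Markdown file committed under .gitlab/issue_templates/ in the repository, for example a Bug.md containing something like:
    ## Steps to reproduce
    
    ## What happened, including exact error messages
    
    ## Version information
    Paste the output of monkeysign --version and monkeysign --test here.
Once committed, the template shows up as a choice in the issue creation form.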

Issue guidelines In the end, I opted for a short few paragraphs in the style of the Tails documentation, including a pointer to sgtatham's guide as optional further reading:
  • Before you report a new bug, review the existing issues in the online issue tracker and the Debian BTS for Monkeysign to make sure the bug has not already been reported elsewhere.
  • The first aim of a bug report is to tell the developers exactly how to reproduce the failure, so try to reproduce the issue yourself and describe how you did that.
  • If that is not possible, try to describe what went wrong in detail. Write down the error messages, especially if they have numbers.
  • Take the necessary time to write clearly and precisely. Say what you mean, and make sure it cannot be misinterpreted.
  • Include the output of monkeysign --test, monkeysign --version and monkeysign --debug in your bug reports. See the issue template for more details about what to include in bug reports.
If you wish to read more about issues regarding communication in bug reports, you can read How to Report Bugs Effectively, which takes around 20 to 30 minutes.
Unfortunately, short of rewriting sgtatham's guide, I do not feel there is much more we can do as a general guide. I find esr's guide to be too verbose and commanding, so sgtatham it will be for now.

The prose and literacy In the end, there is a fundamental issue with reporting bugs: it assumes our users are literate and capable of writing amazing prose that we will enjoy reading as much as the latest J.K. Rowling novel (if you're into that kind of thing). It's just an unreasonable expectation: some of your users don't even speak the same language as you, let alone read or write it. This makes for challenging collaboration, to say the least. This is where automated reporting makes sense: it doesn't require the user's intervention, and the communication is mediated by machines, without humans and their pesky culture getting in the way. But we should, as maintainers, "be liberal in what we accept and conservative in what we send". Be tolerant, and help your users fix their issues. It's what you are there for, after all. And in the end, we all fail the same way. In an attempt to improve the situation on bug reporting guides, I seem to have myself written a 2,000-word short story that will have taken up a hopefully pleasant 10 minutes of your time. Hopefully I will have succeeded at being clear, specific, verbose, and concise all at once, and I look forward to your feedback on how to improve our bug reporting culture.
