Dowry
A few days back, I had posted about the movie Raksha Bandhan and whatever I felt about it. Sadly, just a couple of days back, somebody shared this link. Part of me was shocked and part of me was not. A couple of acquaintances of mine had said the same thing about their daughters in the past. And in such situations you are generally left speechless because you don't know what the right thing to do is. If he has shared it with you, an outsider, how many times must he have said the same to his wife and daughters? And from what little I have gathered in life, many people have justified it on similar lines. And while there were protests, sadly the book was not removed. Now if nurses are reading such literature, you can tell how their thought processes might be forming :(. And these are the people we call on when we are sick and tired :(. And I have not even taken into account how the girls/women themselves might be feeling. There are similar things in another country, though probably not the same, nor with the same motivations, although the feeling of helplessness in both would be a common thing.
But such statements are not alone. Another gentleman, in a slightly different context, shared this as well.
The above is a statement shared in a book recommended for the CTET (Central Teacher's Eligibility Test, which became mandatory to take once the RTE (Right To Education) Act came in). The statement says: "People from cold places are white, beautiful, well-built, healthy and wise. And people from hot places are black, irritable and of violent nature."
Now while I can agree with one part of the statement, that people residing in colder regions are fairer than others, there are loads of other factors that determine fairness or skin color/pigmentation. After a bit of searching, I came to know that this and similar articulations have been made under an idea called "Environmental Determinism". Now if you look at that page, you will realize this is what colonialism was and is all about: the idea that the white man had a god-given right to rule over others. Similarly, if you are fair, you can lord over others. It seems simplistic, yet it has a powerful hold on many people in India. Forget the common man, this thinking is and was applicable to some of our better-known freedom fighters. Pune's own Bal Gangadhar Tilak wrote The Arctic Home in the Vedas. It sort of talks about Aryans and how they invaded India and settled here. I haven't read the book or had access to it, so I have to rely on third-party sources. The reason I'm sharing all this is that the right-wing has been doing this myth-making for some time now, and unless and until you put a light on it, it will continue to perpetuate. Those who have read this blog know that India is and has been steeped in casteism forever. They even took the "fair" comment and applied it to all Brahmins. According to them, all Brahmins are fair and hence have a god-given right to lord over others. What is called the Eton old boys' network serves the same purpose in this casteism. The only solution is to put those ideas under the limelight and investigate them. To take the above, how does one prove that all fair people are wise and peaceful while all black and brown people are violent? If that is so, how does one account for Mahatma Gandhi, Martin Luther King Jr., Nelson Mandela, Michael Jackson? The list is probably endless. And not to forget that when Mahatma Gandhi did his nonviolent movements, whether in India or in South Africa, both black and brown people took part in the millions. There are similar examples for Martin Luther King Jr.; I know and have read of so many non-violent civil movements that took place in the U.S., for example Rosa Parks and the Montgomery Bus Boycott. So just based on these examples, one can conclude that at least the part about the fair having an exclusive right to being wise and noble is not correct.
Now as far as violence goes, every race, every community has done violence in the past or been a victim of it. So no one is or can be blameless. Still, in light of the above statement, one can ask: who were the Vikings? Both popular imagination and serious history share stories about the Vikings. They were somewhat nomadic in nature; even though they had permanent settlements, they went on raids, raped women, captured both men and women and sold them as slaves. They are what pirates came to be, though not the kind Hollywood romanticizes. Europe itself has been a tale of conflict since time immemorial. It is only after the formation of the EU that most of these countries stopped fighting each other; from a historical perspective, that is very recent. So even the claim of the fair being non-violent dies in the face of this evidence. I could go on, but this is enough on that topic.
Railways and Industrial Action around the World.
While I have shared about Railways so many times on this blog, it continues to fascinate me how people don't understand the first thing about Railways. For example, Railways is a natural monopoly. What that means is: look at any type of railway privatization around the world and you will see it remains a monopoly. Unlike roads or the skies, Railways is and will always be limited by infrastructure and the ability to build new infrastructure. Unlike on roads or in the skies (even they have their limits), you cannot run train services on a whim. At any particular point in time, only a single train can and should occupy a stretch of the railway network. You could have more trains on one line, but then front or rear-end collisions become a real possibility. You also need all sorts of good and reliable communications and redundant infrastructure, so that if one thing fails you have something else in place. The reason is that a single train can carry anywhere from 2,000 to 5,000 passengers or more. While this is true of Indian Railways, railways around the world would probably have similar numbers. It is in this light that I share the videos below.
To be more precise, see the fuller video
Now to give context to the recording above, Mick Lynch is the general secretary of the RMT. For those who came in late, both the UK and the U.S. have been threatened by railway strikes. And the reason for the strikes, or threats of strikes, is similar. From the company perspective, all they care about is investing less and making the most profit that can be handed to equity shareholders. At the same time, they have frozen the salaries of railway workers for the last 3 years, while the politicians who were asking the questions apparently gave themselves a raise twice this year. They are asking workers to negotiate at 8% while inflation in the UK has been 12.3% and is projected to go higher. And it is not only about the money. Since the UK privatized its railways, it stopped investing in the infrastructure. That meant the UK's railway infrastructure started falling behind over a period of time, and is arguably even behind, say, Indian Railways, which used to provide the most bang for the buck. And Indian Railways is far from ideal. Ironically, most of the operators in the UK are the nationalized railways of France, Germany etc., but after the hard Brexit, they too are mulling cutting their operations short. There is also the EU Entry/Exit system that would come in next year.
Why am I sharing what is happening in UK Rail? Because the Indian Government wants to follow the same path, fooling the public by saying "we would do it better". What will inevitably happen is that ticket prices go up, people no longer use the service, the number of services goes down, and eventually they are cancelled. This has happened in Indian Railways as well as in the airlines. In fact, GOI announced a credit scheme just a few days back to help airlines stay afloat. I was chatting with a friend who had come down to Pune from Chennai, and the round trip cost him INR 15k/- on that single trip alone. We reminisced how a few years ago, 8 years to be precise, we could buy an air ticket for INR 2.5k/- just a few days before the trip, and we did. I remember doing at least a dozen odd trips via air in the years before 2014. My friend used to come to Pune almost every weekend because he could afford it; now he can't do that. And these are people who are in the top 5-10% of the population. And this is not just in the UK, but also in the United States. There is one big difference though: the U.S. railways are mainly freight carriers while UK railway operations are mostly passenger based. What was and is interesting is that Scotland had to nationalize its services as it realized the operators cannot or will not function when they are most needed. Most of the public even in the UK seem to want a nationalized rail service, or at least their polls say so. So it will definitely be interesting to see what happens in the UK next year.
In the end, I know I promised to share about books, but the above incidents have just been too fascinating not to share, along with what I think about them. Free markets function well where there is competition, for example what has been happening in China with EVs, but not where you have natural monopolies. In any railway privatization, either you hand over the whole area to one operator, who then has no motivation to improve, or you have multiple operators, and then there is always haggling as to who will run which train and at what time. In either scenario it doesn't work; it raises prices while not delivering anything better.
I take examples from the UK because a lot of things in India are still a legacy of the British. The whole civil service that was created in 1953 was a copy of the British civil service of that time, and it remains so to this day.
P.S. Just came to know that Kwasi Kwarteng was sacked as UK Chancellor. I do commend Truss for facing the press, even though she might be dumped a week later, unlike our PM who hasn't faced a single press conference in the last 8-odd years.
https://www.youtube.com/watch?v=oTP6ogBqU7of
The difference between Indian and UK politics seems to be that the English are now asking questions, while here in India most people are still sleeping without a care in the world.
Another thing to note: MiniDebConf Palakkad is going to happen on 12-13th November 2022. I am probably not going to go, but would request everyone who wants to do something in free software to attend it. I am not sure whether I would be of any use there, and also when I get back it would be to an empty house. But for people young and old who want to do anything with free/open source software, it is a chance not to be missed. Registration closes on 1st November 2022. All the best, break a leg!
Just read this, beautifully done.
J.R.R. Tolkien
Now unless you have been living under a rock, I am sure you know who Mr. Tolkien is. The gentleman passed away on 2nd September 1973 at the age of 81. And this gives fans like me an occasion to talk about fantasy, fantasy authors, and the love-hate relationship we have with them. For the record, I am currently reading Babylon Steel by Gaie Sebold. I won't go into many details (I never like to; if I enjoy a book, I want it to remain mysterious rather than heap praise on it, simply so that the next person enjoys it as much as I did, without any expectations). This book has plenty of sex, so I wouldn't recommend it for teenagers but rather for mature audiences, although for the life of me I couldn't find any rating for the book. I did come across Common Sense Media, but unfortunately it isn't well known beyond perhaps the people who already use it. They sadly don't have a Google/Android app. And before anybody comments, I know that Android is no longer interested in supporting FOSS; their loss, not ours, but that is entirely a blog post/article in itself, so let's leave it aside for now.
Fantasy
So before talking about Mr. Tolkien and his creations, let's talk a bit about fantasy itself. It is often said that the conscious mind accounts for less than 5% of our thinking, while the rest happens in the subconscious and the unconscious mind (the three-mind model). So any thought or idea first germinates in either the unconscious or the subconscious part of the mind and then comes into the conscious mind. It is also the reason we dream: that's the subconscious and unconscious mind at work. While we mostly apply the word fantasy to books, it is all around us, and not just in prose: song, dance, and all sorts of creativity are fantasy. Even sci-fi actually comes from fantasy. Unfortunately, for reasons best known to people, they split out sci-fi and even divided fantasy into high fantasy and low fantasy. I am not going to go much into that, but here's a helpful link for those who might want to look into it more. Now the question arises: why do people write? I have asked this question many a time of the authors I have met, and the answers are as varied as they come. The two most common answers are the need to write (an itch they can't or won't control) and that it is extremely healing. In my own case, even writing mere blog posts I find unburdening and cathartic. I believe this last part is what drove Mr. Tolkien and the story and arc that LOTR became.
Tolkien, LOTR, World War I
The casual reader might not know, but if you followed or were curious about Mr. Tolkien, you would have found out that he served in World War 1, or what is known as the Great War. It was supposed to be the war that ended all wars but sadly didn't. One of the things that set Mr. Tolkien apart from many of his peers was that he was very open about himself and corresponded with people far and wide. There is actually a book called The Letters of J.R.R. Tolkien that I hope to get at one of the used book depots. That book spans about 480 pages and gives all the answers as to why Mr. Tolkien made Middle-earth as he did. I sadly haven't had the opportunity to get it, and it is somewhat expensive. But I'm sure that if World War 1 hadn't happened, and Mr. Tolkien hadn't taken part and experienced what he experienced, we wouldn't have LOTR. I can bet losing his friends and comrades, and the pain he felt for those around him, propelled him to write about a land and a race called Hobbits. I haven't done enough fantasy reading, but I do feel that his description of hobbits and the way they were and are is unique. The names and surnames he used were for humor as well as to make a statement about them. Having names such as Harfoots, Proudfoots, Tooks and others just wouldn't be for fun, would it? Also, the responses and the behavior of the Hobbits in the four books are almost human-like. It is almost like they are or were our cousins at one point in time, but we allowed ourselves to forget. Even the corruption of humans has been shown, as well as self-doubt.
There is another part that I found and find fascinating: unlike most books where there is a single hero, in LOTR we have many heroes and heroines. This, again, I would attribute to Mr. Tolkien and the heroism he saw on the battlefield and beyond it. All the tender emotions he shares with readers like us are because either he himself or others around him were subjected to grace and wonderment. This is all I derive from the books; those who have The Letters of J.R.R. Tolkien, feel free to correct me. I was supposed to write this yesterday, but real life has its own way.
I could go on and on; perhaps at a later date or time I may expand on it, but it isn't a coincidence that The Lord of the Rings: The Rings of Power started broadcasting on the same date that Mr. Tolkien died. In the very end, fantasy is something all humans have, no matter how rich or poor you are. If one were to look, artists like Leonardo da Vinci, who often didn't have enough for two square meals in a day, were still somehow inspired to sketch models of flying machines which are shockingly similar to the real thing. Many may not know that almost all mammals, including apes, monkeys, squirrels, and even dolphins, dream. And all of them have elaborate, complex dreams just as we do. Sadly, this is not known by most people; otherwise we would be so much more empathetic towards our cousins in the animal kingdom.
The seventeenth release of littler as a CRAN package just landed, following in the now sixteen year history (!!) as a package started by Jeff in 2006, and joined by me a few weeks later.
littler is the first command-line interface for R as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package, which Rscript only started to do in recent years.
littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet the build system could be extended; see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the Github repo, as well as in the examples vignette.
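As a quick illustration of the piping and shebang points, here is a minimal sketch; the r binary name and the argv vector follow littler's documented conventions, while the file name and example expressions are placeholders of mine:

    # piping: r evaluates the R expression it reads from stdin
    echo 'cat(2 + 2, "\n")' | r

    #!/usr/bin/env r
    ## hello.r -- a tiny shebang script; littler exposes the command-line
    ## arguments as the character vector 'argv'
    cat("hello from littler, arguments:", paste(argv, collapse = " "), "\n")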
This release, the first since last December, further extends install2.r to accept multiple repos options thanks to Tatsuya Shima, overhauls and substantially extends installBioc.r thanks to Pieter Moris, and includes a number of (generally smaller) changes I added (see below).
The full change description follows.
Changes in littler version 0.3.16 (2022-08-28)
Changes in package
The configure code checks for two more headers
The RNG seeding matches the current version in R (Dirk)
Changes in examples
A cowu.r 'check Windows UCRT' helper was added (Dirk)
A getPandoc.r downloader has been added (Dirk)
The -r option to install2.r has been generalized (Tatsuya Shima in #95)
The rcc.r code / package checker now has a valgrind option (Dirk)
install2.r now installs to first element in .libPaths() by default (Dirk)
A very simple r2u.r helper has been added (Dirk)
installBioc.r has been generalized and extended similarly to install2.r (Pieter Moris in #103)
My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.
Comments and suggestions are welcome at the GitHub repo.
If you like this or other open-source work I do, you can now sponsor me at GitHub.
Rogers had a catastrophic failure in July
2022. It affected emergency services (as in: people couldn't call 911,
but also some 911 services themselves failed), hospitals (which
couldn't access prescriptions), banks and payment systems (as payment
terminals stopped working), and regular users as well. The outage
lasted almost a full day, and Rogers took days to give any technical
explanation on the outage, and even when they did, details were
sparse. So far the only detailed account is from outside actors like
Cloudflare which seem to point at an internal BGP failure.
Its impact on the economy has yet to be measured, but it probably cost
millions of dollars in wasted time and possibly led to
life-threatening situations. Apart from holding Rogers (criminally?)
responsible for this, what should be done in the future to avoid such
problems?
It's not the first time something like this has happened: it happened to
Bell Canada as well. The Rogers outage is also strangely similar
to the Facebook outage last year, but, to its credit, Facebook
did post a fairly detailed explanation only a day later.
The internet is designed to be decentralised, and having
large companies like Rogers hold so much power is a crucial mistake
that should be reverted. The question is how. Some critics were
quick to point out that we need more ISP diversity and competition,
but I think that's missing the point. Others have suggested that the
internet should be a public good or even straight out
nationalized.
I believe the solution to the problem of large, private, centralised
telcos and ISPs is to replace them with smaller, public, decentralised
service providers. The only way to ensure that works is to make sure
that public money ends up creating infrastructure controlled by the
public, which means treating ISPs as a public utility. This has been
implemented elsewhere: it works, it's cheaper, and provides better
service.
A modest proposal
Global wireless services (like phone services) and home internet
inevitably grow into monopolies. They are public utilities, just like
water, power, railways, and roads. The question of how they should be
managed is therefore inherently political, yet people don't seem to
question the idea that only the market (i.e. "competition") can solve
this problem. I disagree.
10 years ago (in French), I suggested we, in Québec, should
nationalize large telcos and internet service providers. I no longer
believe this is a realistic approach: most of those companies have crap
copper-based networks (at least for the last mile), yet are worth
billions of dollars. It would be prohibitive, and a waste, to buy them
out.
Back then, I called this idea "Réseau-Québec", a reference to the
already nationalized power company, Hydro-Québec. (This idea,
incidentally, made it into the plan of a political party.)
Now, I think we should instead build our own, public internet. Start
setting up municipal internet services, fiber to the home in all
cities, progressively. Then interconnect cities with fiber, and build
peering agreements with other providers. This also includes a bid on
wireless spectrum to start competing with phone providers as well.
And while that sounds really ambitious, I think it's possible to take
this one step at a time.
Municipal broadband
In many parts of the world, municipal broadband is an elegant
solution to the problem, with solutions ranging from Stockholm's
city-owned fiber network (dark fiber, layer 1) to Utah's
UTOPIA network (fiber to the premises, layer 2) and
municipal wireless networks like Guifi.net which connects
about 40,000 nodes in Catalonia.
A good first step would be for cities to start providing broadband
services to their residents, directly. Cities normally own sewage and
water systems that interconnect most residences and therefore have
direct physical access everywhere. In Montréal, in particular,
there is an ongoing project to replace a lot of old lead-based
plumbing which would give an
opportunity to lay down a wired fiber network across the city.
This is a wild guess, but I suspect this would be much less
expensive than one would think. Some people agree with me and quote
this as low as 1000$ per household. There are about 800,000
households in the city of Montréal, so we're talking about an 800
million dollar investment here, to connect every household in
Montréal with fiber and, incidentally, a quarter of the province's
population. And this is not an up-front cost: this can be built
progressively, with expenses amortized over many years.
(We should not, however, connect Montréal first: it's used as an
example here because it's a large number of households to connect.)
Such a network should be built with a redundant topology.
I leave it as an open question whether we
should adopt Stockholm's more minimalist approach or provide direct IP
connectivity. I would tend to favor the latter, because then you can
immediately start to offer the service to households and generate
revenues to compensate for the capital expenditures.
Given the ridiculous profit margins telcos currently have (8 billion
$CAD net income for BCE in 2019, 2 billion $CAD for Rogers in
2020), I also believe this would actually turn into a
profitable revenue stream for the city, the same way Hydro-Québec is
more and more considered as a revenue stream for the state. (I
personally believe that's actually wrong and we should treat those
resources as human rights and not cash cows, but I digress. The
point is: this is not a cost point, it's a revenue.)
The other major challenge here is that the city will need competent
engineers to drive this project forward. But this is not different
from the way other public utilities run: we have electrical engineers
at Hydro, sewer and water engineers at the city, this is just another
profession. If anything, the computing science sector might be more at
fault than the city here in its failure to provide competent and
accountable engineers to society...
Right now, most of the network in Canada is copper: we are hitting the
limits of that technology with DSL, and while cable has some life
left to it (DOCSIS 4.0 does 4Gbps), that is nowhere near the
capacity of fiber. Take the town of Chattanooga, Tennessee: in
2010, the city-owned ISP EPB finished deploying a fiber network to
the entire town and provided gigabit internet to everyone. Now, 12
years later, they are using this same network to provide the
mind-boggling speed of 25 gigabit to the home. To give you an
idea, Chattanooga is roughly the size and density of Sherbrooke.
Provincial public internet
As part of building a municipal network, the question of getting
access to "the internet" will immediately come up. Naturally, this
will first be solved by using already existing commercial providers to
hook up residents to the rest of the global network.
But eventually, networks should inter-connect: Montréal should connect
with Laval, and then Trois-Rivières, then Québec
City. This will require long haul fiber runs, but those links are
not actually that expensive, and many of those already exist as a
public resource at RISQ and CANARIE, which cross-connects
universities and colleges across the province and the
country. Those networks might not have the capacity to cover the
needs of the entire province right now, but that is a router upgrade
away, thanks to the amazing capacity of fiber.
There are two crucial mistakes to avoid at this point. First, the
network needs to remain decentralised. Long haul links should be IP
links with BGP sessions, and each city (or MRC) should have its
own independent network, to avoid Rogers-class catastrophic failures.
Second, skill needs to remain in-house: RISQ has already made that
mistake, to a certain extent, by selling its neutral datacenter.
Tellingly, MetroOptic, probably the largest commercial
dark fiber provider in the province, now operates the QIX, the second
largest "public" internet exchange in Canada.
Still, we have a lot of infrastructure we can leverage here. If RISQ
or CANARIE cannot be up to the task, Hydro-Québec has power lines
running into every house in the province, with high voltage power lines
running hundreds of kilometers far north. The logistics of long
distance maintenance are already solved by that institution.
In fact, Hydro already has fiber all over the province, but it is
a private network, separate from the internet for security reasons
(and that should probably remain so). But this only shows they already
have the expertise to lay down fiber: they would just need to lay down
a parallel network to the existing one.
In that architecture, Hydro would be a "dark fiber" provider.
International public internet
None of the above solves the problem for
the entire population of Québec, which is notoriously dispersed, with
an area three times the size of France but only an eighth of its
population (8 million vs 67). More specifically, Canada was originally
a French colony, a land violently stolen from native people who
have lived here for thousands of years. Some of those people now live
in reservations, sometimes far from urban centers (but definitely
not always). So the idea of leveraging the Hydro-Québec
infrastructure doesn't always work to solve this, because while Hydro
will happily flood a traditional hunting territory for an electric
dam, they don't bother running power lines to the village they
forcibly moved, powering it instead with noisy and polluting diesel
generators. So before giving me fiber to the home, we should give
power (and potable water, for that matter), to those communities
first.
So we need to discuss international connectivity. (How else could we
consider those communities than as peer nations anyways?) Québec has
virtually zero international links. Even in Montréal, which likes to
style itself a major player in gaming, AI, and technology, most
peering goes through either Toronto or New York.
That's a problem that we must fix,
regardless of the other problems stated here.
Looking at the submarine cable map, we
see very few international links actually landing in Canada. There is
the Greenland connect which connects Newfoundland to Iceland
through Greenland. There's the EXA which lands in Ireland, the UK
and the US, and Google has the Topaz link on the west
coast. That's about it, and none of those land anywhere near any major
urban center in Québec.
We should have a cable running from France up to
Saint-Félicien. There should be a cable from Vancouver to
China. Heck, there should be a fiber cable running all the way
from the end of the Great Lakes through Québec, then up around the
northern passage and back down to British Columbia. Those cables are
expensive, and the idea might sound ludicrous, but Russia is actually
planning such a project for 2026. The US has cables running all the
way up (and around!) Alaska, neatly bypassing all of Canada in the
process. We just look ridiculous on that map.
(Addendum: I somehow forgot to talk about Teleglobe here. It was
founded as a publicly owned company in 1950, growing international phone
and (later) data links all over the world. It was privatized by the
conservatives in 1984, along with rails and other "crown
corporations". So that's one major risk to any effort to make public
utilities work properly: some government might be elected and promptly
sell it out to its friends for peanuts.)
Wireless networks
I know most people will have rolled their eyes so far back their heads
have exploded. But I'm not done yet. I want wireless too. And by wireless, I
don't mean a bunch of geeks setting up OpenWRT routers on rooftops. I
tried that, and while it was fun and educational, it didn't scale.
A public networking utility wouldn't be complete without providing
cellular phone service. This involves bidding for frequencies at
the federal level, and deploying a rather large amount of
infrastructure, but it could be a later phase, when the engineers and
politicians have proven their worth.
At least part of the Rogers fiasco would have been averted if such a
decentralized network backend existed. One might even want to argue that a separate
institution should be set up to provide phone services, independently
from the regular wired networking, if only for reliability.
Because remember here: the problem we're trying to solve is not just
technical, it's about political boundaries, centralisation, and
automation. If everything is run by this one organisation again, we
will have failed.
However, I must admit that phone services are where my ideas fall a
little short. I can't help but think it's also an accessible goal
(maybe starting with a virtual operator), but it seems slightly
less so than the others, especially considering how closed the phone
ecosystem is.
Counter points
In debating these ideas while writing this article, the following
objections came up.
I don't want the state to control my internet
One legitimate concern I have about the idea of the state running the
internet is the potential it would have to censor or control the
content running over the wires.
But I don't think there is necessarily a direct relationship between
resource ownership and control of content. Sure, China has strong
censorship in place, partly implemented through state-controlled
businesses. But Russia also has strong censorship in place, based on
regulatory tools: they force private service providers to install
back-doors in their networks to control content and surveil their
users.
Besides, the USA have been doing warrantless wiretapping since
at least 2003 (and yes, that's 10 years before the Snowden
revelations) so a commercial internet is no assurance that we have
a free internet. Quite the contrary in fact: if anything, the
commercial internet goes hand in hand with the neo-colonial
internet, just like businesses
did in the "good old colonial days".
Large media companies are the primary
censors of content here. In Canada, the media cartel requested the
first site-blocking order in 2018. The plaintiffs (including
Québecor, Rogers, and Bell Canada) are both content providers and
internet service providers, an obvious conflict of interest.
Nevertheless, there are some strong arguments against having a
centralised, state-owned monopoly on internet service providers. FDN
makes a good point on this. But this is not what I am suggesting:
at the provincial level, the network would be purely physical, and
regional entities (which could include private companies) would peer
over that physical network, ensuring decentralization. Delegating the
management of that infrastructure to an independent non-profit or
cooperative (but owned by the state) would also ensure some level of
independence.
Isn't the government incompetent and corrupt?
Also known as "private enterprise is better skilled at handling this,
the state can't do anything right"
I don't think this is a "fait accompli". If anything, I have found
publicly run utilities to be spectacularly reliable here. I rarely
have trouble with sewage, water, or power, and keep in mind I live in
a city where we receive about 2 meters of snow a year, which tends
to create lots of trouble with power lines. Unless there's a major
weather event, power just runs here.
I think the same can happen with an internet service provider. But it
would certainly need to have higher standards than what we're used to,
because frankly the Internet is kind of janky.
A single monopoly will be less reliable
I actually agree with that, but that is not what I am
proposing anyways.
Current commercial or non-profit entities will be free to offer their services on
top of the public network.
And besides, the current "ha! diversity is
great" approach is exactly what we have now, and it's not working.
The pretense that we can have competition over a single network is
what led the US into the ridiculous situation where they also pretend
to have competition over the power utility market. This led to
massive forest fires in California and major power outages in
Texas. It doesn't work.
Wouldn't this create an isolated network?
One theory is that this new network would be so hostile to incumbent
telcos and ISPs that they would simply refuse to network with the
public utility. And while it is true that the telcos currently do
also act as a kind of "tier one" provider in some places, I
strongly feel this is also a problem that needs to be solved,
regardless of ownership of networking infrastructure.
Right now, telcos often hold both ends of the stick: they are the
gateway to users, the "last mile", but they also provide peering to
the larger internet in some locations. In at least one datacenter in
downtown Montr al, I've seen traffic go through Bell Canada that was
not directly targeted at Bell customers. So in effect, they are in a
position of charging twice for the same traffic, and that's not only
ridiculous, it should just be plain illegal.
And besides, this is not a big problem: there are other providers
out there. As bad as the market is in Québec, there is still some
diversity in Tier one providers that could allow for some exits to the
wider network (e.g. yes, Cogent is here too).
What about Google and Facebook?
Nationalization of other service
providers like Google and Facebook is out of scope of this discussion.
That said, I am not sure the state should get into the business of
organising the web or providing content services, but I will
point out that it already does some of that through its own
websites. It should probably keep itself to this, and also
consider providing normal services for people who don't or can't
access the internet.
(And I would also be ready to argue that Google and Facebook already
act as extensions of the state: certainly if Facebook didn't exist,
the CIA or the NSA would like to create it at this point. And Google
has lucrative business with the US department of defense.)
What does not work
So we've seen one thing that could work. Maybe it's too
expensive. Maybe the political will isn't there. Maybe it will
fail. We don't know yet.
But we know what does not work, and it's what we've been doing ever
since the internet has gone commercial.
Legal pressure and regulation
In 1984 (of all years), the US Department of Justice finally
broke AT&T up into half a dozen corporations, after a 10-year legal
battle. Yet a few decades later, we're back to only three large
providers doing essentially what AT&T was doing back then, and those
are regional monopolies: AT&T, Verizon, and Lumen (not counting
T-Mobile that is from a different breed). So the legal approach
really didn't work that well, especially considering the political
landscape changed in the US, and the FTC seems perfectly happy to let
those major mergers continue.
In Canada, we never even pretended we would solve this problem at all:
Bell Canada (the literal "father" of AT&T) is in the same
situation now. We have either a regional monopoly (e.g. Vidéotron for
cable in Québec) or an oligopoly (Bell, Rogers, and Telus controlling
more than 90% of the market). Telus does have one competitor in the
west of Canada, Shaw, but Rogers has been trying to buy it
out. The competition bureau seems to have blocked the merger
for now, but it didn't stop other recent mergers like Bell's
acquisition of one of its main competitors in Québec, eBox.
Regulation doesn't seem capable of ensuring those profitable
corporations provide us with decent pricing, which makes Canada one
of the most expensive countries (research) for mobile data on
the planet. The recent failure of the CRTC to properly protect smaller
providers has even led to price hikes. Meanwhile the oligopoly
is actually agreeing on their own price hikes, therefore becoming
a real cartel, complete with price fixing and reductions in
output.
There are actually regulations in Canada supposed to keep the worst of
the Rogers outage from happening at all. According to CBC:
Under Canadian Radio-television and Telecommunications Commission
(CRTC) rules in place since 2017, telecom networks are supposed to
ensure that cellphones are able to contact 911 even if they do not
have service.
I could personally confirm that my phone couldn't reach 911 services,
because all calls would fail: the problem was that towers were still
up, so your phone wouldn't fall back to alternative service providers
(which could have resolved the issue). I can only speculate as to why
Rogers didn't take cell phone towers out of the network to let phones
work properly for 911 service, but it seems like a dangerous game to
play.
Hilariously, the CRTC itself didn't have a reliable phone service due
to the service outage:
Please note that our phone lines are affected by the Rogers network outage.
Our website is still available: https://crtc.gc.ca/eng/contact/
https://mobile.twitter.com/CRTCeng/status/1545421218534359041
I wonder if they will file a complaint against Rogers themselves about
this. I probably should.
It seems the federal government is thinking that more of the same medicine
will fix the problem and has told companies they should "help" each other
in an emergency. I doubt this will fix anything, and could
actually make things worse if the competitors actually interoperate
more, as it could cause multi-provider, cascading failures.
Subsidies
The absurd price we pay for data does not actually mean everyone gets
high speed internet at home. Large swathes of the Québec countryside
don't get broadband at all, and it can be difficult or expensive, even
in large urban centers like Montréal, to get high speed internet.
That is despite having a series of subsidies that all avoided
investing in our own infrastructure. We had the "fonds de l'autoroute
de l'information", "information highway fund" (site dead since
2003, archive.org link) and "branchez les familles",
"connecting families" (site dead since 2003, archive.org
link)
which subsidized the development of a copper network. In 2014, more of
the same: the federal government poured hundreds of millions of
dollars into a program called connecting Canadians to connect 280
000 households to "high speed internet". And now, the federal and
provincial governments are proudly announcing that "everyone is now
connected to high speed internet", after pouring more than 1.1
billion dollars to connect, guess what, another 380 000 homes, right
in time for the provincial election.
Of course, technically, the deadline won't actually be met
until 2023. Québec is a big area to cover, and you can guess what
happens next: the telcos threw up their hands and said some areas just
can't be connected. (Or they connect their CEO but not the poor folks
across the lake.) The story then takes the predictable twist of
giving more money out to billionaires, now subsidizing Musk's
Starlink system to connect those remote areas.
To give a concrete example: a friend who lives about 1000km away from
Montréal, 4km from a small village of 2500 inhabitants, has recently
got symmetric 100 mbps fiber at home from Telus, thanks to those subsidies. But I can't get that
service in Montréal at all, presumably because Telus and Bell colluded
to split that market. Bell doesn't provide me with such a service
either: they tell me they have "fiber to my neighborhood", and only
offer me a 25/10 mbps ADSL service. (There is Vidéotron offering
400mbps, but that's copper cable, again a dead technology, and
asymmetric.)
Conclusion
Remember Chattanooga? Back in 2010, they funded the development of a
fiber network, and now they have deployed a network roughly a
thousand times faster than what we have just funded with a billion
dollars. In 2010, I was paying Bell Canada 60$/mth for 20mbps and a
125GB cap, and now, I'm still (indirectly)
paying Bell for roughly the same speed (25mbps). Back then, Bell was
throttling their competitors' networks until 2009, when they were
forced by the CRTC to stop
throttling. Both
Bell and Vidéotron still explicitly forbid you from running your own servers
at home, and Vidéotron charges prohibitive prices which make it near
impossible for resellers to sell uncapped services. Those companies
are not spurring innovation: they are blocking it.
We have spent all this money for the private sector to build
us a private internet, over decades, without any assurance of quality,
equity or reliability. And while in some locations, ISPs did deploy
fiber to the home, they certainly didn't upgrade their entire network
to follow suit, much less allowed resellers to compete on that
network.
In 10 years, when 100mbps will be laughable, I bet those service
providers will again punt the ball in the public courtyard and tell us
they don't have the money to upgrade everyone's equipment.
We got screwed. It's time to try something new.
Updates
There was a discussion about this article on Hacker News which
was surprisingly productive. Trigger warning: Hacker News is kind of
right-wing, in case you didn't know.
Since this article was written, at least two more major acquisitions
happened, just in Québec:
In the latter case, vMedia was explicitly saying it couldn't grow
because of "lack of access to capital". So basically, we have given
those companies a billion dollars, and they are now using that very
money to buy out their competition. At least we could have given
that money to small players to even out the playing field. But this is
not how that works at all. Also, in a bizarre twist, an "analyst"
believes the acquisition is likely to help Rogers acquire Shaw.
Also, since this article was written, the Washington Post published a
review of a book bringing similar ideas: Internet for the People
The Fight for Our Digital Future, by Ben Tarnoff, at Verso
books. It's short, but even more ambitious than what I am suggesting
in this article, arguing that all big tech companies should be broken
up and better regulated:
He pulls from Ethan Zuckerman's idea of a web that is "plural in
purpose": that just as pool halls, libraries and churches each have
different norms, purposes and designs, so too should different
places on the internet. To achieve this, Tarnoff wants governments
to pass laws that would make the big platforms unprofitable and, in
their place, fund small-scale, local experiments in social media
design. Instead of having platforms ruled by engagement-maximizing
algorithms, Tarnoff imagines public platforms run by local
librarians that include content from public media.
(Links mine: the Washington Post obviously prefers to not link to the
real web, and instead doesn't link to Zuckerman's site at all and
suggests Amazon for the book, in a cynical example.)
And in another example of how the private sector has failed us, there
was recently a fluke in the AMBER alert system where the entire
province was warned about a loose shooter in Saint-Elzéar, except
the people in the town, because they have spotty cell phone
coverage. In other words, millions of people received a strongly
worded, "life-threatening" alert for a town sometimes hours away,
except the people most vulnerable to the alert. Not missing a beat,
the CAQ party is promising more of the same medicine again and giving
more money to telcos to fix the problem, suggesting spending
three billion dollars on private infrastructure.
The last day
The first lesson I would like everybody to take away is to buy two machines, especially a machine to check blood pressure. I had actually ordered one from Amazon but they never delivered it. I hope to sue them in consumer court in due course of time. The other is a blood sugar machine, which I ordered and did get, but the former is more important than the latter, and the reason why will become clear soon.
Mum had stopped eating solids and was entirely on liquids for the last month of her life. I did try enticing her however I could with aromatic food, but failed. Add to that, we had weird weather this entire year. June is supposed to be when the weather turns and we have gentle showers, but this whole June it felt like we were in an oven. She asked for liquids whenever she wanted, and although I hated that she was not eating solids, at least she was having liquids (juices and whatnot), and that's how I pacified myself. I had been repeatedly told by family and extended family to get a full-time nurse, but she objected time and again, and I had to side with her.
Then July 1st came around and part of the extended family also came, and they impressed on both me and her to get a nurse, so finally I was able to get her a nurse. I was also being pulled in various directions (outside my stuff, mumma's stuff) and doing whatever she needed in terms of supplies. On July 4th, I think she had low blood pressure, but without a machine one cannot know. At least that's what I know. If somebody knows anything better, please share; who knows, it may save lives. I don't have a blood pressure monitor even to date.
There used to be 5-6 doctors in our locality before the pandemic, but because of the pandemic and whatever other reasons, almost all doctors had given up making house calls. And the house where I live is a 100-year-old house, so it has narrow passageways and we have no lift. So taking her in and out is a challenge and an ordeal, and something that is not easily done. I had some more work to do, so I asked the nurse to stay a bit beyond 8 p.m. I came back and the nurse left for the day. That day I had been distracted for a number of reasons which I don't remember now, but at that point in time doing that work seemed important. I called out to her but she didn't respond. I remember the night before she had been agitated while sleeping; I slept nearby and kept an eye on her. I had called her a few times to ask whether she needed something, but she didn't respond (this was the earlier night). That evening it was raining quite a bit. I called her a few times but she didn't speak. I kissed her on the cheek and realized she was cold. Mumma usually becomes very agitated if she feels cold and shouts at me. I realized she was cold and her body a bit stiff. I was supposed to eat but just couldn't. I don't know what I suspected; I just hired a rickshaw and went around till 9 p.m. in a fruitless search for a doctor. I returned home and again called her, but there was no response.
Because she was not responding, I became fearful, had a shat, and then dialed the hospital, asking for an ambulance. It took about an hour, but finally the ambulance came. It was 11 o'clock, or 2300 hrs., when it arrived. It took another half an hour to get a few kids, who had come back from some movie or something, to help get mum down through the passage to the ambulance. We finally reached the hospital at 2330. The people in casualty that day were known to me, and they also knew about my hearing problem, so it was much easier to communicate. Half an hour later, they pronounced her dead.
Fortunately or not, I had bought a new mobile phone just a few days back. And right now, in India, WhatsApp is one of the most used apps. So I was able to chat with everybody and tell them what was happening, or rather what had happened. Of all of them, mamaji (mother's brother) shared that most members of the family would not be able to come, except a cousin sister who lives in Mumbai. I was instructed to get the body refrigerated for a few hours. It is only then that I came to know the various ways in which a body is refrigerated, and how cruel it would have been towards Atal Bihari Vajpayee's family, but that is politics. I had to go to quite a few places and was back home around 3 a.m. I was supposed to sleep, but sleep was out of the question. I whiled away a few hours playing, seeing movies, something or the other to keep myself distracted, as literally I had no idea what to do. Morning came, I took a bath, went outside, had some snacks, came home and somewhere in there slept. One of my Mausis (mother's sisters) was insisting on getting the body burnt in the morning itself, but I wanted at least one relative to be there for the last journey. My cousin sister and her husband came to Pune around 4 p.m. I somehow woke up, to the ringing, the vibration, I do not know what. I took a short bath, rushed to the place where we had kept the body, got the body, and went from there to where we had asked permission to get the body burned. More than anything else, I felt so sad that except for my cousin sister and me, nobody was with her on the last journey. Even that day it was raining hard, so people avoided going out. My brother-in-law tried to give me some money, but I brushed it off. I just wanted their company; money is and was never the criterion. So in the evening we had a meal, my cousin sister, brother-in-law, their two daughters and me. The next day we took the bones and ash to Alandi and did what was needed.
I have tried to reconstruct that day so many times in my head, trying to figure out what I could have done better, and am inconclusive. Having a blood pressure monitor for sure would have prevented the tragedy, or at least postponed it for a few more days, weeks, years, I don't know. I am not medically inclined.
The Books
I have to confess, at the time they said she is no more, I was hoping that the doctors would say: we have a pill, would you like to take it, it would reunite you with mum. Maybe it was crazy or whatever, but if such an option had existed, I would have easily gone for it. If I were to go, some people might miss me, but nobody would miss me terribly, and at least I would be with her. There was nothing to look forward to. What saved me from going mad was Michael Crichton's Timeline. It is a fascinating and seductive book. I had actually read it years ago but had forgotten. For so many days and nights I was able to sleep hoping that quantum teleportation can be achieved. Anybody in my place would be easily enticed. What joy would it be if I were to meet mum once again. I could tell my other dumb self what to do so she lives for a few more years. I could talk to her, just be with her for some time. It is a powerful and seductive idea. I can see so many cults and whatnot that could be formed around it; there may already be, who knows.
Another good book that has helped me to date has been Through The Rings Of Fire (Hardcover, J. D. Benedict Thyagarajan). It is the autobiography of Venkat Chalasany (the story of an orphan boy who became a successful builder in Pune, and the setbacks he had). While the author has very strong, and I sometimes feel very naive, views about things, I was taken on a ride through my own city as it was in the 1970s and 1980s. I could very well imagine all the different places and people as if they were happening right now. While I have finished the main story, there is still a bit left to read, and I read 5-10 minutes every day as it's like a sweet morsel, like somebody sharing a tale passed down without me having to make an effort. And no lies, the author has been pretty upfront about where he has exaggerated, told lies, or simply made up stuff.
I was thinking of adding something about movies and some more info or impressions about Android, but it seems that will have to wait. I do hope this works for somebody; even if a single life can be saved from what I shared above, my job is done.
Had some time to add more features to my libvirt backup utility; it now
supports:
Added backup mode differential.
Save virtual domains' UEFI bios, initrd and kernel images if defined.
virtnbdmap now uses the nbdkit COW plugin to map the backups as a regular
NBD device. This allows users to replay complete backup chains
(full+inc/diff) to recover single files. As the resulting device is
writable, one can directly boot the virtual machine from the backup
images.
Check out my last article on that
topic or watch it in action.
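As a rough sketch of how a backup chain and the mapping feature fit together (the command names are the project's own, but the exact flags and file names shown here are from memory and may differ; check the utilities' --help output):

    # full backup of domain vm1, then a later incremental into the same target
    virtnbdbackup -d vm1 -l full -o /backup/vm1
    virtnbdbackup -d vm1 -l inc  -o /backup/vm1

    # map the resulting backup chain to an NBD device so single files can be
    # recovered, or the VM booted directly from it (writable via the COW plugin)
    virtnbdmap -f /backup/vm1/sda.full.data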
Also, the dirty bitmap (incremental backup) feature now seems to be enabled by
default as of newer qemu and libvirt (8.2.x) versions.
As a side note: there is still an RFP open, if anyone is interested in maintaining
the package, as I find myself not having a valid key in the keyring... laziness.
Debian continues participating in Outreachy, and we're excited to announce that
Debian has selected two interns for the Outreachy May 2022 - August 2022 round.
Israel Galadima and Michael Ikwuegbu
will work on
Improve yarn package manager integration with Debian,
mentored by Akshay S Dinesh and Pirate Praveen.
Congratulations and welcome to Israel Galadima and Michael Ikwuegbu!
From the official website: Outreachy provides
three-month internships for people from groups traditionally underrepresented
in tech. Interns work remotely with mentors from Free and Open Source Software
(FOSS) communities on projects ranging from programming, user experience,
documentation, illustration and graphical design, to data science.
The Outreachy programme is possible in Debian thanks to the efforts of Debian
developers and contributors who dedicate their free time to mentor students and
outreach tasks, and the
Software Freedom Conservancy's administrative
support, as well as the continued support of Debian's donors, who provide
funding for the internships.
Join us and help extend Debian! You can follow the work of the Outreachy
interns by reading their blogs (they are syndicated in Planet Debian),
and chat with us in the #debian-outreach IRC channel and
mailing list.
One month ago I started work on a new side project which is now up and running, and deserving of an introductory blog post: r2u. It was announced in two earlier tweets (first, second) which contained the two (wicked) demos below, also found at the documentation site.
So what is this about? It brings full and complete CRAN installability to Ubuntu LTS, both the focal release 20.04 and the recent jammy release 22.04. It is unique in resolving all R and CRAN packages with the system package manager. So whenever you install something, it is guaranteed to run, as its dependencies are resolved and co-installed as needed. Equally important, no shared library will be updated or removed by the system, as the possible dependency of the R package is known and declared. No other package management system for R does that, as only apt on Debian or Ubuntu can, and this project integrates all CRAN packages (plus 200+ BioConductor packages). It will work with any Ubuntu installation on laptop, desktop, server, cloud, container, or in WSL2 (but is limited to Intel/AMD chips, sorry Raspberry Pi or M1 laptop). It covers all of CRAN (or nearly 19k packages), all the BioConductor packages depended upon (currently over 200), and only excludes less than a handful of CRAN packages that cannot be built.
Usage
Setup instructions are described concisely in the repo README.md and at the documentation site. The setup consists of just five (or fewer) simple steps, and scripts are provided too for focal (20.04) and jammy (22.04).
Demos
Check out these two demos (also at the r2u site):
Installing the full tidyverse in one command and 18 seconds
Installing brms and its dependencies in one command and 13 seconds (also showing gitpod.io)
Integration via bspm
The r2u setup can be used directly with apt (or dpkg or any other frontend to the package management system). Once installed, apt update; apt upgrade will take care of new packages. For this to work, all CRAN packages (and all BioConductor packages depended upon) are mapped to names like r-cran-rcpp and r-bioc-s4vectors: an r prefix, the repo, and the package name, all lower-cased. That works, but thanks to the wonderful bspm package by Iñaki Úcar we can do much better. It connects R's own install.packages() and update.packages() to apt. So we can just say (as the demos above show) install.packages("tidyverse") or install.packages("brms") and binaries are installed via apt, which is fantastic: it connects R to the system package manager. The setup is really only two lines and described at the r2u site as part of the setup.
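As a concrete (illustrative) example, once the r2u repository and bspm are configured as described at the r2u site, the following two commands end up installing the very same apt binaries; the package name simply follows the r-cran-* mapping described above:
sudo apt install --yes r-cran-tidyverse            # directly via apt
Rscript -e 'install.packages("tidyverse")'         # via bspm, which routes install.packages() to apt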
History and Motivation
Turning CRAN packages into .deb binaries is not a new idea. Albrecht Gebhardt was the first to realize this about twenty years ago (!!) and implemented it with a single Perl script. Next, Albrecht, Stefan Moeller, David Vernazobres and I built on top of this, which is described in this useR! 2007 paper. A most excellent generalization and rewrite was provided by Charles Blundell in a superb Google Summer of Code contribution in 2008 which I mentored. Charles and I described it in this talk at useR! 2009. I ran that setup for a while afterwards, but it died via an internal database corruption in 2010, right when I tried to demo it at CRAN headquarters in Vienna. This peaked at, if memory serves, about 5k packages: all of CRAN at the time. Don Armstrong took it one step further in a full reimplementation which, if I recall correctly, covered all of CRAN and BioConductor for what may have been 8k or 9k packages. Don had a stronger system (with full RAID-5) but it also died in a crash and was never rebuilt, even though he and I could have relied on Debian resources (as all these approaches focused on Debian). During that time, Michael Rutter created a variant that cleverly used an Ubuntu-only setup utilizing Launchpad. This repo is still going strong, used and relied upon by many, and about 5k packages (per distribution) strong. At one point, a group consisting of Don, Michael, Gábor Csárdi and myself (as lead/PI) had financial support from the RConsortium ISC for a more general re-implementation, but that support was withdrawn when we did not have time to deliver.
We should also note other long-standing approaches. Detlef Steuer has been using the openSUSE Build Service to provide nearly all of CRAN for openSUSE for many years. Iñaki Úcar built a similar system for Fedora, described in this blog post. Iñaki and I also have an arXiv paper describing all this.
Details
Please see the r2u site for all details on using r2u.
Acknowledgements
The help of everybody who has worked on this is greatly appreciated. So a huge Thank you! to Albrecht, David, Stefan, Charles, Don, Michael, Detlef, Gábor, Iñaki and whoever I may have omitted. Similarly, thanks to everybody working on R, CRAN, Debian, or Ubuntu; it all makes for a superb system. And another big Thank you! goes to my GitHub sponsors whose continued support is greatly appreciated.
A new release 0.4.19 of RProtoBuf arrived on CRAN earlier today. RProtoBuf provides R with bindings for the Google Protocol Buffers (ProtoBuf) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol.
This release contains a pull request contribution by Michael Chirico to add support for the TextFormat API, a minor maintenance fix ensuring (standard) strings are referenced as std::string to avoid a hiccup on Arch builds, some repo updates, plus reporting of (package and library) versions on startup. The following section from the NEWS.Rd file has more details.
Changes in RProtoBuf version 0.4.19 (2022-05-06)
Small cleanups to repository
Raise minimum Protocol Buffers version to 3.3 (closes #83)
Update package version display, added to startup message
Expose TextFormat API (Michael Chirico in #88 closing #87)
Add missing explicit std:: on seven string instances in one file (closes #89)
The Freedom Phone advertises itself as a "Free speech and privacy first focused phone". As documented on the features page, it runs ClearOS, an Android-based OS produced by Clear United (or maybe one of the bewildering array of associated companies, we'll come back to that later). It's advertised as including Signal, but what's shipped is not the version available from the Signal website or any official app store - instead it's this fork called "ClearSignal".
The first thing to note about ClearSignal is that the privacy policy link from that page 404s, which is not a great start. The second thing is that it has a version number of 5.8.14, which is strange because upstream went from 5.8.10 to 5.9.0. The third is that, despite Signal being GPL 3, there's no source code available. So, I grabbed jadx and started looking for differences between ClearSignal and the upstream 5.8.10 release. The results were, uh, surprising.
First up is that they seem to have integrated ACRA, a crash reporting framework. This feels a little odd - in the absence of a privacy policy, it's unclear what information this gathers or how it'll be stored. Having a piece of privacy software automatically uploading information about what you were doing in the event of a crash with no notification other than a toast that appears saying "Crash Report" feels a little dubious.
Next is that Signal (for fairly obvious reasons) warns you if your version is out of date and eventually refuses to work unless you upgrade. ClearSignal has dealt with this problem by, uh, simply removing that code. The MacOS version of the desktop app they provide for download seems to be derived from a release from last September, which for an Electron-based app feels like a pretty terrible idea. Weirdly, for Windows they link to an official binary release from February 2021, and for Linux they tell you how to use the upstream repo properly. I have no idea what's going on here.
They've also added support for network backups of your Signal data. This involves the backups being pushed to an S3 bucket using credentials that are statically available in the app. It's ok, though, each upload has some sort of nominally unique identifier associated with it, so it's not trivial to just download other people's backups. But, uh, where does this identifier come from? It turns out that Clear Center, another of the Clear family of companies, employs a bunch of people to work on a ClearID[1], some sort of decentralised something or other that seems to be based on KERI. There's an overview slide deck here which didn't really answer any of my questions and as far as I can tell this is entirely lacking any sort of peer review, but hey it's only the one thing that stops anyone on the internet being able to grab your Signal backups so how important can it be.
The final thing, though? They've extended Signal's invitation support to encourage users to get others to sign up for Clear United. There's an exposed API endpoint called "get_user_email_by_mobile_number" which does exactly what you'd expect - if you give it a registered phone number, it gives you back the associated email address. This requires no authentication. But it gets better! The API to generate a referral link to send to others sends the name and phone number of everyone in your phone's contact list. There does not appear to be any indication that this is going to happen.
So, from a privacy perspective, I'm going to go with things being some distance from ideal. But what's going on with all these Clear companies anyway? They all seem to be related to Michael Proper, who founded the Clear Foundation in 2009. They are, perhaps unsurprisingly, heavily invested in blockchain stuff, while Clear United also appears to be some sort of multi-level marketing scheme which has a membership agreement that includes the somewhat astonishing claim that:
Specifically, the initial focus of the Association will provide members with supplements and technologies for:
9a. Frequency Evaluation, Scans, Reports;
9b. Remote Frequency Health Tuning through Quantum Entanglement;
9c. General and Customized Frequency Optimizations;
- there's more discussion of this and other weirdness here. Clear Center, meanwhile, has a Chief Physics Officer? I have a lot of questions.
Anyway. We have a company that seems to be combining blockchain and MLM, has some opinions about Quantum Entanglement, bases the security of its platform on a set of novel cryptographic primitives that seem to have had no external review, has implemented an API that just hands out personal information without any authentication and an app that appears more than happy to upload all your contact details without telling you first, has failed to update this app to keep up with upstream security updates, and is violating the upstream license. If this is their idea of "privacy first", I really hate to think what their code looks like when privacy comes further down the list.
Raspberry Pi computers require a piece of non-free software to boot:
the infamous raspi-firmware package. But for almost as long as there
has been a Raspberry Pi to speak of (this year it turns 10 years old!),
there have been efforts to get it to boot using only free software.
How is that progressing?
Michael Bishop (IRC user clever) explained today in the
#debian-raspberrypi channel on OFTC that it is advancing far better than
I expected: it is even possible to boot a usable system under the
RPi2 family! There is just somewhat incomplete hardware support:
for his testing, he has managed to use an xfce environment, but only over
the composite (NTSC) video output, as HDMI initialization support is
not there yet.
However, he shared with me several interesting links and videos, and I
told him I'd share them. There are still many issues; I do not
believe it is currently worth it to make Debian images with this
firmware.
Before anything else: Go visit the
librerpi/lk-overlay
repository. Its README outlines hardware support for each of the RPi
families; there is a binary build available with
nixos if you want to try it
out, and instructions to build
it.
But what clever showed me that made me write this post is the
amount of stuff you can do with the RPi's VPU (why Vision Vector
Processing Unit and not the more familiar GPU, Graphics Processing
Unit? I don't really know, but I trust clever's definitions beyond
how I trust my own) before it loads an operating system:
There's not too much I can add to this. I was just truly
amazed. And I hope to see the remaining hurdles for regular Linux
booting on this range of machines with purely free software quickly go
away!
Packaging this for Debian? Well, not yet, not so fast. I first told
clever we could push this firmware to experimental instead of
unstable, as it is not yet ready for most production
systems. However, pabs asked some spot-on further
questions. And yes, it requires installing three(!) different
cross-compilers, one of which (vc4-toolchain, for the
VPU) is free software, but
not yet upstreamed, and hence not available in Debian.
Anyway, the talk continued long after I had to go. I have gone a bit
over the backlog, but I have to leave now, so that will be it for
this blog post.
Everyone knows that an application's exit code should change based on
the success, errors or maybe warnings that happened during execution.
Lately I came across some Python code that was structured the following way:
#!/usr/bin/python3
import sys
import logging

def warnme():
    # something bad happens
    logging.warning("warning")
    sys.exit(2)

def evil():
    # something evil happens
    logging.error("error")
    sys.exit(1)

def main():
    logging.basicConfig(
        level=logging.DEBUG,
    )
    [..]
The situation was a little bit more complicated: some functions in other
modules also exited the application, so sys.exit() calls were distributed
across lots of modules and files.
Exiting the application in some random function of another module is
something I don't consider nice coding style, because it makes it hard
to track down errors.
I expect:
exit code 0 on success
exit code 1 on errors
exit code 2 on warnings
warnings or errors shall be logged in the function where they actually
happen: the logging module will show the function name with a suitable
format option, which is nice for debugging.
one function that exits accordingly, preferably main()
How to do better?
As the application is using the logging module, we have a single point to
collect warnings and errors that might happen across all modules. This works
by passing a custom handler to the logging module which tracks emitted
messages.
Here's a small example:
#!/usr/bin/python3
import sys
import logging

class logCount(logging.Handler):
    class LogType:
        def __init__(self):
            self.warnings = 0
            self.errors = 0

    def __init__(self):
        super().__init__()
        self.count = self.LogType()

    def emit(self, record):
        if record.levelname == "WARNING":
            self.count.warnings += 1
        if record.levelname == "ERROR":
            self.count.errors += 1

def infome():
    logging.info("hello world")

def warnme():
    logging.warning("help, an warning")

def evil():
    logging.error("yikes")

def main():
    EXIT_WARNING = 2
    EXIT_ERROR = 1
    counter = logCount()
    logging.basicConfig(
        level=logging.DEBUG,
        handlers=[counter, logging.StreamHandler(sys.stderr)],
    )
    infome()
    warnme()
    evil()
    if counter.count.errors != 0:
        raise SystemExit(EXIT_ERROR)
    if counter.count.warnings != 0:
        raise SystemExit(EXIT_WARNING)

if __name__ == "__main__":
    main()
python3 count.py; echo $?
INFO:root:hello world
WARNING:root:help, an warning
ERROR:root:yikes
1
This also makes it easy to define things like:
hey, got 2 warnings, change the exit code to error?
got 3 warnings, but no strict option passed? Ignore those, exit with success!
fs.com s5850 and s8050 series switches have a secret mode which
lets you enter a regular shell from the switch CLI, like so:
hostname# start shell
Password:
The command and password are not documented by the manufacturer,
so I wondered whether it's possible to extract that password from
the firmware. After all: it's my device, and I want to have access
to all the features!
Download the latest firmware image for those switch types and let binwalk do
its magic:
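In rough terms that step looks like the following; the firmware file name is of course a placeholder:
binwalk -e s5850_firmware.bin    # scan the image and extract any embedded filesystems/archives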
TLDR: we are making progress on the Rust implementation of the GNU coreutils.
Well, it is an understatement to say my previous blog post interested many people. Many articles, blog posts and some podcasts talked about it! As we pushed coreutils 0.0.12 a few days ago and are getting closer to 10,000 stars on GitHub, it is now time to give an update!
This has brought a lot of new contributors to this project. Instead of 30 to 60 patches per month, we jumped to 400 to 472 patches every month. Similarly, we saw an increase in the number of contributors (20 to 50 per month, up from 3 to 8). Two new maintainers (Michael Debertol & Terts Diepraam) stepped in and have been doing a much better job than myself as reviewers now! As a silly metric, according to GitHub, we had 5,561 clones of the repository over the last 2 weeks!
The new contributors focused on:
Performance: some binaries are now significantly faster than GNU's (e.g. head, cut, etc.)
Adding missing binaries or options (see below)
Improving the testsuite: we grew the overall code coverage from 55% to 75% (in general, we consider an 80% code coverage on a project to be excellent).
Refactoring the code to simplify the maintenance. Examples:
Managing errors the same way in the various binaries (kudos to Jeffrey Finkelstein for the huge work)
Improving the GNU compatibility (thanks to Jan Verbeek, Jan Scheer, kimono-koans and many others)
Moving to clap 3. An upgrade by Terts which unblocks us on various problems.
...
Closing the gap with GNU
As far as I know, we are only missing stty (change and print terminal line settings) as a program.
Thanks to some heroes, basenc, pr, chcon and runcon have been implemented. For example, for the last two programs, Koutheir Attouchi wrote new crates to manage SELinux properly. This crate has also been used for some other utilities like cp, ls or id.
Leveraging the GNU testsuite to test this implementation
Because the GNU testsuite is excellent, we now have a proper CI using it to run the tests. It is pretty long on the GitHub Actions CI (almost two hours to run) but it is an amazing improvement to the way we work. It was joint work by a bunch of folks (James Robson, Roy Ivy III, etc). To achieve this, we also made it easier to run the GNU testsuite locally with the Rust implementation, and to ignore some tests or adjust some error messages (see build-gnu.sh and run-gnu-test.sh).
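Roughly, running it locally looks like this; the util/ paths are my assumption about the repository layout, so check the repository and the CI definition for the authoritative invocation:
git clone https://github.com/uutils/coreutils && cd coreutils
bash util/build-gnu.sh       # build uutils and the GNU testsuite against it
bash util/run-gnu-test.sh    # run the GNU tests against the Rust binaries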
Following a suggestion of Brian G, a colleague at Mozilla (he did the same for some major Firefox change), we are now collecting the history of fail/pass/error into a separate repository and generating a daily graph showing the evolution of regressions. At this date, with GNU coreutils 9.0, we have:
Total: 611 tests
Pass: 214
Skip: 84
Fail: 298
Error: 15
We are now automatically identifying new passing tests and regressions in the CI.
For example:
Warning: Congrats! The gnu test tests/chmod/c-option is now passing!
Warning: Congrats! The gnu test tests/chmod/silent is now passing!
Warning: Congrats! The gnu test tests/chmod/umask-x is now passing!
Error: GNU test failed: tests/du/long-from-unreadable. tests/du/long-from-unreadable is passing on 'master'. Maybe you have to rebase?
[...]
Warning: Changes from master: PASS +4 / FAIL +0 / ERROR -4 / SKIP +0
This is also beneficial to GNU as, by implementing some options, Michael Debertol noticed some incorrect behaviors (with sort and cat) or an uninitialized variable (with chmod).
Documentation
Every day, we are generating the user documentation and the internal coreutils documentation.
User documentation: https://uutils.github.io/coreutils-docs/user/ (examples: ls or cp)
The internal documentation can be seen at https://uutils.github.io/coreutils-docs/dev/uucore/. For example, the backup style is documented here: https://uutils.github.io/coreutils-docs/dev/uucore/backup_control/index.html
More?
Besides my work on Debian/Ubuntu, I have also noticed that more and more operating systems are starting to look at this:
In parallel, https://github.com/uutils/findutils/, a Rust drop-in replacement for find, is getting more attention lately! Here is the graph showing the evolution of the program using the BFS testsuite (much better than GNU's).
What is next?
stty needs to be implemented
Improve the GNU compatibility on key programs and reduce the gap
Investigate how to reduce the size of the binaries
Allow Debian and Ubuntu to switch by default without tricky manipulation
How to help?
I have been maintaining a list of good first bugs for newcomers in the repo!
Don't hesitate to contribute, it is much easier than it seems and a terrific way to learn Rust!
In my last article I showed how to use the
new features included in Debian Bullseye to easily create backups of your
libvirt managed domains.
A few years ago, when this topic first caught my interest, I also implemented a rather
small utility (POC) to create full and incremental backups from standalone qemu
processes: qmpbackup
The workflow for this is a little bit different from the approach I have taken
with virtnbdbackup.
While with libvirt managed virtual machines, the libvirt API provides all
necessary API calls to create backups, a running qemu process only provides the
QMP protocol socket to get things going.
Using the QMP protocol it's possible to create bitmaps for the attached disks
and make qemu push the contents of the bitmaps to a specified target directory.
As the bitmaps keep track of the changes on the attached block devices, you can
create incremental backups too.
The nice thing here is that the qemu process actually does all this by itself
and you don't have to care about which blocks are dirty, like you would have
to do with the pull-based approach.
So how does it work?
The utility requires you to start your qemu process with an active QMP socket
attached, like so:
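(A minimal sketch of such an invocation; the disk image name is a placeholder, the important part is the -qmp option pointing at the socket used below:)
qemu-system-x86_64 -hda /tmp/disk.qcow2 \
    -qmp unix:/tmp/socket,server,nowait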
Now you can easily make qemu push the latest data for a created bitmap
to a given target directory:
# qmpbackup --socket /tmp/socket backup --level full --target /tmp/backup/
[2022-01-27 19:41:33,819] INFO Version: 0.10
[2022-01-27 19:41:33,819] INFO Qemu version: [5.0.2] [Debian 1:5.2+dfsg-11+deb11u1]
[2022-01-27 19:41:33,825] INFO Guest Agent socket connected
[2022-01-27 19:41:33,825] INFO Trying to ping guest agent
[2022-01-27 19:41:38,827] WARNING Unable to reach Guest Agent: cant freeze file systems.
[2022-01-27 19:41:38,828] INFO Backup target directory: /tmp/backup/
[2022-01-27 19:41:38,828] INFO FULL Backup operation: "/tmp/backup//ide0-hd0/FULL-1643308898"
[2022-01-27 19:41:38,836] INFO Wrote Offset: 0% (0 of 2147483648)
[2022-01-27 19:41:39,838] INFO Wrote Offset: 25% (541065216 of 2147483648)
[2022-01-27 19:41:40,840] INFO Wrote Offset: 33% (701890560 of 2147483648)
[2022-01-27 19:41:41,841] INFO Wrote Offset: 40% (867041280 of 2147483648)
[2022-01-27 19:41:42,844] INFO Wrote Offset: 50% (1073741824 of 2147483648)
[2022-01-27 19:41:43,846] INFO Wrote Offset: 59% (1269760000 of 2147483648)
[2022-01-27 19:41:44,847] INFO Wrote Offset: 75% (1610612736 of 2147483648)
[2022-01-27 19:41:45,848] INFO Saved disk: [ide0-hd0]
The resulting directory now contains a full backup image of the disk attached.
From this point on, it's possible to create further incremental backups:
# qmpbackup --socket /tmp/socket backup --level inc --target /tmp/backup/
[2022-01-27 19:42:03,930] INFO Version: 0.10
[2022-01-27 19:42:03,931] INFO Qemu version: [5.0.2] [Debian 1:5.2+dfsg-11+deb11u1]
[2022-01-27 19:42:03,933] INFO Guest Agent socket connected
[2022-01-27 19:42:03,933] INFO Trying to ping guest agent
[2022-01-27 19:42:08,938] WARNING Unable to reach Guest Agent: cant freeze file systems.
[2022-01-27 19:42:08,939] INFO Backup target directory: /tmp/backup/
[2022-01-27 19:42:08,939] INFO INC Backup operation: "/tmp/backup//ide0-hd0/INC-1643308928"
[2022-01-27 19:42:08,953] INFO Wrote Offset: 0% (0 of 1835008)
[2022-01-27 19:42:09,957] INFO Saved disk: [ide0-hd0]
The target directory will now have multiple data backups:
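Something along these lines, reconstructed from the log output above (so treat the exact file names as illustrative):
# ls /tmp/backup/ide0-hd0/
FULL-1643308898  INC-1643308928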
Restoring the image
Using the qmprebase utility you can now rebase the images to the latest
state. The --dry-run option gives a good impression of which command sequences
are required; if one wants to rebase only to a specific incremental backup, that's
possible using the --until option.
# qmprebase rebase --dir /tmp/backup/ide0-hd0/ --dry-run
[2022-01-27 17:18:08,790] INFO Version: 0.10
[2022-01-27 17:18:08,790] INFO Dry run activated, not applying any changes
[2022-01-27 17:18:08,790] INFO qemu-img check /tmp/backup/ide0-hd0/INC-1643308928
[2022-01-27 17:18:08,791] INFO qemu-img rebase -b "/tmp/backup/ide0-hd0/FULL-1643308898" "/tmp/backup/ide0-hd0/INC-1643308928" -u
[2022-01-27 17:18:08,791] INFO qemu-img commit "/tmp/backup/ide0-hd0/INC-1643308928"
[2022-01-27 17:18:08,791] INFO Rollback of latest [FULL]<-[INC] chain complete, ignoring older chains
[2022-01-27 17:18:08,791] INFO Image files rollback successful.
Filesystem consistency
The backup utility also supports freezing and thawing the virtual machine's file
systems in case qemu is started with a guest agent socket and the guest agent is
reachable during the backup operation.
Check out the README for the full
feature set.
Uhm yeah, so this shirt didn't age well. :) Mainly to recall what happened, I'm once again revisiting my previous year (previous edition: 2020).
2021 was quite challenging overall. It started with four weeks of distance learning at school. Luckily, at least at school, things got back to "some kind of normal" afterwards. The lockdowns turned out to be an excellent opportunity for practising Geocaching though, and that's what I started to do with my family. It's a great way to grab some fresh air, get to know new areas, and spend time with family and friends; I plan to continue doing this. :)
We bought a family season ticket for Freibäder (open-air baths) in Graz; this turned out to be a great investment. I very much enjoyed the open-air swimming with family, as well as going for swimming laps on my own, and plan to do the same in 2022. Due to the lockdowns and the pandemic, the weekly Badminton sessions sadly didn't really take place, so I pushed towards the above-mentioned outdoor swimming and also some running; with my family we managed to do some cycling, inline skating and even practiced some boulder climbing.
For obvious reasons plenty of concerts I was looking forward to didn't take place. With my parents we at least managed to attend a concert performance of Puccini's Tosca with Jonas Kaufmann at Schloßbergbühne Kasematten/Graz, and with the kids we saw "Robin Hood" in Oper Graz and "Pippi Langstrumpf" at the Studiobühne of Oper Graz. The lack of concerts and rehearsals once again and still severely impacts my playing the drums, including at HTU BigBand Graz. :-/
Grml-wise we managed to publish release 2021.07, codename JauKerl. Debian-wise we got version 11 AKA bullseye released as the new stable release in August.
For 2021 I planned to, and also managed to, minimize buying (new) physical stuff, except for books and other reading material. Speaking of reading, 2021 was nice: I managed to finish more than 100 books (see Mein Lesejahr 2021), and I'd like to keep up the reading pace.
Now let's hope for better times in 2022!
A new version, now at 0.3.13, of the Rblpapi package just arrived at CRAN. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).
This is the thirteenth release since the package first appeared on CRAN in 2016. It comprises PRs from three different contributors (with special thanks once again to Michael Kerber), extends tests and documentation, and extends two function interfaces to control explicitly whether returned lists of length one should be simplified.
The list of changes follow below.
Add simplify argument (and option) to bdh and bds (Dirk in #354 closing #353, #351)
Improve documentation for bsearch (John in #357 closing #356)
Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc. should go to the issue tickets system at the GitHub repo.
If you like this or other open-source work I do, you can now sponsor me at GitHub.