Introduction
DebConf23, the 24th annual Debian Conference, was held in India in the city of Kochi, Kerala from the 3rd to the 17th of September, 2023. Ever since I got to know about it (which was more than a year ago), I was excited to attend DebConf in my home country. This was my second DebConf, as I attended one last year in Kosovo. I was very happy that I didn't need to apply for a visa to attend. This time I submitted two talks - one on Debian packaging for beginners and the other on ideas on sustainable solutions for self-hosting. I got a full bursary to attend the event (thanks a lot to Debian for that!) which is always helpful in covering the expenses, especially if the venue is a five star hotel :)
My friend Suresh - who is enthusiastic about Debian and free software - wanted to attend it too. When the registration started, I reminded him about applying. We landed in Kochi on the 28th of August 2023 during the festival of Onam. We celebrated Onam in Kochi, had a trip to Wayanad, and returned to Kochi. On the evening of the 3rd of September, we reached the venue - Four Points Hotel by Sheraton, at Infopark Kochi, Ernakulam, Kerala, India.
Suresh and me celebrating Onam in Kochi.
Hotel overview
The hotel had 14 floors, and featured a swimming pool and gym (these were included in our package). The hotel gave us elevator access for only our floor, along with public spaces like the reception, gym, swimming pool, and dining areas. The temperature inside the hotel was pretty cold and I had to buy a jacket to survive. Perhaps the hotel was in cahoots with winterwear companies? :)
Four Points Hotel by Sheraton was the venue of DebConf23. Credits: Bilal
Photo of the pool. Credits: Andreas Tille.
Meals
On the first day, Suresh and I had dinner at the eatery on the third floor. At the entrance, a member of the hotel staff asked us how many people we wanted a table for. I told her that it's just the two of us at the moment, but (as we are attending a conference) we might be joined by others. Regardless, they gave us a table for just two. Within a few minutes, we were joined by Alper from Turkey and urbec from Germany. So we shifted to a larger table, but then we were joined by even more people, so we were busy adding more chairs to our table. urbec had already been in Kerala for the past 5-6 days and was, on one hand, very happy with the quality and taste of bananas in Kerala and, on the other, rather afraid of the spicy food :)
Two days later, the lunch and dinner were shifted to the All Spice Restaurant on the 14th floor, but breakfast was still served at the eatery. Since the eatery (on the 3rd floor) had a greater variety of food than the other venue, this move made breakfast the best meal for me and many others. Many attendees from outside India were not accustomed to the spicy food. It is difficult for locals to help them, because what we consider mild can be spicy for others. It is not easy to satisfy everyone at the dining table, but I think the organizing team did a very good job in the food department. (That said, it didn't matter for me after a point, and you will know why.) The pappadam were really good, and I liked the rice labelled "Kerala rice". I actually brought that exact rice and pappadam home during my last trip to Kochi and everyone at my home liked it too (thanks to Abhijit PA). I also wished to eat all types of payasam from Kerala and this really happened (thanks to Sruthi who designed the menu). Every meal had a different variety of payasam and it was awesome, although I didn't like some of them, mostly because they were very sweet. Meals were later shifted to the ground floor (taking away the best breakfast option, which was the eatery).
This place served as the lunch and dinner venue, and later as the hacklab, during DebConf. Credits: Bilal
The excellent Swag Bag
The DebConf registration desk was at the second floor. We were given a very nice swag bag. They were available in multiple colors - grey, green, blue, red - and included an umbrella, a steel mug, a multiboot USB drive by Mostly Harmless, a thermal flask, a mug by Canonical, a paper coaster, and stickers. It rained almost every day in Kochi during our stay, so handing out an umbrella to every attendee was a good idea.
Picture of the awesome swag bag given at DebConf23.
A gift for Nattie
During breakfast one day, Nattie expressed the desire to buy a coffee filter. The next time I went to the market, I bought one for her as a gift. She seemed happy and was flattered to receive a gift from a young man :)
Being a mentor
There were many newbies who were eager to learn and contribute to Debian. So, I mentored whoever came to me and was interested in learning. I conducted a packaging workshop in the bootcamp, but could only cover how to set up the Debian Unstable environment, and had to leave out how to package (but I covered that in my talk). Carlos (Brazil) gave a keysigning session in the bootcamp. Praveen was also mentoring in the bootcamp. I helped people understand why we sign GPG keys and how to sign them. I planned to take a workshop on it but cancelled it later.
My talk
My Debian packaging talk was on the 10th of September, 2023. I had not prepared slides for it in advance - I thought that I could do it during the trip, but I didn't get the time, so I prepared them the day before the talk. Since it was mostly a tutorial, the slides did not need much preparation. My thanks to Suresh, who helped me with the slides and made it possible to complete them in such a short time frame.
My talk was well-received by the audience, going by their comments. I am glad that I could give an interesting presentation.
My presentation photo. Credits: Valessio
Visiting a saree shop
After my talk, Suresh, Alper, and I went with Anisa and Kristi - who are both from Albania, and have a never-ending fascination for Indian culture :) - to buy them sarees. We took autos to Kakkanad market and found a shop with a great variety of sarees. I was slightly familiar with the area around the hotel, as I had been there for a week. Indian women usually don't try on sarees while buying - they just select the design. But Anisa wanted to put one on and take a few photos as well. The shop staff did not have a trial saree for this purpose, so they took one from a mannequin. It took about an hour for the lady at the shop to help Anisa put on the saree, but you could tell that she was in heaven wearing it, and she bought it immediately :) Alper also bought a saree to take back to Turkey for his mother. Suresh and I wanted to buy a kurta to go with the mundu we already had, but we could not find anything to our liking.
Selfie with Anisa and Kristi.
Cheese and Wine Party
On the 11th of September we had the Cheese and Wine Party, a tradition of every DebConf. I brought Kaju Samosa and Nankhatai from home. Many attendees expressed their appreciation for the samosas. During the party, I was with Abhas and had a lot of fun. Abhas brought packets of paan and served them at the Cheese and Wine Party. We discussed interesting things and ate burgers. But due to the restrictive alcohol laws in the state, it was less fun compared to previous DebConfs - you could only drink alcohol served by the hotel in public places. If you bought your own alcohol, you could only drink it in private places (such as in your room, or a friend's room), but not in public places.
Me helping with the Cheese and Wine Party
Party at my room
Last year, Joenio (Brazilian) brought pastis from France, which I liked. He brought the same alcoholic drink this year too. So I invited him to my room after the Cheese and Wine party to have pastis. My idea was to have it with my roommate Suresh and Joenio. But then we permitted Joenio to bring as many people as he wanted, and he ended up bringing some ten people. Suddenly, the room was crowded. I was having a good time at the party, serving them the snacks given to me by Abhas. The news of an alcohol party in my room spread like wildfire. Soon there were so many people that the AC became ineffective and I found myself sweating.
I left the room and roamed around the hotel for some fresh air. I came back after about 1.5 hours - for the most part, I was sitting on the ground floor with TK Saurabh. And then I met Abraham near the gym (which was my last meeting with him). I came back to my room at around 2:30 AM. Nobody seemed to have realized that I was gone. They were thanking me for hosting such a good party. A lot of people left at that point and the remaining people were playing songs and dancing (everyone had been dancing all along!). I had no energy left to dance or join them. They left around 03:00 AM. But I am glad that people enjoyed partying in my room.
This picture was taken when there were few people in my room for the party.
Sadhya Thali
On the 12th of September, we had a sadhya thali for lunch. It is a vegetarian thali served on a banana leaf on the eve of Thiruvonam. It wasn't Thiruvonam on this day, but we got a special and filling lunch. The rasam and payasam were especially yummy.
Sadhya Thali: A vegetarian meal served on banana leaf. Payasam and rasam were especially yummy!
Sadhya thali being served at DebConf23. Credits: Bilal
Day trip
On the 13th of September, we had a day trip. I chose the houseboat day trip in Alleppey. Suresh chose the same, and we registered for it as soon as registration opened. This was the most sought-after day trip among the DebConf attendees - around 80 people registered for it.
Our bus was set to leave at 9 AM on the 13th of September. Suresh and I woke up at 8:40 and hurried to get to the bus in time. It took two hours to reach the place where we boarded the houseboat.
The houseboat experience was good. The trip featured some good scenery. I got to experience the renowned Kerala backwaters. We were served food on the boat. We also stopped at a place and had coconut water. By evening, we came back to the place where we had boarded the boat.
Group photo of our daytrip. Credits: Radhika Jhalani
A good friend lost
When we came back from the day trip, we received news that Abraham Raji had been involved in a fatal accident during a kayaking trip.
Abraham Raji was a very good friend of mine. In my Albania-Kosovo-Dubai trip last year, he was my roommate at our Tirana apartment. I roamed around in Dubai with him, and we had many discussions during DebConf22 Kosovo. He was the one who took the photo of me on my homepage. I also met him in MiniDebConf22 Palakkad and MiniDebConf23 Tamil Nadu, and went to his flat in Kochi this year in June.
We had many projects in common. He was a Free Software activist and was the designer of the DebConf23 logo, in addition to those for other Debian events in India.
A selfie in memory of Abraham.
We were all fairly shocked by the news. I was devastated. Food lost its taste, and it became difficult to sleep. That night, Anisa and Kristi cheered me up and gave me company. Thanks a lot to them.
The next day, Joenio also tried to console me. I thank him for doing a great job. I thank everyone who helped me in coping with the difficult situation.
On the next day (the 14th of September), the Debian project leader Jonathan Carter officially announced the news. The Debian project also mentioned it on their website.
Abraham was supposed to give a talk, but following the incident, all talks were cancelled for the day. The conference dinner was also cancelled.
As I write, 9 days have passed since his death, but even now I cannot come to terms with it.
Visiting Abraham s house
On the 15th of September, the conference ran two buses from the hotel to Abraham s house in Kottayam (2 hours ride). I hopped in the first bus and my mood was not very good. Evangelos (Germany) was sitting opposite me, and he began conversing with me. The distraction helped and I was back to normal for a while. Thanks to Evangelos as he supported me a lot on that trip. He was also very impressed by my use of the StreetComplete app which I was using to edit OpenStreetMap.
In two hours, we reached Abraham's house. I couldn't control myself and burst into tears. I went to see the body. I met his family (mother, father and sister), but I had nothing to say and I felt helpless. Owing to the loss of sleep and appetite over the past few days, I had no energy, and didn't think it was a good idea for me to stay there. I went back on the bus after an hour and had lunch at the hotel. I withdrew my talk scheduled for the 16th of September.
A Japanese gift
I got a nice Japanese gift from Niibe Yutaka (Japan) - a folder to keep papers, with ancient Japanese manga characters on it. He said he felt guilty because he had swapped talk slots with me, which moved my talk from the 12th of September to the 16th - the talk I later withdrew.
Niibe Yutaka (on the right) from Japan (FSIJ) gave me a wonderful Japanese gift during DebConf23: a folder to keep pages, with ancient Japanese manga characters printed on it. I realized I immediately needed that :)
This is the Japanese gift I received.
Group photo
On the 16th of September, we had a group photo. I am glad that this year I am more clearly visible in the picture than I was in the DebConf22 one.
Click to enlarge
Volunteer work and talks attended
I attended the training session for the video team and worked as a camera operator. The Bits from the DPL talk was nice. I enjoyed Abhas' presentation on home automation, in which he demonstrated how he liberated Internet-enabled home devices. I also liked Kristi's presentation on ways to engage with the GNOME community.
Bits from the DPL. Credits: Bilal
Kristi on GNOME community.
Abhas' talk on home automation
I also attended lightning talks on the last day. Badri, Wouter, and I gave a demo on how to register on the Prav app. Prav got a fair share of advertising during the last few days.
I was roaming around with a QR code on my T-shirt for downloading Prav.
The night of the 17th of September
Suresh left the hotel and Badri joined me in my room. Thanks to the efforts of Abhijit PA, Kiran, and Ananthu, I wore a mundu.
Me in mundu. Picture credits: Abhijith PA
I then joined Kalyani, Mangesh, Ruchika, Anisa, Ananthu and Kiran. We took pictures and this marked the last night of DebConf23.
Departure day
The 18th of September was the day of departure. Badri slept in my room and left early morning (06:30 AM). I dropped him off at the hotel gate. The breakfast was at the eatery (3rd floor) again, and it was good.
Sahil, Saswata, Nilesh, and I hung out on the ground floor.
From left: Nilesh, Saswata, me, Sahil
I had an 8 PM flight from Kochi to Delhi, for which I took a cab with Rhonda (Austria), Michael (Nigeria) and Yash (India). We were joined by other DebConf23 attendees at the Kochi airport, where we took another selfie.
Ruchika (taking the selfie) and from left to right: Yash, Joost (Netherlands), me, Rhonda
Joost and I were on the same flight, and we sat next to each other. He then took a connecting flight from Delhi to the Netherlands, while I went with Yash to the New Delhi Railway Station, where we took our respective trains. I reached home on the morning of the 19th of September, 2023.
Joost and me going to Delhi
Big thanks to the organizers
DebConf23 was hard to organize - strict alcohol laws, weird hotel rules, death of a close friend (almost a family member), and a scary notice by the immigration bureau. The people from the team are my close friends and I am proud of them for organizing such a good event.
None of this would have been possible without the organizers, who put in more than a year of voluntary effort to produce this. Meanwhile, many of them also organized local events in the time leading up to DebConf. Kudos to them.
The organizers also tried their best to get clearance for countries not approved by the ministry. I am also sad that people from China, Kosovo, and Iran could not join. In particular, I feel bad for people from Kosovo who wanted to attend but could not (as India does not consider their passport to be a valid travel document), considering how we Indians were so well-received in their country last year.
Note about myself
I am writing this on the 22nd of September, 2023. It took me three days to put up this post - it was one of the most tragic and hardest posts for me to write. I have literally forced myself to write this. I have still not recovered from the loss of my friend. Thanks a lot to all those who helped me.
PS: Credits to contrapunctus for making grammar, phrasing, and capitalization changes.
This is a short announcement to say that I have changed my main
OpenPGP key. A signed statement is available with the
cryptographic details but, in short, the reason is that I stopped
using my old YubiKey NEO that I have worn on my keyring since
2015.
I now have a YubiKey 5 which supports ED25519, which features much
shorter keys and faster decryption. It allowed me to move all my
secret subkeys onto the key (including encryption keys) while retaining
reasonable performance.
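For reference, a minimal sketch of that kind of move with GnuPG looks roughly like this - the key ID and subkey number are placeholders; key 1 selects the subkey, keytocard writes it to the matching card slot, and save commits the change (the full procedure, with its backup caveats, is in the documentation mentioned below):
gpg --edit-key 0xYOURKEYID
gpg> key 1
gpg> keytocard
gpg> save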
I have written extensive documentation on how to do that OpenPGP key
rotation and also YubiKey OpenPGP operations.
Warning on storing encryption keys on a YubiKey
People wishing to move their private encryption keys to such a
security token should be very careful as there are special
precautions to take for disaster recovery.
I am toying with the idea of writing an article specifically about
disaster recovery for secrets and backups, dealing specifically with
cases of death or disabilities.
Autocrypt changes
One nice change is the impact on Autocrypt headers, which are
considerably shorter.
Before, the header didn't even fit on a single line in an email; it
overflowed to five lines.
Note that I have implemented my own kind of ridiculous Autocrypt
support for the Notmuch Emacs email client I use, see this
elisp code. To import keys, I pipe the message into this
script, which is basically just:
sq autocrypt decode | gpg --import
... thanks to Sequoia's best-of-class Autocrypt support.
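As a usage sketch (assuming notmuch as the mail store, as above; the message ID is a placeholder), importing the key from a given message then looks something like:
notmuch show --format=raw id:some-message-id@example.com | sq autocrypt decode | gpg --import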
Note on OpenPGP usage
While some have claimed OpenPGP's death, I believe those claims are
overstated. Maybe it's just me, but I still use OpenPGP for my
password management, to authenticate users and messages, and it's the
interface to my YubiKey for authenticating with SSH servers.
I understand people feel that OpenPGP is possibly insecure,
counter-intuitive and full of problems, but I think most of those
problems should instead be attributed to its current flagship
implementation, GnuPG. I have tried to work with GnuPG for years, and
it keeps surprising me with evilness and oddities.
I have high hopes that the Sequoia project can bring some sanity
into this space, and I also hope that RFC4880bis can eventually
get somewhere so we have a more solid specification with more robust
crypto. It's kind of a shame that this has dragged on for so long.
(Update: there's a separate draft called openpgp-crypto-refresh that
might actually be adopted as the "OpenPGP RFC" soon!) And
it doesn't keep real work from happening in Sequoia and other
implementations. Thunderbird rewrote their OpenPGP implementation with
RNP (which was, granted, a bumpy road because it lost
compatibility with GnuPG) and Sequoia now has a certificate store
with trust management (but still no secret storage), preliminary
OpenPGP card support and even a basic GnuPG compatibility
layer. I'm also curious to try out the OpenPGP CA
capabilities.
So maybe it's just because I'm becoming an old fart that doesn't want
to change tools, but so far I haven't seen a good incentive in
switching away from OpenPGP, and haven't found a good set of tools
that completely replace it. Maybe OpenSSH's keys and CA can eventually
replace it, but I suspect they will end up rebuilding most of OpenPGP
anyway, just more slowly. If they do, let's hope they avoid the
mistakes our community has made in the past, at least...
I
uploaded
sgt-puzzles to unstable. This brought in the new upstream
version previously in experimental. I incorporated an updated
German translation from Helge Kreutzmann, and made translation
updates less tricky to do.
I updated the buster-security (4.19) branch of linux to
stable version 4.19.288, but didn't upload it this month.
I
fixed
build regressions for linux/experimental on several
architectures, and sent the changes upstream where appropriate
(hppa,
m68k,
and preemptively
sparc).
I created a bookworm-backports branch for the linux package, but
that suite is not yet open to uploads.
I uploaded linux version 6.1.27-1~bpo11+1 and firmware-nonfree
version 20230210-5~bpo11+1 to bullseye-backports, but they still
haven't been accepted.
The amdgpu driver lists some firmware files as potentially needed
that aren't packaged or even publicly available, which leads
to warnings from
initramfs-tools on systems using this driver. I
queried
these upstream, which should hopefully lead to a resolution
of the bug.
Reduce the size of your C: partition to the smallest it can be and then turn off Windows, with the understanding that you will never boot this system on the iron ever again.
Boot into a netinst installer image (no GUI). Hold Alt and press the left arrow a few times until you get to a prompt to press enter. Press Enter.
In this example /dev/sda is your Windows disk, which contains the C: partition,
and /dev/disk/by-id/usb0 is the USB-3 attached SATA controller that you have your SSD attached to (please find an example attached). This SSD should be equal to or larger than the Windows disk for best compatibility.
To find the literal path names of your detected drives you can run fdisk -l. Pay attention to the names of the partitions and the sizes of the drives to help determine which is which.
Once you have a shell in the netinst installer, you should maybe be able to run a command like the following. This will duplicate the disk located at if (in file) to the disk located at of (out file) while showing progress as the status.
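Going by that description, the command would presumably be something like the following (device names taken from the example above - double-check them against your own fdisk -l output before running anything):
dd if=/dev/sda of=/dev/disk/by-id/usb0 status=progress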
If you confirm that dd is available on the netinst image and the previous command runs successfully, test that your Windows partition is visible in the new disk's partition table. The start block of the Windows partition on each should match, as should the partition size.
fdisk -l /dev/disk/by-id/usb0
fdisk -l /dev/sda
If the output from the first is the same as the output from the second, then you are probably safe to proceed.
Once you confirm that you have made and tested a full copy of the blocks from your Windows drive saved on your USB disk, nuke your Windows partition table from orbit.
dd if=/dev/zero of=/dev/sda bs=1M count=42
You can press alt-f1 to return to the Debian installer now. Follow the instructions to install Debian. Don't forget to remove all attached USB drives.
Once you install Debian, press ctrl-alt-f3 to get a root shell.
Add your user to the sudoers group:
# adduser cjac sudoers
log out
# exit
log in as your user and confirm that you have sudo
$ sudo ls
Don't forget to read the Spider-Man advice
enter your password
you'll need to install virt-manager. I think this should help:
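Presumably something along these lines (the exact package set is a guess, based on the apt-get install reference further down):
$ sudo apt-get install virt-manager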
I personally create a volume group called /dev/vg00 for the stuff I want to run raw, and instead of converting to qcow2 like all of the other users do, I write it to a new logical volume.
sudo lvcreate /dev/vg00 -n windows -L 42G # or however large your drive was
sudo dd if=/dev/disk/by-id/usb0 of=/dev/vg00/windows status=progress
Now that you've got the disk image written, press alt-left until you return to your GDM session.
The apt-get install command above installed virt-manager, so log in to your system if you haven't already and open up gnome-terminal by pressing the windows key or moving your mouse/gesture to the top left of your screen. Type in gnome-terminal and either press enter or click/tap on the icon.
I like to run this full screen so that I feel like I'm in a space ship. If you like to feel like you're in a spaceship, too, press F11.
You can start virt-manager from this shell, or you can press the windows key, type in virt-manager, and press enter. You'll want the shell to run commands such as virsh console windows or virsh list.
When virt-manager starts, right click on QEMU/KVM and select New.
In the New VM window, select Import existing disk image
When prompted for the path to the image, use the logical volume (or image) we created above.
Select the version of Windows you want.
Select memory and CPUs to allocate to the VM.
Tick the "Customize configuration before install" box.
If you're prompted to enable the default network, do so now.
The default hardware layout should probably suffice. Get it as close to the underlying hardware as is convenient. Windows is pretty lenient these days about virtualizing licensed Windows instances, so long as they're not running in more than one place at a time.
Good luck! Leave comments if you have questions.
Pearls of Luthra
Pearls of Luthra is the first book by Brian Jacques I have read, and I think I am going to be a fan of his work. This particular book you have to be wary of. While it is a beautiful book with quite a few illustrations, I have to warn that if you are somebody who feels hungry at the very mention of food, then you will be hungry throughout the book. There isn't a single page where food isn't mentioned, and not just any kind of food - the kind of food that is geared towards the sweet tooth. So if you fancy tarts or chocolates or anything sweet, you will feel right at home. The book also touches upon various teas, wines, and liquors, but food is where it literally shines. The tale is very much like a Harry Potter adventure but isn't as dark as HP was. In fact, apart from one death and one missing ear, the rest of our heroes and heroines - and there are quite a few - come through fine. I don't want to give too much away, as it's a book to be treasured.
Dahaad
Dahaad (the roar) is Sonakshi Sinha's entry into OTT/web series. The stage is set somewhere in North India, while the exploits are based on a real-life person called Cyanide Mohan, who killed 20 women between 2005 and 2009. In the web series, however, the antagonist's crimes span a period of 12 years, with 29 women as his victims. Apart from that, it's pretty much a copy of what was done by the person above. It's a melting pot of a series, with quite a few stories enmeshed along with the main one. The main plot is about women from a lower economic and caste order whose families want them to be wed but cannot meet the huge demands for dowry. In such a situation, if a person were to give them a bit of attention, promise marriage, and ask them to steal a bit and run away with him, they would do it. This was exactly the modus operandi of Cyanide Mohan. He had a car that was not actually his, but he used it to show off that he was from a richer background, entice the women, have sex, and promise marriage - and the morning-after pill he gave them contained cyanide, which the women unwittingly consumed.
This is also framed by the protagonist, played by Sonakshi Sinha, to her mother, as her mother is also forcing her to get married because she is getting older. She shows some of the photographs of the victims and says that while the perpetrator is guilty, so is the society that puts women in such vulnerable positions. AFAIK, that is still the state of things. In fact, there is a series called Indian Matchmaking that has all the snobbishness that you want. How many people could have a lifestyle like the ones shown in it? Less than 2% of the population. It's actually shows like that which make the whole thing even more precarious.
Apart from that, the show also depicts prejudice about caste and background. I wouldn't go much into it, as it's worth seeing and experiencing.
Tetris
Tetris is in many ways a story of greed. It's also the story of a lone inventor who had to wait almost 20-odd years to profit from his invention. Forbes does a marvelous job of giving some more background and foreground info about Tetris, the inventor, and the producers who went on to strike it rich. It also shows how copyright misrepresentation happens but does nothing to address it. I could talk a whole lot more, but it's better to see the movie and draw your own conclusions. For me it was 4/5.
Discord
Discord became Discord 2.0 and is a blank to me. A blank page. Can't do anything. First I thought it was a bug. I waited for a few days, as sometimes web services fix themselves. But two weeks on it still wasn't fixed, so I decided to look under the hood. One of the tools in Firefox is the Web Developer Tools (Ctrl+Shift+I), which tells you if an element of a page is not appearing, or at least gives you a hint. It gave me the following:
Content Security Policy: Ignoring 'unsafe-inline' within script-src or style-src: nonce-source or hash-source specified Content Security Policy: The page s settings blocked the loading of a resource at data:text/css,%0A%20%20%20%20%20%20%20%2 ( style-src ). data:44:30 Content Security Policy: Ignoring 'unsafe-inline' within script-src or style-src: nonce-source or hash-source specified TypeError: AudioContext is not a constructor 138875 https://discord.com/assets/cbf3a75da6e6b6a4202e.js:262 l https://discord.com/assets/f5f0b113e28d4d12ba16.js:1ed46a18578285e5c048b.js:241:118
What is happening is that dom.webaudio.enabled is disabled in my Firefox.
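If you want to check or undo this yourself, a rough sketch (assuming a reasonably current Firefox, where this preference still exists):
about:config → search for dom.webaudio.enabled → set it back to true to re-enable the Web Audio API (and with it AudioContext).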
Then, on a hunch, I searched on Reddit and saw the following. Be careful while visiting the link as it's labelled NSFW, although to my mind there wasn't anything remotely NSFW about it. They do mention using another tool, AudioContext Fingerprint Defender, which supposedly fakes or spoofs an id. As this add-on isn't tracked by the Firefox privacy team, it's hard for me to say anything positive or negative.
So, in the end, I stopped using Discord, as the alternative was being tracked by them.
Last but not least, I saw this about a week back. Sooner or later this had to happen, as Elon tries to make money off Twitter.
Ken MacLeod is a Scottish science fiction writer who has become amusingly
famous for repeatedly winning the libertarian Prometheus Award despite
being a (somewhat libertarian-leaning) socialist. The Star
Fraction is the first of a loose series of four novels about future solar
system politics and was nominated for the Clarke Award (as well as winning the Prometheus). It was MacLeod's
first novel.
Moh Kohn is a mercenary, part of the Felix Dzerzhinsky Workers' Defence
collective. They're available for hire to protect research labs and
universities against raids from people such as animal liberationists and
anti-AI extremists (or, as Moh calls them, creeps and cranks). As
The Star Fraction opens, he and his smart gun are protecting a lab
against an attack.
Janis Taine is a biologist who is currently testing a memory-enhancing
drug on mice. It's her lab that is attacked, although it isn't vandalized
the way she expected. Instead, the attackers ruined her experiment by
releasing the test drug into the air, contaminating all of the controls.
This sets off a sequence of events that results in Moh, Janis, and Jordon
Brown, a stock trader for a religious theocracy, on the run from the US/UN
and Space Defense.
I had forgotten what it was like to read the uncompromising old-school
style of science fiction novel that throws you into the world and explains
nothing, leaving it to the reader to piece the world together as you go.
It's weirdly fun, but I'm either out of practice or this was a
particularly challenging example of the genre. MacLeod throws a lot of
characters at you quickly, including some that have long and complicated
personal histories, and it's not until well into the book that the pieces
start to cohere into a narrative. Even once that happens, the
relationship between the characters and the plot is unobvious until late
in the book, and comes from a surprising direction.
Science fiction as a genre is weirdly conservative about political
systems. Despite the grand, futuristic ideas and the speculation about
strange alien societies, the human governments rarely rise to the
sophistication of a modern democracy. There are a lot of empires,
oligarchies, and hand-waved libertarian semi-utopias, but not a lot of
deep engagement with the speculative variety of government systems humans
have proposed. The rare exceptions therefore get a lot of attention from
those of us who find political systems fascinating.
MacLeod has a reputation for writing political SF in that sense, and
The Star Fraction certainly delivers. Moh (despite the name of his
collective, which is explained briefly in the book) is a Trotskyist with a
family history with the Fourth International that is central to the plot.
The setting is a politically fractured Britain full of autonomous zones
with wildly different forms of government, theoretically ruled by a
restored monarchy. That monarchy is opposed by the Army of the New
Republic, which claims to be the legitimate government of the United
Kingdom and is considered by everyone else to be terrorists. Hovering in
the background is a UN entirely subsumed by the US, playing global
policeman over a chaotic world shattered by numerous small-scale wars.
This satisfyingly different political world is a major plus for me. The
main drawback is that I found the world-building and politics more
interesting than the characters. It's not that I disliked them; I found
them enjoyably quirky and odd. It's more that so much is happening and
there are so many significant characters, all set in an unfamiliar and
unexplained world and often divided into short scenes of a few pages, that
I had a hard time keeping track of them all. Part of the point of
The Star Fraction is digging into their tangled past and connecting
it up with the present, but the flashbacks added a confused timeline on
top of the other complexity and made it hard for me to get lost in the
story. The characters felt a bit too much like puzzle pieces until the
very end of the book.
The technology is an odd mix with a very 1990s feel. MacLeod is one of
the SF authors who can make computers and viruses believable, avoiding the
cyberpunk traps, but AI becomes relevant to the plot and the conception of
AI here feels oddly retro. (Not MacLeod's fault; it's been nearly 30
years and a lot has changed.) On-line discussion in the book is still
based on newsgroups, which added to the nostalgic feel. I did like the
eventual explanation for the computing part of the plot, though; I can't
say much while avoiding spoilers, but it's one of the more believable
explanations for how a technology could spread in a way required for the
plot that I've read.
I've been planning on reading this series for years but never got around
to it. I enjoyed my last try at a MacLeod
series well enough to want to keep reading, but not well enough to keep
reading immediately, and then other books happened and now it's been 19
years. I feel similarly about The Star Fraction: it's good enough
(and in a rare enough subgenre of SF) that I want to keep reading, but not
enough to keep reading immediately. We'll see if I manage to get to the
next book in a reasonable length of time.
Followed by The Stone Canal.
Rating: 6 out of 10
Dominik George
did 0.0h (out of 10.0h assigned and 14.0h from previous period), thus carrying over 24.0h to the next month.
Emilio Pozuelo Monfort
did 8.0h in December, 8.0h in November (out of 1.5h assigned and 49.5h from previous period), thus carrying over 43.0h to the next month.
Enrico Zini
did 0.0h (out of 0h assigned and 8.0h from previous period), thus carrying over 8.0h to the next month.
Guilhem Moulin
did 17.5h (out of 20.0h assigned), thus carrying over 2.5h to the next month.
Helmut Grohne
did 15.0h (out of 15.0h assigned, 2.5h were taken from the extra-budget and worked on).
Utkarsh Gupta
did 51.5h (out of 42.5h assigned and 9.0h from previous period).
Evolution of the situation
In December, we released 47 DLAs, closing 232 CVEs.
Over the whole year, we released a total of 394 DLAs, closing 1450 CVEs.
We are constantly growing and seeking new contributors. If you are a Debian Developer and want to join the LTS team,
please contact us.
Thanks to our sponsors
Sponsors that joined recently are in bold.
Welcome to yet another report from the Reproducible Builds project, this time for November 2022. In all of these reports (which we have been publishing regularly since May 2015) we attempt to outline the most important things that we have been up to over the past month. As always, if you are interested in contributing to the project, please visit our Contribute page on our website.
Reproducible Builds Summit 2022
Following up from last month's report about our recent summit in Venice, Italy, a comprehensive report from the meeting has not been finalised yet - watch this space!
As a very small preview, however, we can link to several issues that were filed about the website during the summit (#38, #39, #40, #41, #42, #43, etc.), and we collectively learned about Software Bills of Materials (SBOMs) and how .buildinfo files can be seen/used as SBOMs. And, no less importantly, the Reproducible Builds t-shirt design has been updated.
Reproducible Builds at European Cyber Week 2022
During the European Cyber Week 2022, a Capture The Flag (CTF) cybersecurity challenge was created by Frédéric Pierret on the subject of Reproducible Builds. The challenge was pedagogical in nature, based on how to make a software release reproducible. To progress through the challenge, issues that affect the reproducibility of a build (such as build path, timestamps, file ordering, etc.) had to be fixed in steps in order to get the final flag and win the challenge.
At the end of the competition, five people succeeded in solving the challenge, all of whom were awarded a shirt. Frédéric Pierret intends to create a similar challenge in the form of a how-to in the Reproducible Builds documentation; two of the 2022 winners are shown here:
[…] industry application of R-Bs appears limited, and we seek to understand whether awareness is low or if significant technical and business reasons prevent wider adoption.
This is achieved through interviews with software practitioners and business managers, and touches on both the business and technical reasons supporting the adoption (or not) of Reproducible Builds. The article also begins with an excellent explanation and literature review, and even introduces a new helpful analogy for reproducible builds:
[Users are] able to perform a bitwise comparison of the two binaries to verify that they are identical and that the distributed binary is indeed built from the source code in the way the provider claims. Applied in this manner, R-Bs function as a canary, a mechanism that indicates when something might be wrong, and offer an improvement in security over running unverified binaries on computer systems.
The full paper is available to download on an open access basis.
Elsewhere in academia, Beatriz Michelson Reichert and Rafael R. Obelheiro have published a paper proposing a systematic threat model for a generic software development pipeline, identifying possible mitigations for each threat (PDF). Under the Tampering rubric of their paper, they describe various attacks against Continuous Integration (CI) processes:
An attacker may insert a backdoor into a CI or build tool and thus introduce vulnerabilities into the software (resulting in an improper build). To avoid this threat, it is the developer's responsibility to take due care when making use of third-party build tools. Tampered compilers can be mitigated using diversity, as in the diverse double compiling (DDC) technique. Reproducible builds, a recent research topic, can also provide mitigation for this problem. (PDF)
Misc news
A change was proposed for the Go programming language to enable reproducible builds when Link Time Optimisation (LTO) is enabled. As mentioned in the changelog, Morten Linderud's patch fixes two issues when the linker is used in conjunction with the -flto option: the first solves an issue related to seeded random numbers, and the second stops the binary from embedding the current working directory in compressed sections of the LTO object. Both of these issues made the build unreproducible.
Our monthly IRC meeting was held on November 29th 2022. Our next meeting will be on January 31st 2023; we'll skip the meeting in December due to the proximity to Christmas, etc.
Vagrant Cascadian posed an interesting question regarding the difference between test builds and rebuilds (or "verification rebuilds"). As Vagrant poses in their message, they're both useful for slightly different purposes, and it might be good to clarify the distinction […].
Debian & other Linux distributions
Over 50 reviews of Debian packages were added this month, another 48 were updated and almost 30 were removed, all of which adds to our knowledge about identified issues. Two new issue types were added as well. [][].
Vagrant Cascadian announced on our mailing list another online sprint to help clear the huge backlog of reproducible builds patches submitted by performing NMUs (Non-Maintainer Uploads). The first such sprint took place on September 22nd, but others were held on October 6th and October 20th. There were two additional sprints that occurred in November, however, which resulted in the following progress:
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
diffoscope
diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 226 and 227 to Debian:
Support both python3-progressbar and python3-progressbar2, two modules providing the progressbar Python module. []
Don't run Python decompiling tests on Python bytecode that file(1) cannot detect yet and Python 3.11 cannot unmarshal. (#1024335)
Don't attempt to attach text-only differences notice if there are no differences to begin with. (#1024171)
Make sure we recommend apksigcopier. []
Tidy generation of os_list. []
Make the code clearer around generating the Debian "substvars". []
Use our assert_diff helper in test_lzip.py. []
Drop other copyright notices from lzip.py and test_lzip.py. []
In addition to this, Christopher Baines added lzip support [], and FC Stegerman added an optimisation whereby we don t run apktool if no differences are detected before the signing block [].
A significant number of changes were made to the Reproducible Builds website and documentation this month, including Chris Lamb ensuring the openEuler logo is correctly visible with a white background [], FC Stegerman de-duplicating contributors by email address to avoid listing some of them twice [], Hervé Boutemy adding Apache Maven to the list of affiliated projects [] and boyska updating our Contribute page to remark that the Reproducible Builds presence on salsa.debian.org is not just the Git repository but is also for creating issues [][]. In addition to all this, however, Holger Levsen made the following changes:
Add a number of existing publications [][] and update metadata for some existing publications as well [].
Add the Warpforge build tool as a participating project of the summit. []
Clarify in the footer that we welcome patches to the website repository. []
Testing framework
The Reproducible Builds project operates a comprehensive testing framework at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In October, the following changes were made by Holger Levsen:
Improve the generation of meta package sets (used in grouping packages for reporting/statistical purposes) to treat Debian bookworm as equivalent to Debian unstable in this specific case []
and to parse the list of packages used in the Debian cloud images [][][].
Temporarily allow Frederic to ssh(1) into our snapshot server as the jenkins user. []
Keep some reproducible jobs Jenkins logs much longer [] (later reverted).
Improve the node health checks to detect failures to update the Debian cloud image package set [][] and to improve prioritisation of some kernel warnings [].
Always echo any IRC output to Jenkins output as well. []
Deal gracefully with problems related to processing the cloud image package set. []
Finally, Roland Clobus continued his work on testing Live Debian images, including adding support for specifying the origin of the Debian installer [] and to warn when the image has unmet dependencies in the package list (e.g. due to a transition) [].
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can get in touch with us via:
History, Setup
So for quite some time I have had a QNAP TS-873x here, equipped with 8
Western Digital Red 10 TB disks, plus 2 WD Blue 500G M2 SSDs. The QNAP
itself has an AMD Embedded R-Series RX-421MD with 4 cores and was
equipped with 48G RAM.
Initially I had been quite happy, the system is nice. It was fast, it
was easy to get to run and the setup of things I wanted was simple
enough. All in a web interface that tries to imitate a kind of
workstation feeling and also tries to hide that it is actually a
web interface.
Naturally, with that number of disks I had a RAID6 for the disks, plus
RAID1 for the SSDs. And then configured as a big storage pool with the
RAID1 as cache. Below the hood QNAP uses MDADM Raid and LVM (if you
want, with thin provisioning), in some form of emdedded linux. The
interface allows for regular snapshots of your storage with flexible
enough schedules to create them, so it all appears pretty good.
QNAP slow
Fast forward some time and it gets annoying. First off you really
should have regular RAID resyncs scheduled, and while you can set
priorities on them and have them low priority, they make the whole
system feel very sluggish, quite annoying. And sure, power failure
(rare, but can happen) means another full resync run. Also, it appears
all of the snapshots are always mounted to some
/mnt/snapshot/something place (df on the system gets quite unusable).
Second, the reboot times. QNAP seems to be affected by the "more
features, fuck performance" virus, and they bloat their OS with more and
more features while completely ignoring performance. Every time they
do an upgrade it feels worse. Lately reboot times went up to 10 to
15 minutes - and then it still hadn't started the virtual machines /
Docker containers one might run on it. Another 5 to 10 minutes for those.
Opening the file explorer - ages on calculating what to show. Trying
to get the storage setup shown? Go get a coffee, but please fetch the
beans directly from the plantation, or you are too fast.
Annoying it was. And no, no broken disks or fan or anything, it all
checks out fine.
Replace QNAPs QTS system
So I started looking around what to do. More RAM may help a little
bit, but I already had 48G, the system itself appears to only do 64G
maximum, so not much chance of it helping enough. Hardware is all fine
and working, so software needs to be changed. Sounds hard, but turns
out, it is not.
TrueNAS
And I found that multiple people replaced the QNAPs own system with a
TrueNAS installation and generally had been happy. Looking further I
found that TrueNAS has a variant called Scale - which is based on
Debian. Doubly good, that, so I went off checking what I may need for
it.
Requirements
Heck, that was a step back. To install TrueNAS you need an HDMI out
and a disk to put it on. The one that QTS uses is too small, so no
option.
QNAPs original internal
USB drive, DOM
So either use one of the SSDs that played cache (and
should do so again in TrueNAS), or get the QNAP original replaced.
HDMI out is simple, get a cheap card and put it into one of the two
PCIe-4x slots, done. The disk thing looked more complicated, as QNAP
uses some internal "USB stick thing". Turns out it is just a USB
stick that has an 8+1 pin connector. Couldn't find anything nice as a
replacement, but hey, there are 9-pin to USB-A adapters.
a 9pin to USB A adapter
With that adapter, one can take some random M2 SSD and an M2-to-USB
case, plus some cabling, and voila, we have a nice system disk.
9pin adapter to USB-A connected with some
more cable
Obviously there isn t a good place to put this SSD case and cable, but the
QNAP case is large enough to find space and use some cable ties to store it
safely. Space enough to get the cable from the side, where the
mainboard is to the place I mounted it, so all fine.
Mounted SSD in its external case
The next best M2 SSD was a Western Digital Red with 500G - and while
this is WAY too much for TrueNAS, it works. And hey, only using a tiny
fraction? Oh, so many more cells available internally to use when
others break. Or something
Together with the Asus card mounted I was able to install TrueNAS.
Which is simple, their installer is easy enough to follow, just make
sure to select the right disk to put it on.
Preserving data during the move
Switching from QNAP QTS to TrueNAS Scale means changing from MDADM
RAID with LVM and ext4 on top to ZFS, and as such all data on it gets
erased. So a backup first is helpful, and I got myself two external
Seagate USB Disks of 6TB each - enough for the data I wanted to keep.
Copying things all over took ages, especially as the QNAP backup
thingie sucks, it was breaking quite often. Also, for some reason I
did not investigate, its performance was really bad. It started at
a maximum of 50MB/s, but the last terabyte of data was copied
at MUCH less than that, and so it took much longer than I anticipated.
Copying back was slow too, but much less so. Of course reading things
usually is faster than writing, with it going around 100MB/s most of
the time, which is quite a bit more - still not what USB3 can actually
do, but I guess the AMD chip doesn't want to go that fast.
TrueNAS experience
The installation went mostly smooth, the only real trouble had been on
my side. Turns out that a bad network cable does NOT help the network
setup, who would have thought. Other than that it is the usual set of
questions you would expect, a reboot, and then some webinterface.
And here the differences start. The whole system boots up much faster.
Not even a third of the time compared to QTS.
One important thing: As TrueNAS scale is Debian based, and hence a
linux kernel, it automatically detects and assembles the old RAID
arrays that QTS put on. Which TrueNAS can do nothing with, so it helps
to manually stop them and wipe the disks.
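From a shell, that amounts to roughly the following sketch - the array and device names here are examples only, and wipefs is destructive, so make absolutely sure your backup is good first:
mdadm --stop /dev/md0      # repeat for every assembled array listed in /proc/mdstat
wipefs -a /dev/sdX         # for each of the old QTS data disks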
Afterwards I put ZFS on the disks, with a similar setup to what I had
before. The spinning rust are the data disks in a RAIDZ2 setup, the
two SSDs are added as cache devices. Unlike MDADM, ZFS does not have a
long sync process. Also unlike the MDADM/LVM/EXT4 setup from before,
ZFS works different. It manages the raid thing but it also does the
volume and filesystem parts. Quite different handling, and I'm still
getting used to it, so no, I won't write a ZFS introduction now.
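For the curious, the shape of that pool on the command line would look roughly like the following sketch (pool name and device names are made up; TrueNAS builds the pool through its web UI anyway):
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
zpool add tank cache /dev/nvme0n1 /dev/nvme1n1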
Features
The two systems cannot be compared completely; they have pretty
different target audiences. QNAP is more for the user that wants
some network storage that offers a ton of extra features easily
available via a clickable interface. While TrueNAS appears more
oriented to people that want a fast but reliable storage system.
TrueNAS does not offer all the extra bloat the QNAP delivers. Still,
you have the ability to run virtual machines and it seems it comes
with Rancher, so some kubernetes/container ability is there. It lacks
essential features like assigning PCI devices to virtual machines, so
is not useful right now, but I assume that will come in a future
version.
I am still exploring it all, but I like what I have right now. Still
rebuilding my setup to have all shares exported and used again, but
the most important are working already.
I'm trying to replace my old OpenPGP key with a new one. The old key wasn't compromised or lost or anything
bad. It is still valid, but I plan to get rid of it soon. It was created in 2013.
The new key id fingerprint is: AA66280D4EF0BFCC6BFC2104DA5ECB231C8F04C4
I plan to use the new key for things like encrypted emails, uploads to the Debian archive, and more. Also,
the new key includes an identity with a newer personal email address I plan to use soon: arturo.bg@arturo.bg
The new key has been uploaded to some public keyservers.
If you would like to sign the new key, please follow the steps in the Debian wiki.
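For what it's worth, fetching the new key typically looks something like this (the choice of keyserver is yours; the fingerprint is the one quoted above):
gpg --keyserver hkps://keyserver.ubuntu.com --recv-keys AA66280D4EF0BFCC6BFC2104DA5ECB231C8F04C4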
If you are curious about what that long code block contains, check this https://cirw.in/gpg-decoder/
For the record, the old key fingerprint is: DD9861AB23DC3333892E07A968E713981D1515F8
Cheers!
Welcome to the September 2022 report from the Reproducible Builds project! In our reports we try to outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. If you are interested in contributing to the project, please visit our Contribute page on our website.
David A. Wheeler reported to us that the US National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA) and the Office of the Director of National Intelligence (ODNI) have released a document called Securing the Software Supply Chain: Recommended Practices Guide for Developers (PDF).
As David remarked in his post to our mailing list, it expressly recommends having reproducible builds as part of advanced recommended mitigations . The publication of this document has been accompanied by a press release.
Holger Levsen was made aware of a small Microsoft project called oss-reproducible. Part of OSSGadget, a larger collection of tools for "analyzing open source packages", the purpose of oss-reproducible is to:
analyze open source packages for reproducibility. We start with an existing package (for example, the NPM left-pad package, version 1.3.0), and we try to answer the question, "Do the package contents authentically reflect the purported source code?"
More details can be found in the README.md file within the code repository.
David A. Wheeler also pointed out that there are some potential upcoming changes to the OpenSSF Best Practices badge for open source software in relation to reproducibility. Whilst the badge programme has three certification levels ("passing", "silver" and "gold"), the gold level includes the criterion that "The project MUST have a reproducible build".
David reported that some projects have argued that this reproducibility criterion should be slightly relaxed, as outlined in an issue on the best-practices-badge GitHub project. Essentially, though, the claim is that the reproducibility requirement doesn't make sense for projects that do not release built software, and that timestamp differences by themselves don't necessarily indicate malicious changes. Numerous pragmatic problems around excluding timestamps were raised in the discussion of the issue.
Sonatype, a pioneer of "software supply chain management", issued a press release this month to report that they had found:
[…] a massive year-over-year increase in cyberattacks aimed at open source project ecosystems. According to early data from Sonatype's 8th annual State of the Software Supply Chain Report, which will be released in full this October, Sonatype has recorded an average 700% jump in repository attacks over the last three years.
More information is available in the press release.
A number of changes were made to the Reproducible Builds website and documentation this month, including Chris Lamb adding a redirect from /projects/ to /who/ in order to keep old or archived links working [], Jelle van der Waa added a Rust programming language example for SOURCE_DATE_EPOCH [][] and Mattia Rizzolo included Protocol Labs amongst our project-level sponsors [].
Debian
There was a large amount of reproducibility work taking place within Debian this month:
The nfft source package was removed from the archive, and now all packages in Debian bookworm have a corresponding .buildinfo file. This can be confirmed and tracked on the associated page on the tests.reproducible-builds.org site.
Vagrant Cascadian announced on our mailing list an informal online sprint to help clear the huge backlog of reproducible builds patches submitted, by performing NMUs (Non-Maintainer Uploads). The first such sprint took place on September 22nd with the following results:
Holger Levsen:
Mailed #1010957 in man-db asking for an update and whether to remove the patch tag for now. The tag was subsequently removed and the maintainer started to address the issue.
Emailed #1017372 in plymouth and asked for the maintainer's opinion on the patch. This resulted in the maintainer improving Vagrant's original patch (and uploading it) as well as filing an issue upstream.
The plan is to repeat these sprints every two weeks, with the next taking place on Thursday October 6th at 16:00 UTC on the #debian-reproducible IRC channel.
Roland Clobus posted his 13th update of the status of reproducible Debian ISO images on our mailing list. During the last month, Roland ensured that the live images are now automatically fed to openQA for automated testing after they have been shown to be reproducible. Additionally Roland asked on the debian-devel mailing list about a way to determine the canonical timestamp of the Debian archive. []
Lastly, 44 reviews of Debian packages were added, 91 were updated and 17 were removed this month adding to our knowledge about identified issues. A number of issue types have been updated too, including the descriptions of cmake_rpath_contains_build_path [], nondeterministic_version_generated_by_python_param [] and timestamps_in_documentation_generated_by_org_mode []. Furthermore, two new issue types were created: build_path_used_to_determine_version_or_package_name [] and captures_build_path_via_cmake_variables [].
Other distributions
In openSUSE, Bernhard M. Wiedemann published his usual openSUSE monthly report.
diffoscope
diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 222 and 223 to Debian, as well as made the following changes:
The cbfstools utility is now provided in Debian via the coreboot-utils package so we can enable that functionality within Debian. []
Fixed the try.diffoscope.org service by addressing a compatibility issue between glibc/seccomp that was preventing the Docker-contained diffoscope instance from spawning any external processes whatsoever []. I also updated the requirements.txt file, as some of the specified packages were no longer available [][].
In addition Jelle van der Waa added support for file version 5.43 [] and Mattia Rizzolo updated the packaging:
Also include coreboot-utils in the Build-Depends and Test-Depends fields so that it is available for tests. []
Use pep517 and pip to load the requirements. []
Remove packages in Breaks/Replaces that have been obsoleted since the release of Debian bullseye. []
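For readers who have not used diffoscope, it is normally pointed at two artifacts and asked for a report. A typical invocation, wrapped in Python here purely for illustration (the .deb file names are placeholders), might be:
    import subprocess

    # Compare two builds of the same package and write an HTML report.
    # diffoscope exits non-zero when the inputs differ, so check=False.
    subprocess.run(
        ["diffoscope", "--html", "report.html",
         "hello_1.0-1_amd64.deb", "hello_1.0-1.rebuild_amd64.deb"],
        check=False,
    )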
reprotest
reprotest is our end-user tool to build the same source code twice in widely and deliberately different environments, and then check whether the binaries produced by the builds have any differences. This month, reprotest version 0.7.22 was uploaded to Debian unstable by Holger Levsen, which included the following changes by Philip Hands:
Ensure that the setarch(8) utility can actually execute before including an architecture to test. []
Include all files matching *.*deb in the default artifact_pattern in order to archive all results of the build. []
Emit an error when building the Debian package if the Debian packaging version does not match the Python version of reprotest. []
Remove an unneeded invocation of the head(1) utility. []
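At its core, reprotest runs the same build in two deliberately different environments (it varies the time zone, locale, build path, umask and much more) and then compares the resulting artifacts. A much-simplified conceptual sketch of that loop, not reprotest's actual implementation (the build command and artifact name are placeholders):
    import hashlib
    import os
    import subprocess

    BUILD_CMD = ["make", "dist"]   # placeholder: whatever builds the package
    ARTIFACT = "output.tar.gz"     # placeholder: the file the build produces

    def build_once(variations: dict) -> str:
        """Run the build with some environment variations and hash the artifact."""
        subprocess.run(BUILD_CMD, env={**os.environ, **variations}, check=True)
        with open(ARTIFACT, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    first = build_once({"TZ": "UTC", "LANG": "C"})
    second = build_once({"TZ": "Asia/Tokyo", "LANG": "fr_FR.UTF-8"})
    print("reproducible under these variations" if first == second else "artifacts differ")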
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Testing framework
The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. This month, however, the following changes were made:
Holger Levsen:
Add a job to build reprotest from Git [] and use the correct Git branch when building it [].
Mattia Rizzolo:
Enable syncing of results from building live Debian ISO images. []
Use scp -p in order to preserve modification times when syncing live ISO images. []
Apply the shellcheck shell script analysis tool. []
In a build node wrapper script, remove some debugging code which was messing up calling scp(1) correctly [] and consequently add support to use both scp -p and regular scp [].
Roland Clobus:
Track and handle the case where the Debian archive gets updated between two live image builds. []
Remove a call to sudo(1) as it is not (or no longer) required to delete old live-build results. []
Contact
As ever, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
Inspired by several others (such as Alex Schroeder's post and Szczeżuja's prompt), as well as a desire to get this down for my kids, I figure it's time to write a bit about living through the PC and Internet revolution where I did: outside a tiny town in rural Kansas. And, as I've been back in that same area for the past 15 years, I reflect some on the challenges that continue to play out.
Although the stories from the others were primarily about getting online, I want to start by setting some background. Those of you that didn't grow up in the same era as I did probably never realized that a typical business PC setup might cost $10,000 in today's dollars, for instance. So let me start with the background.
Nothing was easy
This story begins in the 1980s. Somewhere around my Kindergarten year of school, around 1985, my parents bought a TRS-80 Color Computer 2 (aka CoCo II). It had 64K of RAM and used a TV for display and sound.
This got you the computer. It didn't get you any disk drive or anything, no joysticks (required by a number of games). So whenever the system powered down, or it hung and you had to power cycle it (a frequent event), you'd lose whatever you were doing and would have to re-enter the program, literally by typing it in.
The floppy drive for the CoCo II cost more than the computer, and it was quite common for people to buy the computer first and then the floppy drive later when they d saved up the money for that.
I particularly want to mention that computers then didn't come with a modem. That would be like buying a laptop or a tablet without wifi today. A modem, which I'll talk about in a bit, was another expensive accessory. To cobble together a system in the 80s that was capable of talking to others, with persistent storage (floppy, or hard drive), screen, keyboard, and modem, would be quite expensive. Adjusted for inflation, if you're talking a PC-style device (a clone of the IBM PC that ran DOS), this would easily be more expensive than the Macbook Pros of today.
Few people back in the 80s had a computer at home. And the portion of those that had even the capability to get online in a meaningful way was even smaller.
Eventually my parents bought a PC clone with 640K RAM and dual floppy drives. This was primarily used for my mom's work, but I did my best to take it over whenever possible. It ran DOS and, despite its monochrome screen, was generally a more capable machine than the CoCo II. For instance, it supported lowercase. (I'm not even kidding; the CoCo II pretty much didn't.) A while later, they purchased a 32MB hard drive for it. What luxury!
Just getting a machine to work wasn't easy. Say you'd bought a PC, and then bought a hard drive, and a modem. You didn't just plug in the hard drive and it would work. You would have to fight it every step of the way. The BIOS and DOS partition tables of the day used a cylinder/head/sector method of addressing the drive, and various parts of those addresses had too few bits to work with the big drives of the day above 20MB. So you would have to lie to the BIOS and fdisk in various ways, and sort of work out how to do it for each drive. For each peripheral (serial port, sound card in later years, etc.), you'd have to set jumpers for DMA and IRQs, hoping not to conflict with anything already in the system. Perhaps you can now start to see why USB and PCI were so welcomed.
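The exact limits depended on the BIOS, controller and DOS version, but two commonly cited ceilings of the era show why drives in this size class were troublesome. A quick back-of-the-envelope calculation (my own illustration, not from the original post):
    SECTOR = 512  # bytes

    # Early DOS FAT partitions counted sectors in a 16-bit field:
    dos_partition_limit = (2**16) * SECTOR
    print(f"16-bit sector count: {dos_partition_limit / 2**20:.0f} MiB per partition")  # 32 MiB

    # The classic BIOS int 13h / IDE intersection allowed at most
    # 1024 cylinders x 16 heads x 63 sectors per track:
    chs_limit = 1024 * 16 * 63 * SECTOR
    print(f"CHS geometry limit: {chs_limit / 2**20:.0f} MiB per drive")  # 504 MiB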
Sharing and finding resources
Despite the two computers in our home, it wasn't as if software written on one machine just ran on another. A lot of software for PC clones assumed a CGA color display. The monochrome HGC in our PC wasn't particularly compatible. You could find a TSR program to emulate the CGA on the HGC, but it wasn't particularly stable, and there's only so much you can do when a program that assumes color is displayed on a monitor that can only show black, dark amber, or light amber.
So I'd periodically get to use other computers, most commonly at an office in the evening when it wasn't being used.
There were some local computer clubs that my dad took me to periodically. Software was swapped back then: disks copied, shareware exchanged, and so forth. For me, at least, there was no "online" to download software from, and selling software over the Internet wasn't a thing at all.
Three Different Worlds
There were sort of three different worlds of computing experience in the 80s:
Home users. Initially using a wide variety of software from Apple, Commodore, Tandy/RadioShack, etc., but eventually coming to be mostly dominated by IBM PC clones
Small and mid-sized business users. Some of them had larger minicomputers or small mainframes, but most that I had contact with by the early 90s were standardized on DOS-based PCs. More advanced ones had a network running Netware, most commonly. Networking hardware and software was generally too expensive for home users to use in the early days.
Universities and large institutions. These are the places that had the mainframes, the earliest implementations of TCP/IP, the earliest users of UUCP, and so forth.
The difference between the home computing experience and the large institution experience was vast. Not only in terms of dollars (the large institution hardware could easily cost anywhere from tens of thousands to millions of dollars) but also in terms of sheer resources required (large rooms, enormous power circuits, support staff, etc.). Nothing was in common between them; not operating systems, not software, not experience. I was never much aware of the third category until the differences started to collapse in the mid-90s, and even then I was only exposed to it once the collapse was well underway.
You might say to me, "Well, Google certainly isn't running what I'm running at home!" And, yes of course, it's different. But fundamentally, most large datacenters are running on x86_64 hardware, with Linux as the operating system, and a TCP/IP network. It's a different scale, obviously, but at a fundamental level, the hardware and operating system stack are pretty similar to what you can readily run at home. Back in the 80s and 90s, this wasn't the case. TCP/IP wasn't even available for DOS or Windows until much later, and when it was, it was a clunky beast that was difficult to work with.
One of the things Kevin Driscoll highlights in his book The Modem World (see my short post about it) is that the history of the Internet we usually receive is focused on case 3: the large institutions. In reality, the Internet was and is literally a network of networks. Gateways to and from the Internet existed from all three kinds of users for years, and while TCP/IP ultimately won the battle of the internetworking protocol, the other two streams of users also shaped the Internet as we now know it. Like many, I had no access to the large institution networks, but as I've been reflecting on my experiences, I've found a new appreciation for the way that those of us who grew up with primarily home PCs also shaped the evolution of today's online world.
An Era of Scarcity
I should take a moment to comment about the cost of software back then. A newspaper article from 1985 comments that WordPerfect, then the most powerful word processing program, sold for $495 (or $219 if you could score a mail order discount). That's $1360/$600 in 2022 money. Other popular software, such as Lotus 1-2-3, was up there as well. If you were to buy a new PC clone in the mid to late 80s, it would often cost $2000 in 1980s dollars. Now add a printer: a low-end dot matrix for $300, or a laser for $1500 or even more. A modem: another $300. So the basic system would be $3600, or $9900 in 2022 dollars. If you wanted a nice printer, you're now pushing well over $10,000 in 2022 dollars.
You start to see one barrier here, and also why things like shareware and piracy (if it was indeed even recognized as such) were common in those days.
So you can see that going from a home computer setup (TRS-80, Commodore C64, Apple ][, etc.) to a business-class PC setup was an order of magnitude increase in cost. From there to the high-end minis/mainframes was another order of magnitude (at least!) increase. Eventually there was price pressure on the higher end and things all got better, which is probably why the non-DOS PCs lasted until the early 90s.
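The inflation adjustments quoted above work out to a multiplier of roughly 2.7 between the mid-1980s and 2022; a quick sanity check of the figures (my own rough numbers, not from the original article):
    # Approximate CPI multiplier from 1985 to 2022; an assumption for illustration.
    CPI_FACTOR = 2.72

    for label, price_1985 in [("WordPerfect (list)", 495),
                              ("WordPerfect (mail order)", 219),
                              ("basic PC + printer + modem", 3600)]:
        print(f"{label}: ${price_1985} in 1985 is about ${price_1985 * CPI_FACTOR:,.0f} in 2022")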
Increasing Capabilities
My first exposure to computers in school was in the 4th grade, when I would have been about 9. There was a single Apple ][ machine in that room. I primarily remember playing Oregon Trail on it. The next year, the school added a computer lab. Remember, this is a small rural area, so each graduating class might have about 25 people in it; this lab was shared by everyone in the K-8 building. It was full of some flavor of IBM PS/2 machines running DOS and Netware. There was a dedicated computer teacher too, though I think she was a regular teacher that was given somewhat minimal training on computers. We were going to learn typing that year, but I did so well on the very first typing program that we soon worked out that I could do programming instead. I started going to school early (these machines were far more powerful than the XT at home) and worked on programming projects there.
Eventually my parents bought me a Gateway 486SX/25 with a VGA monitor and hard drive. Wow! This was a whole different world. It may have come with Windows 3.0 or 3.1 on it, but I mainly remember running OS/2 on that machine. More on that below.
Programming
That CoCo II came with a BASIC interpreter in ROM. It came with a large manual, which served as a BASIC tutorial as well. The BASIC interpreter was also the shell, so literally you could not use the computer without at least a bit of BASIC.
Once I had access to a DOS machine, it also had a BASIC interpreter: GW-BASIC. There was a fair bit of software written in BASIC at the time, but most of the more advanced software wasn't. I wondered how these .EXE and .COM programs were written. I could find vague references to DEBUG.EXE, assemblers, and such. But it wasn't until I got a copy of Turbo Pascal that I was able to do that sort of thing myself. Eventually I got Borland C++ and taught myself C as well. A few years later, I wanted to try writing GUI programs for Windows, and bought Watcom C++, which was much cheaper than the competition and could target Windows, DOS (and I think even OS/2).
Notice that, aside from BASIC, none of this was free, and none of it was bundled. You couldn't just download a C compiler, or Python interpreter, or whatnot back then. You had to pay for the ability to write any kind of serious code on the computer you already owned.
The Microsoft Domination
Microsoft came to dominate the PC landscape, and then even the computing landscape as a whole. IBM very quickly lost control over the hardware side of PCs as Compaq and others made clones, but Microsoft has managed (in varying degrees even to this day) to keep a stranglehold on the software, and especially the operating system, side. Yes, there was occasional talk of things like DR-DOS, but by and large the dominant platform came to be the PC, and if you had a PC, you ran DOS (and later Windows) from Microsoft.
For a while, it looked like IBM was going to challenge Microsoft on the operating system front; they had OS/2, and when I switched to it sometime around the version 2.1 era in 1993, it was unquestionably more advanced technically than the consumer-grade Windows from Microsoft at the time. It had Internet support baked in, could run most DOS and Windows programs, and had introduced a replacement for the by-then terrible FAT filesystem: HPFS, in 1988. Microsoft wouldn't introduce a better filesystem for its consumer operating systems until Windows XP in 2001, 13 years later. But more on that story later.
Free Software, Shareware, and Commercial Software
I've covered the high cost of software already. Obviously $500 software wasn't going to sell in the home market. So what did we have?
Mainly, these things:
Public domain software. It was free to use, and if implemented in BASIC, probably had source code with it too.
Shareware
Commercial software (some of it from small publishers was a lot cheaper than $500)
Let's talk about shareware. The idea with shareware was that a company would release a useful program, sometimes limited. You were encouraged to "register", or pay for, it if you liked it and used it. And, regardless of whether you registered it or not, you were told: please copy! Sometimes shareware was fully functional, and registering it got you nothing more than printed manuals and an easy conscience (guilt trips for not registering weren't necessarily very subtle). Sometimes unregistered shareware would have a "nag screen": a delay of a few seconds while they told you to register. Sometimes they'd be limited in some way; you'd get more features if you registered. With games, it was popular to have a trilogy, and release the first episode (inevitably ending with a cliffhanger) as shareware, and the subsequent episodes would require registration. In any event, a lot of software people used in the 80s and 90s was shareware. Also pirated commercial software, though in the earlier days of computing, I think some people didn't even know the difference.
Notice what's missing: Free Software / FLOSS in the Richard Stallman sense of the word. Stallman lived in the big institution world (after all, he worked at MIT), and what he was doing with the Free Software Foundation and GNU project, beginning in 1983, never really filtered into the DOS/Windows world at the time. I had no awareness of it even existing until into the 90s, when I first started getting some hints of it as a port of gcc became available for OS/2. The Internet was what really brought this home, but I'm getting ahead of myself.
I want to say again: FLOSS never really entered the DOS and Windows 3.x ecosystems. You'd see it make a few inroads here and there in later versions of Windows, and more so now that Microsoft has been sort of forced to accept it, but still, reflect on its legacy. What is the software market like in Windows compared to Linux, even today?
Now it is, finally, time to talk about connectivity!
Getting On-Line
What does it even mean to "get on line"? Certainly not connecting to a wifi access point. The answer is, unsurprisingly, complex. But for everyone except the large institutional users, it begins with a telephone.
The telephone system
By the 80s, there was one communication network that already reached into nearly every home in America: the phone system. Virtually every household (note I don't say every person) was uniquely identified by a 10-digit phone number. You could, at least in theory, call up virtually any other phone in the country and be connected in less than a minute.
But I've got to talk about cost. The way things worked in the USA, you paid a monthly fee for a phone line. Included in that monthly fee was unlimited local calling. What is a "local call"? That was an extremely complex question. Generally it meant, roughly, calling within your city. But of course, as you deal with things like suburbs and cities growing into each other (e.g., the Dallas-Ft. Worth metroplex), things got complicated fast. But let's just say for simplicity you could call others in your city.
What about calling people not in your city? That was "long distance", and you paid (often hugely) by the minute for it. Long distance rates were difficult to figure out, but were generally most expensive during business hours and cheapest at night or on weekends. Prices eventually started to come down when competition was introduced for long distance carriers, but even then you often were stuck with a single carrier for long distance calls outside your city but within your state. Anyhow, let's just leave it at this: local calls were virtually free, and long distance calls were extremely expensive.
Getting a modem
I remember getting a modem that ran at either 1200bps or 2400bps. Either way, quite slow; you could often read even plain text faster than the modem could display it. But what was a modem?
A modem hooked up to a computer with a serial cable, and to the phone system. By the time I got one, modems could automatically dial and answer. You would send a command like ATDT5551212 and it would dial 555-1212. Modems had speakers, because often things wouldn't work right, and the telephone system was oriented around speech, so you could hear what was happening. You'd hear it wait for dial tone, then dial, then hopefully the remote end would ring, a modem there would answer, you'd hear the screeching of a handshake, and eventually your terminal would say CONNECT 2400. Now your computer was bridged to the other; anything going out your serial port was encoded as sound by your modem and decoded at the other end, and vice-versa.
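The Hayes AT command sequence described here still works with any serial modem; a minimal sketch using the third-party pyserial library (the device path, speed and phone number are placeholders for illustration):
    import serial  # third-party "pyserial" package

    with serial.Serial("/dev/ttyS0", 2400, timeout=60) as modem:
        modem.write(b"ATDT5551212\r")   # dial 555-1212 using tone dialling
        response = modem.readline()     # e.g. b"CONNECT 2400\r\n" on success
        print(response.decode(errors="replace").strip())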
But what, exactly, was the other end?
It might have been another person at their computer. Turn on local echo, and you can see what they did. Maybe you'd send files to each other. But in my case, the answer was different: PC Magazine.
PC Magazine and CompuServe
Starting around 1986 (so I would have been about 6 years old), I got to read PC Magazine. My dad would bring home copies that were being discarded at his office for me to read, and I think eventually bought me a subscription directly. This was not just a standard magazine; it ran something like 350-400 pages an issue, and came out every other week. This thing was a monster. It had reviews of hardware and software, descriptions of upcoming technologies, pages and pages of ads (that often had some degree of being informative to them). And they had sections on programming. Many issues would talk about BASIC or Pascal programming, and there'd be a utility in most issues. What do I mean by "a utility in most issues"? Did they include a floppy disk with software?
No, of course not. There was a literal program listing printed in the magazine. If you wanted the utility, you had to type it in. And a lot of them were written in assembler, so you had to have an assembler. An assembler, of course, was not free and I didn't have one. Or maybe they wrote it in Microsoft C, and I had Borland C, and (of course) they weren't compatible. Sometimes they would list the program sort of in binary: line after line of a BASIC program, with lines like "64, 193, 253, 0, 53, 0, 87" that you would type in for hours, hopefully correctly. Running the BASIC program would, if you got it correct, emit a .COM file that you could then run. They did have a rudimentary checksum system built in, but it wasn't even a CRC, so something like swapping two numbers would go unnoticed until the program mysteriously hung.
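The weakness is easy to demonstrate: a simple additive checksum does not change when two values are transposed, whereas a CRC does. A small illustration (my own example, not PC Magazine's actual scheme):
    import zlib

    original = [64, 193, 253, 0, 53, 0, 87]
    typo = [64, 253, 193, 0, 53, 0, 87]   # two adjacent values swapped while typing

    # A plain sum cannot tell the difference...
    print(sum(original) == sum(typo))                               # True
    # ...but a CRC over the same bytes can.
    print(zlib.crc32(bytes(original)) == zlib.crc32(bytes(typo)))   # False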
Eventually they teamed up with CompuServe to offer a limited slice of CompuServe for the purpose of downloading PC Magazine utilities. This was called PC MagNet. I am foggy on the details, but I believe that for a time you could connect to the limited PC MagNet part of CompuServe for free (after the cost of the long-distance call, that is) rather than paying for CompuServe itself (because, OF COURSE, that also charged you by the minute). So in the early days, I would get special permission from my parents to place a long distance call, and after some nerve-wracking minutes in which we were aware every minute was racking up charges, I could navigate the menus, download what I wanted, and log off immediately.
I still, incidentally, mourn what PC Magazine became. As with computing generally, it followed the mass market. It lost its deep technical chops, cut its programming columns, stopped talking about things like how SCSI worked, and so forth. By the time it stopped printing in 2009, it was no longer a square-bound 400-page behemoth, but rather looked more like a copy of Newsweek, but with less depth.
Continuing with CompuServe
CompuServe was a much larger service than just PC MagNet. Eventually, our family got a subscription. It was still an expensive and scarce resource; I'd call it only after hours when the long-distance rates were cheapest. Everyone had a numerical username separated by commas; mine was 71510,1421. CompuServe had forums, and files. Eventually I would use TapCIS to queue up things I wanted to do offline, to minimize phone usage online.
CompuServe eventually added a gateway to the Internet. For the sum of somewhere around $1 a message, you could send or receive an email from someone with an Internet email address! I remember the thrill of one time, as a kid of probably 11 years, sending a message to one of the editors of PC Magazine and getting a kind, if brief, reply back!
But inevitably I had
The Godzilla Phone Bill
Yes, one month I became lax in tracking my time online. I ran up my parents' phone bill. I don't remember how high, but I remember it was hundreds of dollars, a hefty sum at the time. As I watched Jason Scott's BBS Documentary, I realized how common an experience this was. I think this was the end of CompuServe for me for a while.
Toll-Free Numbers
I lived near a town with a population of 500. Not even IN town, but near town. The calling area included another town with a population of maybe 1500, so all told, there were maybe 2000 people total I could talk to with a local call (though far fewer numbers, because remember, telephones were allocated by the household). There were, as far as I know, zero modems that were a local call (aside from one that belonged to a friend I met in around 1992). So basically everything was long-distance.
But there was a special feature of the telephone network: toll-free numbers. Normally when calling long-distance, you, the caller, paid the bill. But with a toll-free number, beginning with 1-800, the recipient paid the bill. These numbers almost inevitably belonged to corporations that wanted to make it easy for people to call. Sales and ordering lines, for instance. Some of these companies started to set up modems on toll-free numbers. There were few of these, but they existed, so of course I had to try them!
One of them was a company called PennyWise that sold office supplies. They had a toll-free line you could call with a modem to order stuff. Yes, online ordering before the web! I loved office supplies. And, because I lived far from a big city, if the local K-Mart didn't have it, I probably couldn't get it. Of course, the interface was entirely text, but you could search for products and place orders with the modem. I had loads of fun exploring the system, and actually ordered things from them and probably actually saved money doing so. With the first order they shipped a monster full-color catalog. That thing must have been 500 pages, like the Sears catalogs of the day. Every item had a part number, which streamlined ordering through the modem.
Inbound FAXes
By the 90s, a number of modems became able to send and receive FAXes as well. For those that don't know, a FAX machine was essentially a special modem. It would scan a page and digitally transmit it over the phone system, where it would (at least in the early days) be printed out in real time (because the machines didn't have the memory to store an entire page as an image). Eventually, PC modems integrated FAX capabilities.
There still wasn't anything useful I could do locally, but there were ways I could get other companies to FAX something to me. I remember two of them.
One was for US Robotics. They had an on-demand FAX system. You'd call up a toll-free number, which was an automated IVR system. You could navigate through it and select various documents of interest to you: spec sheets and the like. You'd key in your FAX number, hang up, and US Robotics would call YOU and FAX you the documents you wanted. Yes! I was talking to a computer (of a sort) at no cost to me!
The New York Times also ran a service for a while called TimesFax. Every day, they would FAX out a page or two of summaries of the day's top stories. This was pretty cool in an era in which I had no other way to access anything from the New York Times. I managed to sign up for TimesFax (I have no idea how, anymore) and for a while I would get a daily FAX of their top stories. When my family got its first laser printer, I could then even print these FAXes complete with the gothic New York Times masthead. Wow! (OK, so technically I could print it on a dot-matrix printer also, but graphics on a 9-pin dot matrix is a kind of pain that is a whole other article.)
My own phone line
Remember how I discussed that phone lines were allocated per household? This was a problem for a lot of reasons:
Anybody that tried to call my family while I was using my modem would get a busy signal (unable to complete the call)
If anybody in the house picked up the phone while I was using it, that would degrade the quality of the ongoing call and either mess up or disconnect the call in progress. In many cases, that could cancel a file transfer (which wasn't necessarily easy or possible to resume), prompting howls of annoyance from me.
Generally we all had to work around each other
So eventually I found various small jobs and used the money I made to pay for my own phone line and my own long distance costs. Eventually I upgraded to a 28.8Kbps US Robotics Courier modem even! Yes, you heard it right: I got a job and a bank account so I could have a phone line and a faster modem. Uh, isn't that why every teenager gets a job?
Now my local friend and I could call each other freely, at least on my end (I can't remember if he had his own phone line too). We could exchange files using HS/Link, which had the added benefit of allowing split-screen chat even while a file transfer was in progress. I'm sure we spent hours chatting to each other keyboard-to-keyboard while sharing files with each other.
Technology in Schools
By this point in the story, we're in the late 80s and early 90s. I'm still using PC-style OSs at home; OS/2 in the later years of this period, DOS or maybe a bit of Windows in the earlier years. I mentioned that they let me work on programming at school starting in 5th grade. It was soon apparent that I knew more about computers than anybody on staff, and I started getting pulled out of class to help teachers or administrators with vexing school problems. This continued until I graduated from high school, incidentally often to my enjoyment, and the annoyance of one particular teacher who, I must say, I was fine with annoying in this way.
That's not to say that there was institutional support for what I was doing. It was, after all, a small school. Larger schools might have introduced BASIC or maybe Logo in high school. But I had already taught myself BASIC, Pascal, and C by the time I was somewhere around 12 years old. So I wouldn't have had any use for that anyhow.
There were programming contests occasionally held in the area. Schools would send teams. My school didn't really send anybody, but I went as an individual. One of them was run by a local college (but for jr. high or high school students). Years later, I met one of the professors that ran it. He remembered me, and that day, better than I did. The programming contest had problems one could solve in BASIC or Logo. I knew nothing about what to expect going into it, but I had lugged my computer and screen along, and asked him, "Can I write my solutions in C?" He was, apparently, stunned, but said sure, go for it. I took first place that day, leading to some rather confused teams from much larger schools.
The Netware network that the school had was, as these generally were, itself isolated. There was no link to the Internet or anything like it. Several schools across three local counties eventually invested in a fiber-optic network linking them together. This built a larger, but still closed, network. Its primary purpose was to allow students to be exposed to a wider variety of classes at high schools. Participating schools had an "ITV room", outfitted with cameras and mics. So students at any school could take classes offered over ITV at other schools. For instance, only my school taught German classes, so people at any of those participating schools could take German. It was an early Zoom room. But alongside the TV signal, there was enough bandwidth to run some Netware frames. By about 1995 or so, this let one of the schools purchase some CD-ROM software that was made available on a file server and could be accessed by any participating school. Nice! But Netware was mainly about file and printer sharing; there wasn't even a facility like email, at least not on our deployment.
BBSs
My last hop before the Internet was the BBS. A BBS was a computer program, usually run by a hobbyist like me, on a computer with a modem connected. Callers would call it up, and they'd interact with the BBS. Most BBSs had discussion groups like forums and file areas. Some also had games. I, of course, continued to have that most vexing of problems: they were all long-distance.
There were some ways to help with that, chiefly QWK and BlueWave. These, somewhat like TapCIS in the CompuServe days, let me download new message posts for reading offline, and queue up my own messages to send later. QWK and BlueWave didn't help with file downloading, though.
BBSs get networked
BBSs were an interesting thing. You'd call up one, and inevitably somewhere in the file area would be a BBS list. Download the BBS list and you've suddenly got a list of phone numbers to try calling. All of them were long distance, of course. You'd try calling them at random and have a success rate of maybe 20%. The other 80% would be defunct; you might get the dreaded "this number is no longer in service" or the even more dreaded angry human answering the phone (and of course a modem can't talk to a human, so they'd just get silence for probably the nth time that week). The phone company cared nothing about BBSs and recycled their numbers just as fast as any others.
To talk to various people, or participate in certain discussion groups, you'd have to call specific BBSs. That's annoying enough in the general case, but even more so for someone paying long distance for it all, because it takes a few minutes to establish a connection to a BBS: handshaking, logging in, menu navigation, etc.
But BBSs started talking to each other. The earliest successful such effort was FidoNet, and for the duration of the BBS era, it remained by far the largest. FidoNet was analogous to the UUCP that the institutional users had, but ran on the much cheaper PC hardware. Basically, BBSs that participated in FidoNet would relay email, forum posts, and files between themselves overnight. Eventually, as with UUCP, by hopping through this network, messages could reach around the globe, and forums could have worldwide participation asynchronously, long before they could link to each other directly via the Internet. It was almost entirely volunteer-run.
Running my own BBS
At age 13, I eventually chose to set up my own BBS. It ran on my single phone line, so of course when I was dialing up something else, nobody could dial up me. Not that this was a huge problem; in my town of 500, I probably had a good 1 or 2 regular callers in the beginning.
In the PC era, there was a big difference between a server and a client. Server-class software was expensive and rare. Maybe in later years you had an email client, but an email server would be completely unavailable to you as a home user. But with a BBS, I could effectively run a server. I even ran serial lines in our house so that the BBS could be connected from other rooms! Since I was running OS/2, the BBS didn't tie up the computer; I could continue using it for other things.
FidoNet had an Internet email gateway. This one, unlike CompuServe's, was free. Once I had a BBS on FidoNet, you could reach me from the Internet using the FidoNet address. This didn't support attachments, but then email of the day didn't really, either.
Various others outside Kansas ran FidoNet distribution points. I believe one of them was mgmtsys; my memory is quite vague, but I think they offered a direct gateway and I would call them to pick up Internet mail via FidoNet protocols, but I'm not at all certain of this.
Pros and Cons of the Non-Microsoft World
As mentioned, Microsoft was and is the dominant operating system vendor for PCs. But I left that world in 1993, and here, nearly 30 years later, have never really returned. I got an operating system with more technical capabilities than the DOS and Windows of the day, but the tradeoff was a much smaller software ecosystem. OS/2 could run DOS programs, but it ran OS/2 programs a lot better. So if I were to run a BBS, I wanted one that had a native OS/2 version, limiting me to a small fraction of available BBS server software. On the other hand, since OS/2 was a fully 32-bit operating system, there started to be OS/2 ports of certain software with a Unix heritage; most notably for me at the time, gcc. At some point, I eventually came across the RMS essays and started to be hooked.
Internet: The Hunt Begins
I certainly was aware that the Internet was out there and interesting. But the first problem was: how the heck do I get connected to the Internet?
Learning Link and Gopher
ISPs weren't really a thing; the first one in my area (though still a long-distance call) started in, I think, 1994. One service that one of my teachers got me hooked up with was Learning Link. Learning Link was a nationwide collaboration of PBS stations and schools, designed to build on the educational mission of PBS. The nearest Learning Link station was more than a 3-hour drive away, but critically, they had a toll-free access number, and my teacher convinced them to let me use it. I connected via a terminal program and a modem, like with most other things. I don't remember much about it, but I do remember a very important thing it had: Gopher. That was my first experience with Gopher.
Learning Link was hosted by a Unix derivative (Xenix), but it didn't exactly give everyone a shell. I seem to recall it didn't have open FTP access either. The Gopher client had FTP access at some point; I don't recall for sure if it did then. If it did, then when a Gopher server referred to an FTP server, I could get to it. (I am unclear at this point if I could key in an arbitrary FTP location, or knew how, at that time.) I also had email access there, but I don't recall exactly how; probably Pine. If that's correct, that would have dated my Learning Link access as no earlier than 1992.
I think my access time to Learning Link was limited. And, since the only way to get out on the Internet from there was Gopher and Pine, I was somewhat limited in terms of technology as well. I believe that telnet services, for instance, weren't available to me.
Computer labs
There was one place that tended to have Internet access: colleges and universities. In 7th grade, I participated in a program that resulted in me being invited to visit Duke University, and in 8th grade, I participated in National History Day, resulting in a trip to visit the University of Maryland. I probably sought out computer labs at both of those. My most distinct memory was finding my way into a computer lab at one of those universities, and it was full of NeXT workstations. I had never seen or used NeXT before, and had no idea how to operate it. I had brought a box of floppy disks, unaware that the DOS disks probably weren't compatible with NeXT.
Closer to home, a small college had a computer lab that I could also visit. I would go there in summer, or when it wasn't otherwise in use, with my stack of floppies. I remember downloading disk images of FLOSS operating systems: FreeBSD, Slackware, or Debian, at the time. The hash marks from the DOS-based FTP client would creep across the screen as the 1.44MB disk images slowly downloaded. telnet was also available on those machines, so I could telnet to things like public-access Archie servers and libraries (though not Gopher). Still, FTP and telnet access opened up a lot, and I learned quite a bit in those years.
Continuing the Journey
At some point, I got a copy of the Whole Internet User's Guide and Catalog, published in 1994. I still have it. If I hadn't already figured it out by then, I certainly became aware from it that Unix was the dominant operating system on the Internet. The examples in Whole Internet covered FTP, telnet, gopher, all assuming the user somehow got to a Unix prompt. The web was introduced about 300 pages in; clearly viewed as something that wasn't page 1 material. And it covered the command-line www client before introducing the graphical Mosaic. Even then, though, the book highlighted Mosaic's utility as a front-end for Gopher and FTP, and even the ability to launch telnet sessions by clicking on links. But having a copy of the book didn't equate to having any way to run Mosaic. The machines in the computer lab I mentioned above all ran DOS and were incapable of running a graphical browser. I had no SLIP or PPP (both ways to run Internet traffic over a modem) connectivity at home. In short, the Web was something for the large institutional users at the time.
CD-ROMs
As CD-ROMs came out, with their huge (for the day) 650MB capacity, various companies started collecting software that could be downloaded on the Internet and selling it on CD-ROM. The two most popular ones were Walnut Creek CD-ROM and Infomagic. One could buy extensive Shareware and gaming collections, and then even entire Linux and BSD distributions. Although not exactly an Internet service per se, it was a way of bringing what may ordinarily only be accessible to institutional users into the home computer realm.
Free Software Jumps In
As I mentioned, by the mid 90s, I had come across RMS's writings about free software, most probably his 1992 essay Why Software Should Be Free. (Please note, this is not a commentary on the more recently-revealed issues surrounding RMS, but rather his writings and work as I encountered them in the 90s.) The notion of a Free operating system, not just in cost but in openness, was incredibly appealing. Not only could I tinker with it to a much greater extent due to having source for everything, but it included so much software that I'd otherwise have to pay for. Compilers! Interpreters! Editors! Terminal emulators! And, especially, server software of all sorts. There'd be no way I could afford or run Netware, but with a Free Unixy operating system, I could do all that. My interest was obviously piqued. Add to that the fact that I could actually participate and contribute, and I was about to become hooked on something that I've stayed hooked on for decades.
But then the question was: which Free operating system? Eventually I chose FreeBSD to begin with; that would have been sometime in 1995. I don't recall the exact reasons for that. I remember downloading Slackware install floppies, and probably the fact that Debian wasn't yet at 1.0 scared me off for a time. FreeBSD's fantastic Handbook (far better than anything I could find for Linux at the time) was no doubt also a factor.
The de Raadt Factor
Why not NetBSD or OpenBSD? The short answer is Theo de Raadt. Somewhere in this time, when I was somewhere between 14 and 16 years old, I asked some questions comparing NetBSD to the other two free BSDs. This was on a NetBSD mailing list, but for some reason Theo saw it and got a flame war going, which CC'd me. Now keep in mind that even if NetBSD had a web presence at the time, it would have been minimal, and I would have (not all that unusually for the time) had no way to access it. I was certainly not aware of the, shall we say, acrimony between Theo and NetBSD. While I had certainly seen an online flamewar before, this took on a different and more disturbing tone; months later, Theo randomly emailed me under the subject "SLIME" saying that I was, well, "SLIME". I seem to recall periodic emails from him thereafter reminding me that he hates me and that he had blocked me. (Disclaimer: I have poor email archives from this period, so the full details are lost to me, but I believe I am accurately conveying these events from over 25 years ago.)
This was a surprise, and an unpleasant one. I was trying to learn, and while it is possible I didn't understand some aspect or other of netiquette (or Theo's personal hatred of NetBSD) at the time, still that is not a reason to flame a 16-year-old (though he would have had no way to know my age). This didn't leave any kind of scar, but did leave a lasting impression; to this day, I am particularly concerned with how FLOSS projects handle poisonous people. Debian, for instance, has come a long way in this over the years, and even Linus Torvalds has turned over a new leaf. I don't know if Theo has.
In any case, I didn't use NetBSD then. I did try it periodically in the years since, but never found it compelling enough to justify a large switch from Debian. I never tried OpenBSD for various reasons, but one of them was that I didn't want to join a community that tolerates behavior such as Theo's from its leader.
Moving to FreeBSD
Moving from OS/2 to FreeBSD was final. That is, I didn't have enough hard drive space to keep both. I also didn't have the backup capacity to back up OS/2 completely. My BBS, which ran Virtual BBS (and at some point also AdeptXBBS), was deleted and reincarnated in a different form. My BBS was a member of both FidoNet and VirtualNet; the latter was specific to VBBS, and had to be dropped. I believe I may have also had to drop the FidoNet link for a time. This was the biggest change of computing in my life to that point. The earlier experiences hadn't literally destroyed what came before. OS/2 could still run my DOS programs. Its command shell was quite DOS-like. It ran Windows programs. I was going to throw all that away and leap into the unknown.
I wish I had saved a copy of my BBS; I would love to see the messages I exchanged back then, or see its menu screens again. I have little memory of what it looked like. But other than that, I have no regrets. Pursuing Free, Unixy operating systems brought me a lot of enjoyment and a good career.
That's not to say it was easy. All the problems of not being in the Microsoft ecosystem were magnified under FreeBSD and Linux. In the days before EDID, monitor timings had to be calculated manually and you risked destroying your monitor if you got them wrong. Word processing and spreadsheet software was pretty much not there for FreeBSD or Linux at the time; I was therefore forced to learn LaTeX and actually appreciated that. Software like PageMaker or CorelDraw was certainly nowhere to be found for those free operating systems either. But I got a ton of new capabilities.
I mentioned the BBS didn't shut down, and indeed it didn't. I ran what was surely a supremely unique oddity: a free, dial-in Unix shell server in the middle of a small town in Kansas. I'm sure I provided things such as pine for email and some help text and maybe even printouts for how to use it. The set of callers slowly grew over the time period, in fact.
And then I got UUCP.
Enter UUCP
Even throughout all this, there was no local Internet provider and things were still long distance. I had Internet Email access via assorted strange routes, but they were all strange. And, I wanted access to Usenet. In 1995, it happened.
The local ISP I mentioned offered UUCP access. Though I couldn't afford the dialup shell (or later, SLIP/PPP) that they offered due to long-distance costs, UUCP's very efficient batched processes looked doable. I believe I established that link when I was 15, so in 1995.
I worked to register my domain, complete.org, as well. At the time, the process was a bit lengthy and involved downloading a text file form, filling it out in a precise way, sending it to InterNIC, and probably mailing them a check. Well I did that, and in September of 1995, complete.org became mine. I set up sendmail on my local system, as well as INN to handle the limited Usenet newsfeed I requested from the ISP. I even ran Majordomo to host some mailing lists, including some that were surprisingly high-traffic for a few-times-a-day long-distance modem UUCP link!
The modem client programs for FreeBSD were somewhat less advanced than for OS/2, but I believe I wound up using Minicom or Seyon to continue to dial out to BBSs and, I believe, continue to use Learning Link. So all the while I was setting up my local BBS, I continued to have access to the text Internet, consisting of chiefly Gopher for me.
Switching to Debian
I switched to Debian sometime in 1995 or 1996, and have been using Debian as my primary OS ever since. I continued to offer shell access, but added the WorldVU Atlantis menuing BBS system. This provided a return to a more BBS-like interface (by default; shell was still an option) as well as some BBS door games such as LoRD and TradeWars 2002, running under DOS emulation.
I also continued to run INN, and ran ifgate to allow FidoNet echomail to be presented into INN Usenet-like newsgroups, and netmail to be gated to Unix email. This worked pretty well. The BBS continued to grow in these days, peaking at about two dozen total user accounts, and maybe a dozen regular users.
Dial-up access availability
I believe it was in 1996 that dial up PPP access finally became available in my small town. What a thrill! FINALLY! I could now FTP, use Gopher, telnet, and the web all from home. Of course, it was at modem speeds, but still.
(Strangely, I have a memory of accessing the Web using WebExplorer from OS/2. I don't know exactly why; it's possible that by this time, I had upgraded to a 486 DX2/66 and was able to reinstall OS/2 on the old 25MHz 486, or maybe something was wrong with the timeline of my memories from 25 years ago above. Or perhaps I made the occasional long-distance call somewhere before I ditched OS/2.)
Gopher sites still existed at this point, and I could access them using Netscape Navigator, which likely became my standard Gopher client at that point. I don't recall using the UMN text-mode gopher client locally at that time, though it's certainly possible I did.
The city
Starting when I was 15, I took computer science classes at Wichita State University. The first one was a class in the summer of 1995 on C++. I remember being worried about being good enough for it (I was, after all, just after my HS freshman year and had never taken the prerequisite C class). I loved it and got an A! By 1996, I was taking more classes.
In 1996 or 1997 I stayed in Wichita during the day due to having more than one class. So, what would I do then but enjoy the computer lab? The CS dept. had two of them: one that had NCD X terminals connected to a pair of SunOS servers, and another one running Windows. I spent most of the time in the Unix lab with the NCDs; I'd use Netscape or pine, write code, enjoy the University's fast Internet connection, and so forth.
In 1997 I had graduated high school and that summer I moved to Wichita to attend college. As was so often the case, I shut down the BBS at that time. It would be 5 years until I again dealt with Internet at home in a rural community.
By the time I moved to my apartment in Wichita, I had stopped using OS/2 entirely. I have no memory of ever having OS/2 there. Along the way, I had bought a Pentium 166, and then the most expensive piece of computing equipment I have ever owned: a DEC Alpha, which, of course, ran Linux.
ISDN
I must have used dialup PPP for a time, but I eventually got a job working for the ISP I had used for UUCP, and then PPP. While there, I got a 128Kbps ISDN line installed in my apartment, and they gave me a discount on the service for it. That was around 3x the speed of a modem, and crucially was always on and gave me a public IP. No longer did I have to use UUCP; now I got to host my own things! By at least 1998, I was running a web server on www.complete.org, and I had an FTP server going as well.
Even Bigger Cities
In 1999 I moved to Dallas, and there got my first broadband connection: an ADSL link at, I think, 1.5Mbps! Now that was something! But it had some reliability problems. I eventually put together a server and had it hosted at an acquaintance's place who had SDSL in his apartment. Within a couple of years, I had switched to various kinds of proper hosting for it, but that is a whole other article.
In Indianapolis, I got a cable modem for the first time, with even better speeds but prohibitions on running servers on it. Yuck.
Challenges
Being non-Microsoft continued to have challenges. Until the advent of Firefox, a web browser was one of the biggest. While Netscape supported Linux on i386, it didn t support Linux on Alpha. I hobbled along with various attempts at emulators, old versions of Mosaic, and so forth. And, until StarOffice was open-sourced as Open Office, reading Microsoft file formats was also a challenge, though WordPerfect was briefly available for Linux.
Over the years, I have become used to the Linux ecosystem. Perhaps I use Gimp instead of Photoshop and digiKam instead of, well, whatever somebody would use on Windows. But I get ZFS, and containers, and so much that isn't available there.
Yes, I know Apple never went away and is a thing, but for most of the time period I discuss in this article, at least after the rise of DOS, it was niche compared to the PC market.
Back to Kansas
In 2002, I moved back to Kansas, to a rural home near a different small town in the county next to where I grew up. Over there, it was back to dialup at home, but I had faster access at work. I didn't much care for this, and thus began a 20+-year effort to get broadband in the country. At first, I got a wireless link, which worked well enough in the winter, but had serious problems in the summer when the trees leafed out. Eventually DSL became available locally (highly unreliable, but still, it was something). Then I moved back to the community I grew up in, a few miles from where I grew up. Again I got DSL, a bit better. But after some years, being at the end of the run of DSL meant I had poor speeds and reliability problems. I eventually switched to various wireless ISPs, which continues to the present day; while people in cities can get Gbps service, I can get, at best, about 50Mbps. Long-distance fees are gone, but the speed disparity remains.
Concluding Reflections
I am glad I grew up where I did; the strong community has a lot of advantages I don't have room to discuss here. In a number of very real senses, having no local services made things a lot more difficult than they otherwise would have been. However, perhaps I could say that I also learned a lot through the need to come up with inventive solutions to those challenges. To this day, I think a lot about computing in remote environments: partially because I live in one, and partially because I enjoy visiting places that are remote enough that they have no Internet, phone, or cell service whatsoever. I have written articles like Tools for Communicating Offline and in Difficult Circumstances based on my own personal experience. I instinctively think about making protocols robust in the face of various kinds of connectivity failures because I experience various kinds of connectivity failures myself.
(Almost) Everything Lives On
In 2002, Gopher turned 10 years old. It had probably been about 9 or 10 years since I had first used Gopher, which was the first way I got on live Internet from my house. It was hard to believe. By that point, I had an always-on Internet link at home and at work. I had my Alpha, and probably also at least PCMCIA Ethernet for a laptop (many laptops had modems by the 90s also). Despite its popularity in the early 90s, Gopher was mostly forgotten less than 10 years after it came on the scene and started to unify the Internet.
And it was at that moment that I decided to try to resurrect it. The University of Minnesota finally released it under an Open Source license. I wrote the first new gopher server in years, pygopherd, and introduced gopher to Debian. Gopher lives on; there are now quite a few Gopher clients and servers out there, newly started post-2002. The Gemini protocol can be thought of as something akin to Gopher 2.0, and it too has a small but blossoming ecosystem.
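Part of Gopher's charm is how little protocol there is: a client opens a TCP connection to port 70, sends a selector followed by CRLF, and reads until the server closes the connection. A minimal client sketch in Python (gopher.floodgap.com is a well-known public server, assuming it is still reachable):
    import socket

    def gopher_fetch(host: str, selector: str = "", port: int = 70) -> str:
        """Fetch a Gopher menu or document: send the selector, read until EOF."""
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(selector.encode() + b"\r\n")
            chunks = []
            while data := sock.recv(4096):
                chunks.append(data)
        return b"".join(chunks).decode(errors="replace")

    print(gopher_fetch("gopher.floodgap.com"))  # prints the server's top-level menu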
Archie, the old FTP search tool, is dead though. Same for WAIS and a number of the other pre-web search tools. But still, even FTP lives on today.
And BBSs? Well, they didn't go away either. Jason Scott's fabulous BBS documentary looks back at the history of the BBS, while Back to the BBS from last year talks about the modern BBS scene. FidoNet somehow is still alive and kicking. UUCP still has its place and has inspired a whole string of successors. Some, like NNCP, are clearly direct descendants of UUCP. Filespooler lives in that ecosystem, and you can even see UUCP concepts in projects as far afield as Syncthing and Meshtastic. Usenet still exists, and you can now run Usenet over NNCP just as I ran Usenet over UUCP back in the day (which you can still do as well). Telnet, of course, has been largely supplanted by ssh, but the concept is more popular now than ever, as Linux has made ssh available on everything from the Raspberry Pi to Android.
And I still run a Gopher server, looking pretty much like it did in 2002.
This post also has a permanent home on my website, where it may be periodically updated.
Since the first week of April 2022 I have (finally!) changed my company car from
a plug-in hybrid to a fully electric car. My new ride, for the next two years, is a BMW i4 M50 in Aventurine Red metallic. An elegant car with a very deep and memorable color, insanely powerful (544 hp/795 Nm), sub-4 second 0-100 km/h, a large 84 kWh battery (80 kWh usable), charging at up to 210 kW, a top speed of 225 km/h and also very efficient (which turned out to matter most on this trip), with a WLTP range of 510 km and an EVDB real range of 435 km. The car also has performance tyres (Hankook Ventus S1 evo3 245/45R18 100Y XL in front and 255/45R18 103Y XL in the rear, all at the recommended 2.5 bar) that somewhat reduce efficiency.
So I wanted to document and describe how it was for me to travel ~2000 km (one way) with this electric car from the south of Germany to the north of Latvia. I have done this trip many times before, since I live in Germany now and travel back to my relatives in Latvia 1-2 times per year. This was the first time I made this trip in an electric car. And as this trip includes travelling both in Germany (where BEV infrastructure is the best in the world) and across Eastern/Northern Europe, I believe this can be interesting to a few people out there.
When I travelled this route with a gasoline/diesel car, I would normally drive for two days with an intermediate stop somewhere around Warsaw, with about 12 hours of travel time each day. This would include a couple of bathroom stops each day, at least one longer lunch stop and 3-4 refueling stops on top of that. It would use at least 6 liters of fuel per 100 km on average, for a total of about 270 liters for the whole trip (or about 540 € just in fuel costs, nowadays). My (personal) quirk is that both fuel and recharging of my (business) car inside Germany is actually paid by my employer, so it is useful for me to charge up (or fill up) at the last station in Germany before driving on.
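For reference, a quick sketch of the arithmetic behind those fuel numbers, assuming a fuel price of roughly 2 € per liter:

```python
# Rough arithmetic behind the fuel numbers above; the fuel price is an assumption.
distance_km = 4400              # roughly 2000 km each way, plus some local driving
consumption_l_per_100km = 6
fuel_price_eur_per_l = 2.0      # assumed current price

litres = distance_km / 100 * consumption_l_per_100km
print(f"about {litres:.0f} l of fuel, roughly {litres * fuel_price_eur_per_l:.0f} EUR")
# -> about 264 l and 528 EUR, in line with the ~270 l / ~540 EUR figures above
```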
The plan for this trip was made in a similar way as when travelling with a gasoline car: travel as fast as possible on the German Autobahn network to the last charging stop on the A4 near Görlitz, charge up there as much as reasonable, then drive to a hotel in Warsaw, charge there overnight, and head north towards the Ionity chargers in Lithuania, from where reaching the final target in the north of Latvia should be possible.
How did this plan meet the reality?
Travelling inside Germany with an electric car was basically perfect. The most efficient way involves driving fast and hard, with a top speed of even 180 km/h (where possible due to speed limits and traffic). The BMW i4 is very efficient at high speeds, with consumption maxing out at 28 kWh/100km when you actually drive at that speed all the time. In practice, on the first legs of this trip we saw consumption of 20.8-22.2 kWh/100km. The more traffic, speed limits and roadworks there are, the lower the average speed and also the lower the consumption. With this kind of consumption we could comfortably drive for 2 hours as fast as we could, then pick any fast charger along the route, and after 26 minutes at the charger (50 kWh charged in total) we'd be ready to drive for another 2 hours. This lines up very well with the recommended rest stops for biological reasons (bathroom, water or coffee, a bit of movement to get the blood circulating) and is very close to what I had to do anyway with a gasoline car. With a gasoline car I had to refuel first, then park, then go to the bathroom and so on; with an electric car I can do all of that while the car is charging, so in the end the total time for a stop is very similar. Also note that there was a crazy heat wave going on: the temperature outside was about 34°C minimum the whole day, hitting 40°C at one point of the trip, so a lot of power was used for cooling. The car has a heat pump as standard, but it still had to work hard to keep us cool in the sun.
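A back-of-the-envelope sketch of that drive/charge rhythm, using the consumption we observed and an assumed average speed of about 130 km/h:

```python
# Back-of-the-envelope sketch of the drive/charge rhythm; the average speed is an assumption.
consumption_kwh_per_100km = 21.5   # observed 20.8-22.2 kWh/100km on the German legs
charged_kwh = 50                   # a typical fast-charging stop
avg_speed_kmh = 130                # assumed average incl. traffic and roadworks

range_km = charged_kwh / consumption_kwh_per_100km * 100
drive_time_h = range_km / avg_speed_kmh
print(f"{charged_kwh} kWh gives about {range_km:.0f} km, "
      f"or roughly {drive_time_h:.1f} h of driving before the next stop")
# -> about 233 km and 1.8 h, matching the "charge ~26 min, drive ~2 h" rhythm
```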
The car was able to plan a charging route with all the required charging stops, and it had all the good options (like multiple intermediate stops) that many other cars (hi Tesla) and mobile apps (hi Google and Apple) do not have yet. There are a couple of bugs with the charging route and the display of the current route guidance; those are already fixed and will be delivered with the July 2022 over-the-air update. Another good alternative is ABRP (A Better Route Planner), which was specifically designed for electric car routing along the best route for charging. Most phone apps (like Google Maps) have no idea about your specific electric car: they know nothing about the battery capacity or the charging curve, and they are missing key live data as well, namely the current consumption and the remaining energy in the battery. ABRP is different: it has data and profiles for almost all electric cars and can also be linked to live vehicle data, either via an OBD dongle or via the new Tronity cloud service. Tronity reads data from a vehicle-specific cloud service, such as the MyBMW service, saves it, tracks history and also re-transmits it to ABRP for live navigation planning. ABRP allows options and settings that no car or app offers, for example saying that you want to stop at a particular place for an hour or until the battery is charged to 90%, or saying that you have specific charging cards and only want to stop at chargers that support those. Both the car and ABRP also support alternate routes, even with multiple intermediate stops. In comparison, route planning by Google Maps, Apple Maps, Waze or even Tesla does not really come close.
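To illustrate why car-specific and live data matter, here is a minimal sketch of the most basic check any EV route planner has to make: whether the next charger is reachable with a safety reserve. The capacity, consumption and reserve values are just example assumptions:

```python
# Minimal reachability check an EV planner needs; capacity, consumption and
# reserve are example assumptions, not values read from the car.
def can_reach(distance_km: float,
              soc_percent: float,
              usable_kwh: float = 80.0,
              consumption_kwh_per_100km: float = 21.0,
              reserve_percent: float = 10.0) -> bool:
    """True if the next charger is reachable while keeping a reserve."""
    available_kwh = usable_kwh * (soc_percent - reserve_percent) / 100
    needed_kwh = distance_km / 100 * consumption_kwh_per_100km
    return available_kwh >= needed_kwh

print(can_reach(distance_km=230, soc_percent=85))  # True
print(can_reach(distance_km=230, soc_percent=60))  # False -> plan an earlier stop
```

A real planner repeats this check for every leg and additionally models the charging curve, which is exactly the data that generic map apps lack.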
After charging up at the last German fast charger, the more interesting part of the trip started. In Poland the density of high performance chargers (HPC) is much lower than in Germany. There are many chargers (west of Warsaw), but the vast majority of them are (relatively) slow 50 kW chargers. And that is the difference between putting 50 kWh into the car in 23-26 minutes or in 60 minutes. It does not seem like much, but the key bit is that for the first 20 minutes there is always something that needs doing anyway; after that you are done and just waiting for the car, and whether that takes 4 more minutes or 40 more minutes is a big, perceptual, difference. So using an HPC is much, much preferable. We therefore set the Ionity charger near Lodz as our intermediate target, and the car suggested an intermediate stop at a Greenway charger by Katy Wroclawskie. The location is a bit weird: it has 4 charging stations with 150 kW each. The weird bits are that each station has two CCS connectors but only one parking place (and the connectors share power, so if two cars were to connect, each would get half power). Also, from the front of the location one can only see two stations; the other two are semi-hidden around a corner. We actually missed them on the way to Latvia, and one person waited for the charger behind us for about 10 minutes. We only discovered the other two stations on the way back. With the slower speeds in Poland the consumption goes down to 18 kWh/100km, which translates to up to 3 hours of driving between stops.
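The difference in waiting time is easy to quantify; the average charging powers below are rough assumptions, since real sessions follow the car's charging curve:

```python
# Waiting time at a 50 kW charger versus an HPC; average powers are assumptions.
def minutes_for(kwh: float, avg_power_kw: float) -> float:
    return kwh / avg_power_kw * 60

for label, avg_kw in [("50 kW charger", 47), ("150 kW HPC", 115)]:
    print(f"{label}: {minutes_for(50, avg_kw):.0f} min for 50 kWh")
# -> roughly 64 min vs 26 min; the first ~20 min are covered by things you would
#    do anyway, the rest is pure waiting.
```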
At the end of the first day we had driven from 9:30 in the morning (starting in Ulm) until about 23:00 in the evening, with a total distance of about 1100 km and 5 charging stops: starting with 92% battery, charging for 26 min (50 kWh), 33 min (57 kWh + lunch), 17 min (23 kWh), 12 min (17 kWh) and 13 min (37 kWh). In the last two chargers you can see the difference between a good and fast 150 kW charger at a high battery charge level and a really fast Ionity charger at a low battery charge level, which makes charging faster still.
We arrived at the hotel with 23% battery. Overnight the car charged from a Porsche Destination Charger to 87% (57 kWh). That was a bit less than I would expect from a full-power 11 kW charger, but good enough. Hotels should really install 11 kW Type 2 chargers for their guests; it is a really significant bonus that drives more clients to you.
The road between Warsaw and Kaunas is the most difficult part of the trip, both for the driving itself and for charging. For driving, the problem is that there will eventually be a new highway from Warsaw to the Lithuanian border, but it is not fully ready yet. So on parts of the way one drives on the new, great and wide highway, and on other parts on temporary roads or on old single-lane undivided roads. The most annoying part is navigating between those parts, as the signs are not always clear and the maps are either too old or too new: some maps do not have the new roads, and others show roads that have not actually been built or opened to traffic yet. It's really easy to lose one's way and take a significant detour. As far as charging goes, there are basically only slow 50 kW chargers between Warsaw and Kaunas (for now). We chose to charge at the last charger in Poland, by the Suwalki Kaufland. That was not a good idea: there is only one 50 kW CCS connector and many people make the same choice, so there can be a wait. We had to wait 17 minutes before we could charge for 30 more minutes, just to get 18 kWh into the battery. Not the best use of time. On the way back we chose a different charger, in Lomza, where we could have a relaxed dinner while the car was charging. That was far more relaxing and a better use of time.
We also tried charging at an Orlen charger that was not recommended by our car, and we found out why. Unlike all the other chargers on our entire trip, this charger did not accept our universal BMW Charging RFID card. Instead it demanded that we download Orlen's own app and register there. The app is only available in some countries (and not in others), and on iPhone it is only available in Polish. That is a bad exception to the rule and a bad example. This is also how most charging works in the USA; here in Europe it is not normal. The norm is to use a charging card, provided either by the car maker or by another supplier (like PlugSurfing or Maingau Energy). The providers then make roaming arrangements with all the charging networks, so the cards just work everywhere. In the end the user gets the prices and the bills from their card provider as a single monthly bill. This also spares the user any credit card charges. Having a clear, separate RFID card also means that one can easily choose how to pay for each charging session. For example, I have a corporate RFID card that my company pays for (for charging in Germany) and a private BMW Charging card that I pay for myself (for charging abroad). Having the car itself authenticate directly with the charger (like Tesla does) removes the option to choose how to pay. Having each charging network require its own app or token brings too much chaos and takes too much setup. The optimum is having one card that works everywhere, with the option of additional cards for specific purposes.
Reaching Ionity chargers in Lithuania is again a breath of fresh air - 20-24 minutes to charge 50 kWh is
as expected. One can charge on the first Ionity just enough to reach the next one and then on the second
charger one can charge up enough to either reach the Ionity charger in Adazi or the final target in Latvia.
There is a huge number of CSDD (Road Traffic and Safety Directorate) managed chargers all over Latvia,
but they are 50 kW chargers. Good enough for local travel, but not great for long-distance trips. The BMW i4 charges at over 50 kW on an HPC even above 90% battery state of charge (SoC). This means that it is always faster to charge at an HPC than at a 50 kW charger, if that is at all possible. We also tested the CSDD
chargers - they worked without any issues. One could pay with the BMW Charging RFID card, one could use
the CSDD e-mobi app or token and one could also use Mobilly - an app that you can use in Latvia for
everything from parking to public transport tickets or museums or car washes.
We managed to reach our final destination near Aluksne with 17% battery remaining after just 3 charging stops: 17+30 min (18 kWh), 24 min (48 kWh), 28 min (36 kWh). At the last stop we charged to 90%, which took a few more minutes than would have been optimal.
For travel around Latvia we were charging at our target farmhouse from a normal 3 kW Schuko EU socket. That is very slow: we charged for 33 hours and went from 17% to 94%, so not quite full. That was perfectly fine for our purposes. We easily reached Riga, drove to the sea and then back to Aluksne with 8% still in reserve, and started charging again for the next trip. If we had needed to drive around more and charge faster, we could have used the normal 3-phase 440V connection in the farmhouse to have a red CEE 16A plug installed (the same one people use for welders). The BMW i4 comes standard with the new BMW Flexible Fast Charger, which has changeable socket adapters. It comes by default with a Schuko connector in Europe, but for 90 € one can buy an adapter for a blue CEE plug (3.7 kW) or red CEE 16A or 32A plugs (11 kW). Some public charging stations in France actually use the blue CEE plugs instead of the more common Type 2 electric car charging stations. The CEE plugs are also common in camping parking places.
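From those numbers one can estimate the effective charging power at the Schuko socket (the 80 kWh usable capacity is taken from the car's specification mentioned earlier):

```python
# Effective charging power at the farmhouse Schuko socket, from the numbers above.
usable_kwh = 80                 # usable battery capacity of the i4
soc_gain = 0.94 - 0.17          # charged from 17% to 94%
hours = 33

energy_kwh = usable_kwh * soc_gain
print(f"{energy_kwh:.0f} kWh in {hours} h -> about {energy_kwh / hours:.1f} kW on average")
# -> ~62 kWh at ~1.9 kW: in practice well below the nominal 3 kW (charging losses,
#    conservative current limits), which is why an 11 kW red CEE connection is worth having.
```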
On the way back, long-distance BEV travel was already well understood and did not cause us any problems. From our destination we could easily reach the first Ionity in Lithuania, on the Panevezhis bypass road, where in just 8 minutes we got 19 kWh and were ready to drive on to Kaunas. There we made a longer 32-minute stop before the charging desert of the Suwalki Gap, which gave us 52 kWh and brought us to 90%. That took us to a shopping mall in Lomzha where we had some food and charged up 39 kWh in a lazy 50 minutes. That was enough to reach our return hotel for the night, Hotel 500W in Strykow by Lodz, which has a 50 kW charger on site. While we were having a late dinner and preparing for sleep, the car easily recharged to full (71 kWh in 95 minutes), so I just moved it from the charger to a parking spot before going to sleep. A really easy and smoothly flowing day.
The second day back went even better, as we just needed an 18-minute stop at the same Katy Wroclawskie charger as before to get 22 kWh, and that was enough to get back to Germany. After that we were again flying on the Autobahn and charging as needed: 15 min (31 kWh), 23 min (48 kWh) and 31 min (54 kWh + food). We started the day at about 9:40 and were home at 21:40, after driving just over 1000 km that day. So less than 12 hours for 1000 km travelled, including all charging, bio stops, food and some traffic jams as well. Not bad.
Now let's take a look at all the apps and data connections that a technically minded customer can have for their car. Architecturally the car is a network of computers by itself, but it is well secured and normally people do not have any direct access. However, once you log into the car with your BMW account, the car gets your profile info and preferences (seat settings, navigation favorites, ...) and can then also start sending information about its status to the BMW backend. This information is then available to the user over multiple different channels. There is no separate channel for each data flow: the data goes to the backend only once, and all other communication by the apps happens with the backend.
First of all, there is the MyBMW app. This is the go-to for everything about the car: seeing its current status and location (when not driving), sending commands to the car (lock, unlock, flash lights, pre-condition, ...) and also monitoring and controlling charging processes. You can also plan a route or destination in the app in advance and then just send it over to the car, so it already knows where to drive when you get in. This can also integrate with calendar entries, if you have locations for appointments, for example. The app also shows the full charging history and allows a very easy export of that data; here I exported all charging sessions from June, then trimmed it back to only the sessions relevant to the trip and cut off some design elements to make the data more visible. So one can very easily see when and where we were charging, how much power we got at each spot and (if you set prices for locations) it can even show costs.
I've already mentioned the Tronity service and its ABRP integration, but it also saves the information it gets from the car and gathers that data over time. It has nice aspects, like showing the driven routes on a map, having ways to do business trip accounting and having a good calendar view. Sadly it does not correctly capture the data for charging sessions (the amounts are incorrect). Update: after talking to Tronity support, it looks like the bug was an incorrect value for the usable battery capacity of my car. They will look into getting the right values there by default, but as a workaround one can edit their car in their system (after at least one charging session) and directly set the expected (usable) battery capacity in the car properties in the Tronity web portal settings.
One other fun way to see data from your BMW is the BMW integration in Home Assistant. This brings the car in as a device in your own smart home. You can read all the variables from the car's current status (and Home Assistant makes cute historical charts) and you can even see interesting trends: for example, the remaining range shows a much higher value in Latvia, as its prediction is adapted to Latvian road speeds; during the trip it adapts to Polish and then to German road speeds, and thus to higher consumption and a lower maximum predicted remaining range. Having the car attached to Home Assistant also allows you to use it in automations, both as a data and event source (like detecting when the car enters the "Home" zone) and also as a target, so you could flash the car's lights or even unlock or lock it when certain conditions are met.
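For scripting outside of Home Assistant's own automations, the same data can also be read over Home Assistant's REST API. A small sketch, assuming a long-lived access token and an example entity id (the real entity name depends on how the BMW integration registers your car):

```python
# Reading one of the car's sensors from Home Assistant over its REST API.
# The entity id is an example; a long-lived access token is assumed.
import requests

HA_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"
ENTITY = "sensor.i4_remaining_range"        # example entity id

resp = requests.get(
    f"{HA_URL}/api/states/{ENTITY}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
state = resp.json()
print(state["state"], state["attributes"].get("unit_of_measurement", ""))
```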
So, what in the end was the most important thing: the cost of the trip? In total we charged up 863 kWh, which would normally cost about 290 €, close to half of what this trip would have cost with a gasoline car. Of that, 279 kWh was charged in Germany (paid by my employer) and 154 kWh at the farmhouse (paid by our wonderful relatives :D), so the charging that I actually had to pay for adds up to 430 kWh, or about 150 €. Typically it took about 400 € in fuel that I had to pay to get to Latvia and back. The difference is really nice!
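The same arithmetic as a sketch, with an assumed average price of about 0.35 € per kWh for the part I pay myself:

```python
# The cost arithmetic from above; the per-kWh price is an assumed average.
total_kwh = 863
employer_kwh = 279       # charged in Germany, paid by my employer
farmhouse_kwh = 154      # charged at the relatives' farmhouse
price_eur_per_kwh = 0.35 # assumed average price for the rest

my_kwh = total_kwh - employer_kwh - farmhouse_kwh
print(f"{my_kwh} kWh at my own cost -> about {my_kwh * price_eur_per_kwh:.0f} EUR")
# -> 430 kWh, roughly 150 EUR, versus about 400 EUR in fuel for the same trip
```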
In the end I believe that there are three different ways of charging:
incidental charging - this is the vast majority of charging in normal day-to-day life. The car gets charged when and where it is convenient to do so along the way. If we go to a movie or a shop and there is a chance to leave the car at a charger, then it can charge up. This works really well and does not take any extra charging time from us.
fast charging - charging up at an HPC under optimal charging conditions, from a relatively low level to no more than 70-80%, while you are still doing all the normal things one would do during a quick stop on a long trip: bio things, cleaning the windscreen, getting a coffee or a snack.
necessary charging - charging from whatever charger is available, just enough to be able to reach the next destination or the next fast charger.
The last category is the only one that is really annoying and should be avoided at all costs, even if that means shifting your plans so that you find something else useful to do while the necessary charging is happening, thus at least partially moving it over to the incidental charging category. Then you are no longer just waiting for the car; you are doing something else and the car is magically charged up again.
And when one does that, travelling with an electric car becomes no more annoying than travelling with a gasoline car. Having more breaks in a trip is a good thing and makes the trips actually easier and less stressful: I was more relaxed during and after this trip than during previous trips. Having the car's air conditioning always on, even when stopped, was a godsend in the insane 30-38°C heat wave that we were driving through.
Final stats: 4425 km driven in the trip. Average consumption: 18.7 kWh/100km. Time driving: 2 days and 3 hours.
Car regened 152 kWh. Charging stations recharged 863 kWh.
Questions? You can use this i4talk forum thread or this Twitter thread to ask them to me.
Welcome to the March 2022 report from the Reproducible Builds project! In our monthly reports we outline the most important things that we have been up to over the past month.
The in-toto project was accepted as an incubating project within the Cloud Native Computing Foundation (CNCF). in-toto is a framework that protects the software supply chain by collecting and verifying relevant data. It does so by enabling libraries to collect information about software supply chain actions and then allowing software users and/or project managers to publish policies about software supply chain practices that can be verified before deploying or installing software. The CNCF hosts a number of critical components of the global technology infrastructure under the auspices of the Linux Foundation. (View the full announcement.)
Hervé Boutemy posted to our mailing list with an announcement that the Java Reproducible Central has hit the milestone of "500 fully reproduced builds of upstream projects". Indeed, at the time of writing, according to the nightly rebuild results, 530 releases were found to be fully reproducible, with 100% reproducible artifacts.
GitBOM is a relatively new project to enable build tools to trace every source file that is incorporated into build artifacts. As an experiment and/or proof-of-concept, the GitBOM developers are rebuilding Debian to generate side-channel build metadata for versions of Debian that have already been released. This only works because Debian is (partially) reproducible, so one can be sure that, in cases where the build artifacts are identical, any metadata generated during these instrumented builds also applies to the binaries that were built and released in the past. More information on their approach is available in the README file in the bomsh repository.
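The property this relies on is simple to check in practice: two builds are only interchangeable if their artifacts are bit-for-bit identical, which a checksum comparison over the output files can confirm. A minimal sketch (the file paths are placeholders):

```python
# Two builds are only interchangeable if their artifacts are bit-for-bit
# identical; a checksum comparison over the output files verifies that.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def identical(a: Path, b: Path) -> bool:
    return sha256(a) == sha256(b)

# Example (paths are placeholders):
# identical(Path("build1/hello_1.0_amd64.deb"), Path("build2/hello_1.0_amd64.deb"))
```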
Ludovic Courtès has published an academic paper discussing how the performance requirements of high-performance computing are not (as usually assumed) at odds with reproducible builds. The received wisdom is that vendor-specific libraries and platform-specific CPU extensions have resulted in a culture of local recompilation to ensure the best performance, rendering the property of reproducibility unobtainable or even meaningless. In his paper, Ludovic explains how Guix has:
[…] implemented what we call "package multi-versioning" for C/C++ software that lacks function multi-versioning and run-time dispatch […]. It is another way to ensure that users do not have to trade reproducibility for performance. (full PDF)
Kit Martin posted to the FOSSA blog a post titled The Three Pillars of Reproducible Builds. Inspired by "the shock of infiltrated or intentionally broken NPM packages, supply chain attacks, long-unnoticed backdoors", the post goes on to outline the high-level steps that lead to a reproducible build:
It is one thing to talk about reproducible builds and how they strengthen software supply chain security, but it's quite another to effectively configure a reproducible build. Concrete steps for specific languages are a far larger topic than can be covered in a single blog post, but today we'll be talking about some guiding principles when designing reproducible builds. []
Events
There will be an in-person Debian Reunion in Hamburg, Germany later this year, taking place from 23 – 30 May. Although this is a Debian event, there will be some folks from the broader Reproducible Builds community and, of course, everyone is welcome. Please see the event page on the Debian wiki for more information.
Bernhard M. Wiedemann posted to our mailing list about a meetup for Reproducible Builds folks at the openSUSE conference in Nuremberg, Germany.
It was also recently announced that DebConf22 will take place this year as an in-person conference in Prizren, Kosovo. The pre-conference meeting (or "Debcamp") will take place from 10 – 16 July, and the main talks, workshops, etc. will take place from 17 – 24 July.
Johannes Schauer Marin Rodrigues posted to the debian-devel list mentioning that he exploited the property of reproducibility within Debian to demonstrate that automatically converting a large number of packages to a new internal source version did not change the resulting packages. The proposed change could therefore be applied without causing breakage:
So now we have 364 source packages for which we have a patch and for which we can show that this patch does not change the build output. Do you agree that with those two properties, the advantages of the 3.0 (quilt) format are sufficient such that the change shall be implemented at least for those 364? []
Tooling
diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 207, 208 and 209 to Debian unstable, as well as made the following changes to the code itself:
Update minimum version of Black to prevent test failure on Ubuntu jammy. []
Brent Spillner also worked on adding graceful handling for UNIX sockets and named pipes to diffoscope. [][][]. Vagrant Cascadian also updated the diffoscope package in GNU Guix. [][]
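For scripted comparisons, diffoscope can simply be driven from a small wrapper. A sketch that compares two builds of the same package and writes an HTML report when they differ (the paths are placeholders, and a non-zero exit status can also indicate an error rather than a difference):

```python
# Driving diffoscope from a script: compare two builds and write an HTML report
# when they differ. Paths are placeholders.
import subprocess

result = subprocess.run(
    ["diffoscope", "--html", "report.html",
     "build1/hello_1.0_amd64.deb", "build2/hello_1.0_amd64.deb"],
    check=False,
)
if result.returncode == 0:
    print("artifacts are identical")
else:
    # a non-zero status means differences were found (or diffoscope hit an error)
    print("differences found, see report.html")
```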
reprotest is the Reproducible Builds project's end-user tool for building the same source code twice in widely different environments and checking whether the binaries produced by the two builds have any differences. This month, Santiago Ruano Rincón added a new --append-build-command option [], which was subsequently uploaded to Debian unstable by Holger Levsen.
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Testing framework
The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
Holger Levsen:
Replace a local copy of the dsa-check-running-kernel script with a packaged version. []
Don t hide the status of offline hosts in the Jenkins shell monitor. []
Detect undefined service problems in the node health check. []
Update the sources.lst file for our mail server as it's still running Debian buster. []
Add our mail server to our node inventory so it is included in the Jenkins maintenance processes. []
Remove the debsecan package everywhere; it got installed accidentally via the Recommends relation. []
Document the usage of the osuosl174 host. []
Regular node maintenance was also performed by Holger Levsen [], Vagrant Cascadian [][][] and Mattia Rizzolo.
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via: