Search Results: "Gunnar Wolf"

31 January 2025

Gunnar Wolf: ChatGPT is bullshit

This post is an unpublished review for ChatGPT is bullshit
As people around the world come to understand how LLMs behave, more and more of them wonder why these models hallucinate, and what can be done to reduce it. This provocatively named article by Michael Townsen Hicks, James Humphries and Joe Slater is an excellent primer for better understanding how LLMs work and what to expect from them.

As humans who carry out our relations using language as our main tool, we are easily in awe of the apparent ease with which ChatGPT (the first widely available, and to this day probably the best known, LLM-based automated chatbot) simulates human-like understanding, and of how it helps us carry out even daunting data aggregation tasks. It is common for people to ask ChatGPT for an answer and, if it gets part of the answer wrong, to justify it by stating that it's just a hallucination. Hicks et al. invite us to switch from that characterization to a more correct one: LLMs are bullshitting. This term is formally presented by Frankfurt [1]. To bullshit is not the same as to lie, because lying requires knowing (and wanting to cover up) the truth. A bullshitter does not necessarily know the truth; they just have to provide a compelling description, regardless of whether it is aligned with the truth.

After introducing Frankfurt's ideas, the authors explain the fundamental ideas behind LLM-based chatbots such as ChatGPT: a Generative Pre-trained Transformer (GPT) has as its only goal to produce human-like text, which it does mainly by matching the input's high-dimensional abstract vector representation and probabilistically outputting the next token (word), iterating over the text produced so far. Clearly, a GPT's task is not to seek truth or to convey useful information: it is built to provide a normal-seeming response to the prompts provided by its user. Core data are not queried to find optimal solutions for the user's requests; text is generated on the requested topic, attempting to mimic the style of the document set the model was trained with. Erroneous data emitted by an LLM is thus not comparable with what a person could hallucinate, but appears because the model has no understanding of truth; in a way, this is very fitting with the current state of the world, a time often termed the age of post-truth [2]. Requesting an LLM to provide truth in its answers is basically impossible, given the difference between intelligence and consciousness: following Harari's definitions [3], LLM systems, or any AI-based systems, can be seen as intelligent, as they have the ability to attain goals in various, flexible ways, but they cannot be seen as conscious, as they have no ability to experience subjectivity. That is, the LLM is, by definition, bullshitting its way towards an answer: its goal is to provide an answer, not to interpret the world in a trustworthy way.

The authors close their article with a plea for literature on the topic to adopt the more correct term bullshit instead of the vacuous, anthropomorphizing hallucination. Of course, with the word already loaded with a negative meaning, it is an unlikely request. This is a great article that mixes Computer Science and Philosophy, and can shed some light on a topic that is hard to grasp for many users.

[1] Frankfurt, Harry (2005). On Bullshit. Princeton University Press.
[2] Zoglauer, Thomas (2023). Constructed truths: truth and knowledge in a post-truth world. Springer.
[3] Harari, Yuval Noah (2023). Nexus: A Brief History of Information Networks From the Stone Age to AI. Random House.
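As a concrete illustration of the next-token mechanism described above, here is a minimal, purely illustrative Python sketch; the next_token_distribution stand-in and its tiny vocabulary are invented for this example, while a real GPT derives the distribution from a learned, high-dimensional representation of the whole context:

#!/usr/bin/python3
import random

# Invented stand-in for a trained transformer: given the tokens so far, return a
# probability for every word in a tiny vocabulary. A real model computes this from
# an abstract vector representation of the prompt plus everything generated so far.
def next_token_distribution(tokens):
    vocabulary = ['the', 'cat', 'sat', 'on', 'a', 'mat', '.']
    return {word: 1.0 / len(vocabulary) for word in vocabulary}

def generate(prompt, length=8):
    tokens = prompt.split()
    for _ in range(length):
        dist = next_token_distribution(tokens)
        words, weights = zip(*dist.items())
        # Sample the next token according to its probability: a plausible-looking
        # continuation, with no notion of whether the resulting text is true.
        tokens.append(random.choices(words, weights=weights)[0])
    return ' '.join(tokens)

print(generate('the cat'))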

17 December 2024

Gunnar Wolf: The science of detecting LLM-generated text

This post is a review for Computing Reviews for The science of detecting LLM-generated text, an article published in Communications of the ACM
While artificial intelligence (AI) applications for natural language processing (NLP) are no longer something new or unexpected, nobody can deny the revolution and hype that started, in late 2022, with the announcement of the first public version of ChatGPT. By then, synthetic translation was well established and regularly used, many chatbots had started attending users' requests on different websites, voice recognition personal assistants such as Alexa and Siri had been widely deployed, and complaints of news sites filling their space with AI-generated articles were already commonplace. However, the ease of prompting ChatGPT or other large language models (LLMs) and getting extensive answers (their text generation quality is so high that it is often hard to discern whether a given text was written by an LLM or by a human) has sparked significant concern in many different fields. This article was written to present and compare the current approaches to detecting human or LLM authorship in texts.

The article presents several different ways LLM-generated text can be detected. The first, and main, taxonomy followed by the authors is whether the detection can be done aided by the LLM's own functions ("white-box detection") or only by evaluating the generated text via a public application programming interface (API) ("black-box detection"). For black-box detection, the authors suggest training a classifier to discern the origin of a given text. Although this works at first, this task is doomed from its onset to be highly vulnerable to new LLMs generating text that will not follow the same patterns, and thus will probably evade recognition. The authors report that human evaluators find human-authored text to be more emotional and less objective, and that it uses grammar to indicate the tone of the sentiment that should be used when reading the text, a trait that has not been picked up by LLMs yet. Human-authored text also tends to have higher sentence-level coherence, with less term repetition in a given paragraph. The frequency distribution for more and less common words is much more homogeneous in LLM-generated texts than in human-written ones.

White-box detection includes strategies whereby the LLMs will cooperate in identifying themselves in ways that are not obvious to the casual reader. This can include watermarking, be it rule based or neural based; in this case, both processes become a case of steganography, as the involvement of an LLM is explicitly hidden and spread through the full generated text, aiming at having a low detectability and high recoverability even when parts of the text are edited.

The article closes by listing the authors' concerns about all of the above-mentioned technologies. Detecting an LLM, be it with or without the collaboration of the LLM's designers, is more of an art than a science, and methods deemed robust today will not last forever. We also cannot assume that LLMs will continue to be dominated by the same core players; LLM technology has been deeply studied, and good LLM engines are available as free/open-source software, so users needing to do so can readily modify their behavior. This article presents itself as merely a survey of methods available today, while also acknowledging the rapid progress in the field. It is timely and interesting, and easy to follow for the informed reader coming from a different subfield.
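As a rough, hypothetical illustration of the black-box approach mentioned above (training a classifier on surface features of the text), the following Python sketch uses scikit-learn; the four-sentence corpus and its labels are invented and far too small to be meaningful, but the shape of the pipeline is the point:

#!/usr/bin/python3
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy corpus: 1 = human-written, 0 = LLM-generated.
texts = [
    'I was furious when the train broke down yet again!',
    'The train service experienced a temporary interruption.',
    'Honestly, that paragraph made me laugh out loud.',
    'The paragraph conveys the relevant information in a neutral tone.',
]
labels = [1, 0, 1, 0]

# Word and bigram frequencies feed a linear classifier; real detectors would use far
# richer features (sentiment, sentence-level coherence, word-frequency homogeneity).
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

print(detector.predict(['The schedule was adjusted to accommodate maintenance work.']))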

9 December 2024

Gunnar Wolf: Some tips for those who still administer Drupal7-based sites

A bit of history: Drupal at my workplace (and in Debian) My main day-to-day responsibility in my workplace is, and has been for 20 years, to take care of the network infrastructure for UNAM's Economics Research Institute. One of the most visible parts of this responsibility is to ensure we have a working Web presence, and that it caters to the needs of our academic community. I joined the Institute in January 2005. Back then, our designer pushed static versions of our webpage, completely built on her computer. This was standard practice at the time, and lasted through some redesigns, but I soon started advocating for the adoption of a Content Management System. After evaluating some alternatives, I recommended adopting Drupal. It took us quite a bit to make the change: even though I clearly recall starting work toward adopting it as early as 2006, according to the Internet Archive, we switched to a Drupal-backed site around June 2010. We started using it somewhere in version 6's lifecycle.

As for my Debian work, by late 2012 I started getting involved in the maintenance of the drupal7 package, and by April 2013 I became its primary maintainer. I kept the drupal7 package up to date in Debian until 2018; the supported build methods for Drupal 8 are not compatible with Debian (mainly, bundling third-party libraries and updating them without coordination with the rest of the ecosystem), so towards the end of 2016, I announced I would not package Drupal 8 for Debian.

By March 2016, we migrated our main page to Drupal 7. By then, we already had several other sites for our academics' projects, but my narrative follows our main Web site. I did manage to migrate several Drupal 6 (D6) sites to Drupal 7 (D7); it was quite an involved process, never transparent to the user, and we did get backlash from many of our users over long downtimes (or partial downtimes, with sites only half-available). For our main site, we took the opportunity to do a complete redesign and deployed a fully new site. You might note that March 2016 is after the release of D8 (November 2015). I don't recall many of the specifics for this decision, but if I'm not mistaken, building the new site was a several-months-long process, not only for the technical work of setting it up, but for the legwork of getting all of the needed information from the different areas that need to be represented in the Institute. Not only that: Drupal sites often include tens of contributed themes and modules; the technological shift the project underwent between its 7 and 8 releases was too deep, and modules took a long time (if at all; many themes and modules were outright dumped) to become available for the new release. Naturally, the Drupal Foundation wanted to evolve and deprecate the old codebase. But the pain of migrating from D7 to D8 is too big, and many sites have remained under version 7. Eight years after D8's release, almost 40% of Drupal installs are for version 7, and a similar proportion runs a currently supported release (10 or 11). And while the Drupal Foundation did a great job at providing very-long-term support for D7, I understand the burden is becoming too much, so close to a year ago (and after pushing back the D7 end-of-life date several times), they finally announced support will end this upcoming January 5.

Drupal 7 must go! I found the following usage graphs quite interesting: the usage statistics for all Drupal versions follow a very positive slope, peaking around 2014 during the best years of D7, and somewhat stagnating afterwards, staying since 2015 at the 25,000 to 28,000 sites mark (I'm very tempted to copy the graphs, but builtwith's terms of use are very clear in not allowing it). There is a sharp drop in the last year; I attribute it to the people that are leaving D7 for other technologies after its end-of-life announcement. This becomes clearer looking only at D7's usage statistics: D7 peaks at 15,000 installs in 2016, stays there for close to 5 years, and has a sharp drop to under 7,500 sites in the span of one year. D8 has a more regular rise, peak and fall, peaking at ~8,500 between 2020 and 2021, and has been down to close to 2,500 for some months already; D9 had a very brief peak of almost 9,000 sites in 2023 and is now at close to half of that. Currently, the Drupal king appears to be D10, still on a positive slope and with over 9,000 sites. Drupal 11 is still just a blip on builtwith's radar, with 3 registered sites as of September 2024.

After writing this last paragraph, I came across the statistics found on the Drupal webpage; the methodology for acquiring its data is completely different: while builtwith's methodology is their trade secret, you can read more about how Drupal's data is gathered (and agree or disagree with it), but at least you have a page detailing 12 years so far of reported data, producing the following graph (which can be shared under the CC BY-SA license): Drupal usage statistics by version, 2013-2024. This graph is disaggregated into minor versions, and I don't want to come up with yet another graph for it, but it supports (most of) the narrative I presented above, although I do miss the recent drop builtwith reported in D7's numbers!

And what about Backdrop? During the D8 release cycle, a group of Drupal developers were not happy with the depth of the architectural changes that were being adopted, particularly the transition to the Symfony PHP component framework, and forked the D7 codebase to create the Backdrop CMS, a modern version of Drupal, without dropping the known and tested architecture it had. The Backdrop developers keep working closely together with the Drupal community, and although its usage numbers are way smaller than Drupal's, it seems to be sustainable and lively. Of course, as with the numbers I presented in the previous section, you can see that Backdrop's numbers in builtwith are way, way lower. I have found it to be a very warm and welcoming community, eager to receive new members. And, thanks to its contributed D2B Migrate module, I found it is quite easy to migrate a live site from Drupal 7 to Backdrop.

Migration by playbook! So... Well, I'm an academic. And (if it's not obvious to you after reading this far), one of the things I must do in my job is to write. So I decided to write an article inviting my colleagues to consider Backdrop for their D7 sites, in Cuadernos Técnicos Universitarios de la DGTIC, a young journal in our university for showcasing technical academic work. And now that my article got accepted and published, I'm happy to share it with you; of course, only if you can read Spanish. But anyway... Given that I have several sites to migrate, and that I'm trying to get my colleagues to follow suit, I decided to automate the migration by writing an Ansible playbook to do the heavy lifting. Of course, the playbook's users will probably need to tweak it a bit to their personal needs. I'm also far from an Ansible expert, so I'm sure there is ample room for improvement in my style. But it works. Quite well, I must add.

But with this size of database... I did stumble across a big pebble, though. I am working on the migration of one of my users' sites, and found that its database is huge. I checked the mysqldump output, and it came to close to 3GB of data. And given that D2B_migrate is meant to work via a Web interface (my playbook works around it by using a client I wrote with Perl's WWW::Mechanize), I repeatedly stumbled into PHP's maximum POST size, maximum upload size, maximum memory size... I asked for help on Backdrop's Zulip chat site, and my attention was steered away from fixing PHP to something more obvious: Why is the database so large? So I took a quick look at the database (or rather: my first look was at the database server's filesystem usage). MariaDB stores each table as a separate file on disk, so I looked for the nine largest tables:
# ls -lhS | head
total 3.8G
-rw-rw---- 1 mysql mysql 2.4G Dec 10 12:09 accesslog.ibd
-rw-rw---- 1 mysql mysql 224M Dec  2 16:43 search_index.ibd
-rw-rw---- 1 mysql mysql 220M Dec 10 12:09 watchdog.ibd
-rw-rw---- 1 mysql mysql 148M Dec  6 14:45 cache_field.ibd
-rw-rw---- 1 mysql mysql  92M Dec  9 05:08 aggregator_item.ibd
-rw-rw---- 1 mysql mysql  80M Dec 10 12:15 cache_path.ibd
-rw-rw---- 1 mysql mysql  72M Dec  2 16:39 search_dataset.ibd
-rw-rw---- 1 mysql mysql  68M Dec  2 13:16 field_revision_field_idea_principal_articulo.ibd
-rw-rw---- 1 mysql mysql  60M Dec  9 13:19 cache_menu.ibd
A single table, the access log, is over 2.4GB long. The three following tables are, essentially, cache tables. I can perfectly live without their data in our new site! But I don't want to touch the slightest bit of this site until I'm satisfied with the migration process, so I found a way to exclude those tables in a non-destructive way: given that D2B_migrate works with a mysqldump output, and given that mysqldump locks each table before starting to modify it and unlocks it after its job is done, I can just do the following:
$ perl -e '$output = 1; while (<>) { $output=0 if /^LOCK TABLES `(accesslog|search_index|watchdog|cache_field|cache_path)`/; $output=1 if /^UNLOCK TABLES/; print if $output }' < /tmp/d7_backup.sql > /tmp/d7_backup.eviscerated.sql; ls -hl /tmp/d7_backup.sql /tmp/d7_backup.eviscerated.sql
-rw-rw-r-- 1 gwolf gwolf 216M Dec 10 12:22 /tmp/d7_backup.eviscerated.sql
-rw------- 1 gwolf gwolf 2.1G Dec  6 18:14 /tmp/d7_backup.sql
Five seconds later, I'm done! The database is now a tenth of its size, and D2B_migrate is happy to take it. And I'm a big step closer to ending my reliance on (this bit of) legacy code for my highly visible sites.

11 November 2024

Gunnar Wolf: Why academics under-share research data - A social relational theory

This post is a review for Computing Reviews for Why academics under-share research data - A social relational theory, an article published in Journal of the Association for Information Science and Technology
As an academic, I have cheered for and welcomed the open access (OA) mandates that, slowly but steadily, have been accepted in one way or another throughout academia. It is now often accepted that public funds mean public research. Many of our universities or funding bodies will demand this with varying intensities: sometimes they demand research to be published in an OA venue, sometimes a mandate will only prefer it. Lately, some journals and funding bodies have expanded this mandate toward open science, requiring not only research outputs (that is, articles and books) to be published openly but for the data backing the results to be made public as well. As a person who has been involved with free software promotion since the mid 1990s, it was natural for me to join the OA movement and to celebrate when various universities adopt such mandates.

Now, what happens after a university or funding body adopts such a mandate? Many individual academics cheer, as it is the right thing to do. However, the authors observe that this is not really followed thoroughly by academics. What can be observed, rather, is the slow pace or feet-dragging of academics when they are compelled to comply with OA mandates, or even an outright refusal to do so. If OA and open science are close to the ethos of academia, why aren't more academics enthusiastically sharing the data used for their research? This paper finds a subversive practice embodied in the refusal to comply with such mandates, and explores a hypothesis based on Karl Marx's productive worker theory and Pierre Bourdieu's ideas of symbolic capital.

The paper explains that academics, as productive workers, become targets for exploitation: given that it's not only the academics' sharing ethos but also private industry's push for data collection and industry-aligned research, they adapt to technological changes and jump through all kinds of hurdles to create more products, in a result that can be understood as a neoliberal productivity measurement strategy. Neoliberalism assumes that mechanisms that produce more profit for academic institutions will result in better research; it also leads to the disempowerment of academics as a class, although they are rewarded as individuals due to the specific value they produce. The authors continue by explaining how open science mandates seem to ignore the historical ways of collaboration in different scientific fields, and by exploring different angles of how and why data can be seen as under-shared, failing to comply with different aspects of said mandates.

This paper, built on the social sciences tradition, is clearly a controversial work that can spark interesting discussions. While it does not specifically touch on computing, it is relevant to Computing Reviews readers due to the relatively high percentage of academics among us.

31 October 2024

Gunnar Wolf: Do you have a minute..?

Do you have a minute...? To talk about the so-called "Intellectual Property"?

10 October 2024

Gunnar Wolf: Started a guide to writing FUSE filesystems in Python

As DebConf22 was coming to an end, in Kosovo, talking with Eeveelweezel, they invited me to prepare a talk to give for the Chicago Python User Group. I replied that I'm not really that much of a Python guy, but would think about a topic. Two years passed. I met Eeveelweezel again at DebConf24 in Busan, South Korea. And the topic came up again. I had thought of some ideas, but none really pleased me. Again, I do write some Python when needed, and I teach using Python, as it's the language I find my students can best cope with. But delivering a talk to ChiPy?

On the other hand, I have long used a very simplistic and limited filesystem I designed as an implementation project for class: FIUnamFS (for Facultad de Ingeniería, Universidad Nacional Autónoma de México: the Engineering Faculty of Mexico's National University, where I teach. Sorry, the link is in Spanish, but you will find several implementations of it from the students). It is a toy filesystem, with as many bad characteristics as you can think of, but easy to specify and implement. It is based on contiguous file allocation, has no support for sub-directories, and is often limited to the size of a 1.44MB floppy disk. As I give this filesystem as a project to my students (and not as mere homework), I always ask them to try and provide a good, polished, professional interface, not just the simplistic menu I often get. And I tell them the best possible interface would be if they provide support for FIUnamFS transparently, usable by the user without thinking too much about it. With high probability, that would mean: use FUSE.

Python FUSE. But, in the six semesters I've used this project (with 30-40 students per semester group), only one student has bitten the bullet and presented a FUSE implementation. Maybe this is because it's not easy to understand how to build a FUSE-based filesystem from a high-level language such as Python? Yes, I've seen several implementation examples and even nice web pages (i.e. the examples shipped with the python-fuse module, Stavros' passthrough filesystem, Dave's filesystem based upon, and further explaining, Stavros', and several others) explaining how to provide basic functionality. I found a particularly useful presentation by Matteo Bertozzi, presented ~15 years ago at PyCon4. But none of those is, IMO, followable enough by itself. Also, most of them are very old (maybe the world is telling me something that I refuse to understand?). And of course, there isn't a single interface to work from. In Python only, we can find python-fuse, Pyfuse, Fusepy... Where to start from?

So I set out to try and help. Over the past couple of weeks, I have been slowly working on my own version, presenting it as a progressive set of tasks, adding filesystem calls, and being careful to thoroughly document what I write (but maybe my documentation ends up obfuscating the intent? I hope not and, read on, I've provided some remediation). I registered a GitLab project for a hand-holding guide to writing FUSE-based filesystems in Python. This is a project where I present several working FUSE filesystem implementations, some of them RAM-based, some passthrough-based, and I intend to also add filesystems backed by pseudo-block-devices (for implementations such as my FIUnamFS). So far, I have added five stepwise pieces, starting from the barest possible empty filesystem, and adding system calls (and functionality) until reaching (so far) either a read-write filesystem in RAM with basic stat() support or a read-only passthrough filesystem.
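To give a rough idea of what such an implementation looks like (this is a made-up minimal sketch, not one of the steps from my repository), here is a read-only, RAM-backed filesystem exposing a single file through the fusepy interface:

#!/usr/bin/python3
import errno
import stat
import sys
from fuse import FUSE, FuseOSError, Operations  # provided by the fusepy package

class HelloFS(Operations):
    # Read-only filesystem exposing a single in-memory file, /hello.
    DATA = b'Hello from a toy FUSE filesystem!\n'

    def getattr(self, path, fh=None):
        if path == '/':
            return {'st_mode': stat.S_IFDIR | 0o755, 'st_nlink': 2}
        if path == '/hello':
            return {'st_mode': stat.S_IFREG | 0o444, 'st_nlink': 1,
                    'st_size': len(self.DATA)}
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return ['.', '..', 'hello']

    def read(self, path, size, offset, fh):
        return self.DATA[offset:offset + size]

if __name__ == '__main__':
    # Usage: ./hellofs.py /some/mountpoint   (unmount with fusermount -u)
    FUSE(HelloFS(), sys.argv[1], foreground=True, nothreads=True)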
I think providing fun or useful examples is also a good way to get students to use what I'm teaching, so I've added some ideas I've had: a DNS filesystem, an on-the-fly Markdown-compiling filesystem, an unzip filesystem and an uncomment filesystem. They all provide something that could be seen as useful, in a way that's easy to teach, in just some tens of lines. And, in case my comments/documentation are too long to read, uncommentfs will happily strip all comments and whitespace automatically! So I will be delivering my talk tomorrow (2024.10.10, 18:30 GMT-6) at ChiPy (virtually). I am also presenting this talk at Jornadas Regionales de Software Libre in Santa Fe, Argentina, next week (virtually as well). And also in November, in person, at nerdear.la, which will be held in Mexico City for the first time. Of course, I will also share this project with my students in the next couple of weeks, and hope it manages to lure them into implementing FUSE in Python. At some point, I shall report! Update: After delivering my ChiPy talk, I have uploaded it to YouTube: A hand-holding guide to writing FUSE-based filesystems in Python, and after presenting at Jornadas Regionales, I present you the video in Spanish here: Aprendiendo y enseñando a escribir sistemas de archivo en espacio de usuario con FUSE y Python.

21 September 2024

Gunnar Wolf: 50 years of queries

This post is a review for Computing Reviews for 50 years of queries, an article published in Communications of the ACM
The relational model is probably the one innovation that brought computers to the mainstream for business users. This article by Donald Chamberlin, creator of one of the first query languages (which evolved into the ubiquitous SQL), presents its history as a commemoration of the 50th anniversary of his publication of said query language.

The article begins by giving background on information processing before the advent of today's database management systems: with systems storing and processing information based on sequential-only magnetic tapes in the 1950s, adopting a record-based, fixed-format filing system was far from natural. The late 1960s and early 1970s saw many fundamental advances, among which one of the best known is E. F. Codd's relational model. The first five pages (out of 12) present the evolution of the data management community up to the 1974 SIGFIDET conference. This conference was so important in the eyes of the author that, in his words, it is the event that starts the clock on 50 years of relational databases.

The second part of the article tells of the growth of the structured English query language (SEQUEL), eventually renamed SQL, including the importance of its standardization and its presence in commercial products as the dominant database language since the late 1970s. Chamberlin presents short histories of the various implementations, many of which remain dominant names today, that is, Oracle, Informix, and DB2. Entering the 1990s, open-source communities introduced MySQL, PostgreSQL, and SQLite.

The final part of the article presents controversies and criticisms related to SQL and the relational database model as a whole. Chamberlin presents the main points of controversy throughout the years: 1) the SQL language lacks orthogonality; 2) SQL tables, unlike formal relations, might contain null values; and 3) SQL tables, unlike formal relations, may contain duplicate rows. He explains the issues and tradeoffs that guided the language design as it unfolded. Finally, a section presents several points that explain how SQL and the relational model have remained, for 50 years, a winning concept, as well as some thoughts regarding the NoSQL movement that gained traction in the 2010s.

This article is written with clear language and structure, making it easy and pleasant to read. It does not drive a technical point, but instead is a recap of half a century of developments in one of the fields most important to the commercial development of computing, written by one of the greatest authorities on the topic.

2 September 2024

Gunnar Wolf: Free and open source software and other market failures

This post is a review for Computing Reviews for Free and open source software and other market failures, an article published in Communications of the ACM
Understanding the free and open-source software (FOSS) movement has, since its beginning, implied crossing many disciplinary boundaries. This article describes FOSS's history, explaining its undeniable success throughout the 1990s, and why the movement today feels in a way as if it were on autopilot, lacking the steam it once had.

The author presents several examples of different industries where, as happened with FOSS in computing, fundamental innovations happened not because the leading companies of each field were attentive to customers' needs, but, to a certain degree, despite them not even considering those needs, typically due to the hubris that comes from being a market leader. Kamp exemplifies his hypothesis by presenting the messy landscape of the commercial, mutually incompatible systems of Unix in the 1980s. Different companies had set out to implement their particular flavor of open Unix computers, but with clear examples of vendor lock-in techniques. He speculates that, if we had been able to buy a reasonably priced and solid Unix for our 32-bit PCs, nobody would be running FreeBSD or Linux today, except possibly as an obscure hobby. He states that the FOSS movement was born out of the utter market failure of the different Unix vendors.

The focus of the article then shifts to the FOSS movement itself: 25 years ago, as FOSS systems slowly gained acceptance and then adoption in the serious market and at the center of the dot-com boom of the early 2000s, Linux user groups (LUGs) with tens of thousands of members bloomed throughout the world; knowing this history, why have all but a few of them vanished into oblivion? Kamp suggests that the strength and vitality that LUGs had ultimately reflected the anger that prompted technical users to take the situation into their own hands and fix it; once the software industry was forced to change, the strongly cohesive FOSS movement diluted. The frustrations and anger of [information technology, IT] in 2024, Kamp writes, are entirely different from those of 1991. As an example, the author closes by citing the difficulty of maintaining, despite having the resources to do so, an aging legacy codebase that needs to continue working year after year.

18 August 2024

Gunnar Wolf: The social media my blog (as well as some other sites I publish in) is pushed to will soon stop receiving updates

For many years, I have been using the dlvr.it service to echo my online activity to where more people can follow it. Namely, I write in the following sources: Via dlvr.it's services, all those posts are echoed to Gwolfwolf in X (Twitter) and to the Gunnarwolfi page in Facebook. I use neither platform as a human (that is, I never log in there). Anyway, dlvr.it sent me a mail stating they would soon (as in, within the next few weeks) be cutting their free tier. And, although I value their services and am thankful for their value so far, I am not going to pay for my personal stuff to be reposted to social media. So, this post's mission is twofold:
  1. If you follow me via any of those media, you will soon not be following me anymore
  2. If you know of any service that would fill the space left by dlvr.it, I will be very grateful. Extra gratefulness points if the option you suggest is able to post to accounts in less-proprietary media (i.e. the Fediverse). Please tell me by mail (gwolf@gwolf.org).
Oh! Forgot to mention: of course, my blog will continue to appear in Planet Debian, Blografía, and any decent aggregator that consumes my RSS.

24 July 2024

Gunnar Wolf: DebConf24 Continuous Key-Signing Party

Yay, party!

Yay, crypto! DebCamp has started, and in a couple of days, we will fully be in DebConf24 mode! As most of you know, an important part of what binds Debian together is our cryptographic identity assurance, and that is in good measure tightened by the Continuous Key-Signing Parties we hold at DebConfs and other Debian and Free Software gatherings. As I have done during (most of) the past DebConfs, I have prepared a set of pseudo-social maps to help you find where you are in the OpenPGP mesh of our conference. Naturally, Web-of-Trust maps should be user-centered, so find your own at: https://people.debian.org/~gwolf/dc24_ksp/ The list is now final and will not receive any modifications (I asked for them some days ago); if your name still appears on the list and you don't want to be linked to the DC24 KSP in the future, tell me and I'll remove it from future versions of the list (but it is part of the final DC24 file, as its checksum is already final). Speaking of which! If you are to be a part of the keysigning, get the final DC24 file and, on a device you trust, check its SHA256 by running:
$ sha256sum dc24_fprs.txt
11daadc0e435cb32f734307b091905d4236cdf82e3b84f43cde80ef1816370a5  dc24_fprs.txt
Make sure the resulting number matches the one I'm presenting. If it doesn't, ensure your copy of the file is not corrupted (i.e. download it again). If it still does not match, notify me immediately. Does any of the above confuse you? Please come to (or at least follow the stream for) my session on DebConf opening day, Continuous Key-Signing Party introduction, 10:30 Korean time; I will do my best to explain the details to you. PS: I will soon provide a simple, short PDF that will probably be mass-printed at FrontDesk so that you can easily track your KSP progress.

17 July 2024

Gunnar Wolf: Script for weather reporting in Waybar

While I was living in Argentina, we (my family) found ourselves checking weather forecasts almost constantly; weather there can be quite unexpected, much more so than here in Mexico. So it took me a bit of tinkering to come up with a couple of simple scripts to show the weather forecast as part of my Waybar setup. I hadn't cared to share them with anybody, as I believe them to be quite trivial and quite dirty. But today, Víctor was asking for some slightly related things, so here I go. Please do remember I warned: dirty.

Forecast I am using OpenWeather's open API. I had to register to get an APPID, and it allows me up to 1,000 API calls per day, more than plenty for my uses, even if I am logged in at my desktop on three different computers (not an uncommon situation). Having that, I set up a file named /etc/get_weather, which currently reads:
# Home, Mexico City
LAT=19.3364
LONG=-99.1819
# # Home, Paraná, Argentina
# LAT=-31.7208
# LONG=-60.5317
# # PKNU, Busan, South Korea
# LAT=35.1339
# LONG=129.1055
APPID=SomeLongRandomStringIAmNotSharing
Then, I have a simple script, /usr/local/bin/get_weather, that fetches the current weather and the forecast, and stores them as /run/weather.json and /run/forecast.json:
#!/usr/bin/bash
CONF_FILE=/etc/get_weather
if [ -e "$CONF_FILE" ]; then
    . "$CONF_FILE"
else
    echo "Configuration file $CONF_FILE not found"
    exit 1
fi
if [ -z "$LAT" -o -z "$LONG" -o -z "$APPID" ]; then
    echo "Configuration file must declare latitude (LAT), longitude (LONG) "
    echo "and app ID (APPID)."
    exit 1
fi
CURRENT=/run/weather.json
FORECAST=/run/forecast.json
wget -q "https://api.openweathermap.org/data/2.5/weather?lat=$ LAT &lon=$ LONG &units=metric&appid=$ APPID " -O "$ CURRENT "
wget -q "https://api.openweathermap.org/data/2.5/forecast?lat=$ LAT &lon=$ LONG &units=metric&appid=$ APPID " -O "$ FORECAST "
This script is called by the corresponding systemd service unit, found at /etc/systemd/system/get_weather.service:
[Unit]
Description=Get the current weather
[Service]
Type=oneshot
ExecStart=/usr/local/bin/get_weather
And it is run every 15 minutes via the following systemd timer unit, /etc/systemd/system/get_weather.timer:
[Unit]
Description=Get the current weather every 15 minutes
[Timer]
OnCalendar=*:00/15:00
Unit=get_weather.service
[Install]
WantedBy=multi-user.target
(yes, it runs even if I'm not logged in, wasting some of my free API calls, but within reason) Then, I declare a "custom/weather" module in the desired position of my ~/.config/waybar/waybar.config, and define it as:
"custom/weather":  
    "exec": "while true;do /home/gwolf/bin/parse_weather.rb;sleep 10; done",
"return-type": "json",
 ,
This script basically morphs a generic weather JSON description into another set of JSON bits that display my weather in the way I prefer to have it displayed:
#!/usr/bin/ruby
require 'json'
Sources = { :weather => '/run/weather.json',
            :forecast => '/run/forecast.json'
          }
Icons = { '01d' => ' ', # d = day
          '01n' => ' ', # n = night
          '02d' => ' ',
          '02n' => ' ',
          '03d' => ' ',
          '03n' => ' ',
          '04d' => ' ',
          '04n' => ' ',
          '09d' => ' ',
          '10n' => ' ',
          '10d' => ' ',
          '13d' => ' ',
          '50d' => ' '
        }
ret = { 'text': nil, 'tooltip': nil, 'class': 'weather', 'percentage': 100 }
# Current weather report: Main text of the module
begin
  weather = JSON.parse(open(Sources[:weather], 'r').read)
  loc_name = weather['name']
  icon = Icons[weather['weather'][0]['icon']] || ' ' + weather['weather'][0]['icon'] + weather['weather'][0]['main']
  temp = weather['main']['temp']
  sens = weather['main']['feels_like']
  hum = weather['main']['humidity']
  wind_vel = weather['wind']['speed']
  wind_dir = weather['wind']['deg']
  portions = {}
  portions[:loc] = loc_name
  portions[:temp] = '%s %2.2f°C (%2.2f)' % [icon, temp, sens]
  portions[:hum] = ' %2d%%' % hum
  portions[:wind] = ' %2.2fm/s %d°' % [wind_vel, wind_dir]
  ret['text'] = [:loc, :temp, :hum, :wind].map { |p| portions[p] }.join(' ')
rescue => err
  ret['text'] = 'Could not process weather file (%s - %s: %s)' % [Sources[:weather], err.class, err.to_s]
end
# Weather prevision for the following hours/days
begin
  cast = []
  forecast = JSON.parse(open(Sources[:forecast], 'r').read)
  min = ''
  max = ''
  day = Time.now.strftime('%Y.%m.%d')
  by_day = {}
  forecast['list'].each_with_index do |f, i|
    by_day[day] ||= []
    time = Time.at(f['dt'])
    time_lbl = '%02d:%02d' % [time.hour, time.min]
    icon = Icons[f['weather'][0]['icon']] || ' ' + f['weather'][0]['icon'] + f['weather'][0]['main']
    by_day[day] << f['main']['temp']
    if time.hour == 0
      min = '%2.2f' % by_day[day].min
      max = '%2.2f' % by_day[day].max
      cast << '          min: <b>%s°C</b> max: <b>%s°C</b>' % [min, max]
      day = time.strftime('%Y.%m.%d')
      cast << '        <b>%04d.%02d.%02d</b>  ' %
              [time.year, time.month, time.day]
    end
    cast << '%s   %2.2f°C    %2d%%   %s %s' % [time_lbl,
                                                f['main']['temp'],
                                                f['main']['humidity'],
                                                icon,
                                                f['weather'][0]['description']
                                               ]
  end
  cast << '          min: <b>%s°C</b> max: <b>%s°C</b>' % [min, max]
  ret['tooltip'] = cast.join("\n")
rescue => err
  ret['tooltip'] = 'Could not process forecast file (%s - %s: %s)' % [Sources[:forecast], err.class, err.to_s]
end
# Print out the result for Waybar to process
puts ret.to_json
The end result? Nothing too stunning, but definitely something I find useful and even nicely laid out: Screenshot. Do note that it seems OpenWeather will return the name of the closest available meteorology station with (most?) recent data; for my home, I often get Ciudad Universitaria, but sometimes Coyoacán or even San Ángel Inn.

Gunnar Wolf: Scholarly spam: Wulfenia

I just got one of those utterly funny spam messages. And yes, I recognize everybody likes building a name for themselves. But some spammers are downright silly. I just got the following mail:
From: Hermine Wolf <hwolf850@gmail.com>
To: me, obviously  
Date: Mon, 15 Jul 2024 22:18:58 -0700
Subject: Make sure that your manuscript gets indexed and showcased in the prestigious Scopus database soon.
Message-ID: <CAEZZb3XCXSc_YOeR7KtnoSK4i3OhD=FH7u+A5xSMsYvhQZojQA@mail.gmail.com>
This message has visual elements included. If they don't display, please   
update your email preferences.
*Dear Esteemed Author,*
Upon careful examination of your recent research articles available online,
we are excited to invite you to submit your latest work to our esteemed    
journal, '*WULFENIA*'. Renowned for upholding high standards of excellence 
in peer-reviewed academic research spanning various fields, our journal is 
committed to promoting innovative ideas and driving advancements in        
theoretical and applied sciences, engineering, natural sciences, and social
sciences. 'WULFENIA' takes pride in its impressive 5-year impact factor of 
*1.000* and is highly respected in prestigious databases including the     
Science Citation Index Expanded (ISI Thomson Reuters), Index Copernicus,   
Elsevier BIOBASE, and BIOSIS Previews.                                     
                                                                           
*Wulfenia submission page:*                                                
[image: research--check.png][image: scrutiny-table-chat.png][image:        
exchange-check.png][image: interaction.png]                                
.                                                                          
Please don't reply to this email                                           
                                                                           
We sincerely value your consideration of 'WULFENIA' as a platform to       
present your scholarly work. We eagerly anticipate receiving your valuable 
contributions.                                                             
*Best regards,*                                                            
Professor Dr. Vienna S. Franz                                              
Scholarly spam. Who cares what Wulfenia is about? It's about you, my stupid Wolf cousin!

26 June 2024

Gunnar Wolf: Many terabytes for students to play with. Thanks Debian!

LIDSOL students receiving their new hard drives. My students at LIDSOL (Laboratorio de Investigación y Desarrollo de Software Libre, Free Software Research and Development Lab) at Facultad de Ingeniería, UNAM asked me to help them get the hardware needed to set up a mirror for various free software projects. We have some decent servers (not new servers, but mirrors don't need to be top performance), so a couple of weeks ago I approached the Debian Project Leader (DPL) and suggested we should buy two 16TB hard drives for this project, as that is the most reasonable cost-per-byte point I found. He agreed, and I bought the drives. Today we had a lab meeting, and I handed the hardware over to them. Of course, they are very happy and thankful to the Debian project. In the last couple of weeks, they have already set up an Archlinux mirror (https://archlinux.org/mirrors/fi-b.unam.mx), and now that they have heaps of storage space, plans are underway to set up various other mirrors (of course, a Debian mirror will be among the first).

25 June 2024

Gunnar Wolf: Find my device - Whether you like it or not

I received a mail today from Google (noreply-findmydevice@google.com) notifying me that they would unconditionally enable the Find my device functionality I have been repeatedly marking as unwanted on my Android phone. The mail goes on to explain that this functionality works even when the device is disconnected, via Bluetooth signals (aha, so turning off Bluetooth will no longer turn off Bluetooth? Hmmm...). Of course, the mail hand-waves that only I can know the location of my device; Google cannot see or use it for other ends. First, should we trust this blanket statement? Second, does the fact that they don't do it now mean they won't ever? Not even if law enforcement requires them to? The devices will be generating this information whether we want it or not, so it's just a matter of opening the required window.

Targeting an individual in a crowd. Of course, it is a feature many people will appreciate and find useful. And it's not only for finding lost (or stolen) phones; the mail also mentions tags can be made available to store in your wallet, bike, keys or whatever. But it should be opt-in. As it is, it seems it's not even possible to opt out of it.

21 June 2024

Gunnar Wolf: A new RISC-V toy... requiring almost no tinkering

Shortly before coming back from Argentina, I got news of a very interesting set of little machines, the MilkV Duo. The specs looked really interesting and fun to play with, particularly those of the bigger model, the Milk-V Duo S. Naturally, at close to only US$12 (plus shipping) for the configuration I wanted, I bought one, and got it delivered in early May. The little box sat on my desk for close to six weeks until I had time to start tinkering with it.

I must say I am surprised. Not only does the little bugger deliver what it promises, but it is way more mature than I expected: it can be used right away without much tinkering! I mean, I have played with it for less than an hour by now, and I've even managed to get (almost) regular Debian working. Milk-V distributes a simple, 58.9MB compressed Linux image based on Buildroot, a simple Linux image generator mostly used for embedded applications, as well as its source tree. I thought that would be a good starting point to work on setting up a minimal Debian filesystem, as I did with the CuBox-i4Pro ten years ago, and maybe even to grow towards a more official solution, akin to what we currently have for the Raspberry Pi family. Until I discovered what looks like a friendly and very active online community of Milk-V users! I haven't yet engaged in it, but I stumbled across a thread announcing the availability of Debian images for the Milk-V family. And yes, it feels like a very normal Debian system. /etc/apt/sources.list does point to a third-party repository, but it's for only four packages, all related to pinmux control for CVITEK chips. It does feel like a completely normal Debian system! It is not as snappy and fast to load as Buildroot, but given Debian's generality, that's completely as expected. Even the wireless network, one of the usual pain points, works just out of the box! The Debian images can be built or downloaded from this Git repository. In case you wonder how this system boots or what hardware it detects, I captured two boot logs:

25 May 2024

Gunnar Wolf: How computers make books: from graphics rendering, search algorithms, and functional programming to indexing and typesetting

This post is a review for Computing Reviews for How computers make books: from graphics rendering, search algorithms, and functional programming to indexing and typesetting, a book published by Manning
If we look at the age-old process of creating books, how many different areas can a computer help us with? And how can each of them be used to teach computer science (CS) fundamentals to a nontechnical audience? This is the premise of John Whitington's enticing book, and the result is quite amazing. The book immediately drew my attention when looking at the titles available for review. After all, my initiation into computing as a kid was learning the LaTeX typesetting system while my father worked on his first book on scientific language and typography [1].

Whitington picks 11 different technical aspects of book production, from how dots of ink are transferred to a white page and how they are made into controllable, recognizable shapes, all the way to forming beautiful typefaces and the nuances of properly addressing white-space to present aesthetically pleasing paragraphs, building it all into specific formats aimed at different ends. But if we dig beyond just the chapter titles, we will find a very interesting book on CS that, without ever using technical language or notation, presents aspects as varied as anti-aliasing, vector and raster images, character sets such as ASCII and Unicode, an introduction to programming, input methods for different writing systems, efficient encoding (compression) methods for both text and images, lossless and lossy, recursion, and dithering methods. To my absolute surprise, while the author thankfully spared the reader the syntax usually associated with LISP-related languages, the programming examples clearly stem from the LISP school, presenting solutions based on tail recursion. Of course, it is no match for Donald Knuth's classic book on this same topic [2], but it could very well be a primer for readers to approach it.

The book is light and easy to read, and keeps a very informal, nontechnical tone throughout. My only complaint relates to reading it in PDF format; the topic of this book, and the care with which the images were provided by the author, warrant high resolution. The included images are not only decorative but an integral part of the book. Maybe this is specific to my review copy, but all of the raster images were in very low resolution. This book is quite different from what readers may usually expect, as it introduces several significant topics in the field. CS professors will enjoy it, of course, but so will readers with a humanities background, students new to the field, or even those who are just interested in learning a bit more.
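For readers unfamiliar with the term, a tail-recursive solution is one where the recursive call is the very last operation performed, carrying the partial result along in an accumulator. A tiny, illustrative example, written here in Python rather than in any LISP dialect and not taken from the book:

#!/usr/bin/python3
def sum_list(items, accumulator=0):
    # Tail-recursive style: the recursive call is the last thing the function does,
    # so the pending work travels in the accumulator instead of on the call stack.
    if not items:
        return accumulator
    return sum_list(items[1:], accumulator + items[0])

print(sum_list([1, 2, 3, 4]))  # prints 10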

References
  1. Sánchez y Gándara, A.; Magariños Lamas, F.; Wolf, K. B. Manual de lenguaje y tipografía científica en castellano. Trillas, Mexico City, Mexico, 1986, https://www.fis.unam.mx/~bwolf/manual.html
  2. Knuth, D. E. Digital typography. CSLI Lecture Notes. CSLI Publications, Stanford, CA, 1999, https://www-cs-faculty.stanford.edu/~knuth/dt.html

9 May 2024

Gunnar Wolf: Hacks, leaks, and revelations: The art of analyzing hacked and leaked data

This post is a review for Computing Reviews for Hacks, leaks and revelations: The art of analyzing hacked and leaked data, a book published by No Starch Press
Imagine you've come across a trove of files documenting a serious misdeed and you feel the need to blow the whistle. Or maybe you are an investigative journalist and this whistleblower trusts you and wants to give you said data. Or maybe you are a technical person, trusted by said journalist to help them do things right, not only to help them avoid being exposed while leaking the information, but also to assist them in analyzing the contents of the dataset. This book will be a great aid for all of the above tasks.

The author, Micah Lee, is both a journalist and a computer security engineer. The book is written entirely from his experience handling important datasets, and is organized in a very logical and sound way. Lee organized the 14 chapters in five parts. The first part, the most vital to transmitting the book's message in my opinion, begins by talking about the care that must be taken when handling a sensitive dataset: how to store it, how to communicate it to others, sometimes even what to redact (exclude) so the information retains its strength but does not endanger others (or yourself). The first two chapters introduce several tools for encrypting information and keeping communication anonymous, not getting too deep into details and keeping it aimed at a mostly nontechnical audience.

Something that really sets this book apart from others like it is that Lee's aim is not only to tell stories about the hacks and leaks he has worked with, or to present the technical details on how he analyzed them, but to teach readers how to do the work. From Part 2 onward the book adopts a tutorial style, teaching the reader numerous tools for obtaining and digging information out of huge and very timely datasets. Lee guides the reader through various data breaches, all of them leaked within the last five years: BlueLeaks, Oath Keepers' email dumps, Heritage Foundation, Parler, Epik, and Cadence Health. He guides us through a tutorial on using the command line (mostly targeted at Linux, but considering MacOS and Windows as well), running Docker containers, learning the basics of Python, parsing and filtering structured data, writing small web applications for getting at the right bits of data, and working with structured query language (SQL) databases.

The book does an excellent job of fulfilling its very ambitious aims, and this is even more impressive given the wide range of professional profiles it is written for; that being said, I do have a couple of critiques. First, the book is ideologically loaded: the datasets all relate to the alt-right movement that has gained strength in the last decade. Lee takes the reader through many instances of COVID deniers, rioters for Donald Trump during the January 2021 attempted coup, attacks against Black Lives Matter activists, and other extremism research; thus this book could alienate right-wing researchers, who might also be involved in handling important whistleblowing cases. Second, given the breadth of the topic and my 30-plus years of programming experience, I was very interested in the first part of each chapter but less so in the tutorial part. I suppose a journalist reading through the same text might find the sections about the importance of data handling and source protection to be similarly introductory. This is unavoidable, of course, given the nature of this work.
However, while Micah Lee is an excellent example of a journalist with the appropriate technical know-how to process the types of material he presents as examples, expecting any one person to become a professional in both fields is asking too much. All in all, this book is excellent. The writing style is informal and easy to read, the examples are engaging, and the analysis is very good. It will certainly teach you something, no matter your background, and it might very well complement your professional skills.

9 April 2024

Gunnar Wolf: Think outside the box Welcome Eclipse!

Now that we are back from our six-month period in Argentina, we decided to adopt a kitten, to bring more diversity into our lives. Perhaps this little girl will teach us to think outside the box! Yesterday we witnessed a solar eclipse. Mexico City was not in the totality range (we reached ~80%), but it was a great experience to go with the kids. A couple dozen thousand people gathered for a massive picnic in las islas, the main area inside our university campus. Afterwards, we went briefly back home, then crossed the city to fetch the little kitten. Of course, the kids were unanimous: her name is Eclipse.

18 March 2024

Gunnar Wolf: After miniDebConf Santa Fe

Last week we held our promised miniDebConf in Santa Fe City, Santa Fe province, Argentina, just across the river from Paraná, where I have spent almost six beautiful months I will never forget. Around 500 kilometers north of Buenos Aires, Santa Fe and Paraná are separated by the beautiful and majestic Paraná river, which flows from Brazil, marks the eastern border of Paraguay, and continues within Argentina as the heart of the litoral region of the country, until it merges with the Uruguay river (you guessed right: the river marking the eastern border of Argentina, first with Brazil and then with Uruguay), and they become the Río de la Plata.

This was a short miniDebConf: we were lent the APUL union's building for the weekend (thank you very much!); during Saturday, we had a cycle of talks, and on Sunday we had more of a hacklab logic, with some unstructured time to work each on our own projects, and to talk and have a good time together. We were five Debian people attending: santiago, debacle, eamanu, dererk and gwolf (@debian.org). My main contact for kickstarting the organization was Martín Bayo. Martín was for many years the leader of the Technical Degree on Free Software at Universidad Nacional del Litoral, where I was also a teacher for several years. Together with Leo Martínez, also a teacher at the tecnicatura, they put us in contact with Guillermo and Gabriela, from the APUL non-teaching-staff union of said university. We had the following set of talks (for which there is a promise to get an electronic record, as APUL was kind enough to record them! Of course, I will push them to our usual conference video archiving service as soon as I get them):
Hour | Title (Spanish) | Title (English) | Presented by
10:00-10:25 | Introducción al Software Libre | Introduction to Free Software | Martín Bayo
10:30-10:55 | Debian y su comunidad | Debian and its community | Emanuel Arias
11:00-11:25 | ¿Por qué sigo contribuyendo a Debian después de 20 años? | Why am I still contributing to Debian after 20 years? | Santiago Ruano
11:30-11:55 | Mi identidad y el proyecto Debian: ¿Qué es el llavero OpenPGP y por qué? | My identity and the Debian project: What is the OpenPGP keyring and why? | Gunnar Wolf
12:00-13:00 | Lunch | |
14:30-14:55 | Debian para el día a día | Debian for our every day | Leonardo Martínez
15:00-15:25 | Debian en las Raspberry Pi | Debian on the Raspberry Pi | Gunnar Wolf
15:30-15:55 | Device Trees | Device Trees | Lisandro Damián Nicanor Pérez Meyer (videoconference)
16:00-16:25 | Python en Debian | Python in Debian | Emmanuel Arias
16:30-16:55 | Debian y XMPP en la medición de viento para la energía eólica | Debian and XMPP in wind measurement for wind energy | Martin Borgert
As it always happens, DebConf, miniDebConf and other Debian-related activities are always fun, always productive, always a great opportunity to meet again our decades-long friends. Let's see what comes next!

7 March 2024

Gunnar Wolf: Constructed truths: truth and knowledge in a post-truth world

This post is a review for Computing Reviews for Constructed truths: truth and knowledge in a post-truth world, a book published by Springer
Many of us grew up used to having some news sources we could implicitly trust, such as well-positioned newspapers and radio or TV news programs. We knew they would only hire responsible journalists rather than risk diluting public trust and losing their brand's value. However, with the advent of the Internet and social media, we are witnessing what has been termed the post-truth phenomenon. The undeniable freedom that horizontal communication has given us automatically brings with it the emergence of filter bubbles and echo chambers, and truth seems to become a group belief.

Contrary to my original expectations, the core topic of the book is not about how current-day media brings about post-truth mindsets. Instead, it goes into a much deeper philosophical debate: What is truth? Does truth exist by itself, objectively, or is it a social construct? If activists with different political leanings debate a given subject, is it even possible for them to understand the same points for debate, or do they truly experience parallel realities? The author wrote this book clearly prompted by the unprecedented events that took place in 2020, as the COVID-19 crisis forced humanity into isolation and online communication. Donald Trump is explicitly and repeatedly presented throughout the book as an example of an actor that took advantage of the distortions caused by post-truth.

The first chapter frames the narrative from the perspective of information flow over the last several decades: how the emergence of horizontal, uncensored communication free of editorial oversight started empowering the netizens and created a temporary information flow utopia. But soon afterwards, algorithmic gatekeepers started appearing, creating a set of personalized distortions of reality; users started getting news aligned with what they had already shown interest in. This led to an increase in polarization and the growth of narrative-framing-specific communities that served as echo chambers for disjoint views on reality. This led to the growth of conspiracy theories and, necessarily, to the science denial and pseudoscience that reached unimaginable peaks during the COVID-19 crisis. Finally, when readers decide based on completely subjective criteria whether a scientific theory such as global warming is true or propaganda, or question what most traditional news outlets present as facts, we face the phenomenon known as fake news. Fake news leads to post-truth, a state where it is impossible to distinguish between truth and falsehood, and serves only a rhetorical function, making rational discourse impossible.

Toward the end of the first chapter, the tone of the writing quickly turns away from describing developments in the spread of news and facts over the last decades and goes deep into philosophy, into the very thorny subject pursued by said discipline for millennia: How can truth be defined? Can different perspectives bring about different truth values for any given idea? Does truth depend on the observer, on their knowledge of facts, on their moral compass or on their honest opinions? Zoglauer dives into epistemology, following various thinkers' ideas on what can be understood as truth: constructivism (whether knowledge and truth values can be learnt by an individual building from their personal experience), objectivity (whether experiences, and thus truth, are universal, or whether they are naturally individual), and whether we can proclaim something to be true when it corresponds to reality.

For the final chapter, he dives into the role information and knowledge play in assigning and understanding truth value, as well as the value of second-hand knowledge: Do we really own knowledge because we can look up facts online (even if we carefully check the sources)? Can I, without any medical training, diagnose a sickness and its treatment by honestly and carefully looking up its symptoms in medical databases?

Wrapping up, while I very much enjoyed reading this book, I must confess it is completely different from what I expected. This book digs much more into the abstract than into information flow in modern society, or the impact on early 2020s politics as its editorial description suggests. At 160 pages, the book is not a heavy read, and Zoglauer's writing style is easy to follow, even across the potentially very deep topics it presents. Its main readership is not necessarily computing practitioners or academics. However, for people trying to better understand epistemology through its expressions in the modern world, it will be a very worthy read.
