Search Results: "nomeata"

7 September 2021

Joachim Breitner: A Candid explainer: Opt is special

This is the third post in a series about the interface description language Candid.

The record extension problem Initially, the upgrade rules of Candid were a straight-forward application of the canonical subtyping rules. This worked and was sound, but it forbade one very commonly requested use case: extending records in argument position. The subtyping rule for records says that
   record { old_field : nat; new_field : int }
<: record { old_field : nat }
or, in words, that subtypes can have more fields than supertypes. This makes intuitive sense: A user of such a record value who only expects old_field to be there is not bothered if suddenly a new_field appears. But a user who expects new_field to be there is thrown off if it suddenly isn't anymore. Therefore it is ok to extend the records returned by a service with new fields, but not to extend the records in your methods' argument types. But users want to extend their record types over time, also in argument position! In fact, they often want to extend them in both positions. Consider a service with the following interface, where the CUser record appears both in argument and result position:
type CUser = record { login : text; name : text };
service C : {
  register_user : (CUser) -> ();
  get_user_data : (text) -> (CUser);
}
It seems quite natural to want to extend the record with a new field (say, the age of the user). But simply changing the definition of CUser to
type CUser = record { login : text; name : text; age : nat }
is not ok, because now register_user requires the age field, but old clients don't provide it. So how can we allow developers to make changes like this (because they really really want that), while keeping the soundness guarantees made by Candid (because we really really want that)? This question has bothered us for over two years, and we even created a meta issue that records the dozen approaches we considered, discussed and eventually ditched.

Dynamic subtyping checks in opt I will spare you the history lesson, though, and explain the solution we eventually came up with. In the example above it seems unreasonable to let the developer add a field age of type nat. Since there may be old clients around, the service unavoidably has to deal with records that don't have an age field. If the code expects an age value, what should it do in that case? But one can argue that changing the record type to
type CUser = record { login : text; name : text; age : opt nat }
could work: If the age field is present, use the value. If the field is absent, inject a null value during decoding. In the first-order case, this rather obvious solution would work just fine, and we'd be done. But Candid aims to solve the higher-order case, and I said things get tricky here, didn't I? Consider another, independent service D with the following interface:
type DUser = record { login : text; name : text };
service D : {
  on_user_added : (func (DUser) -> ()) -> ()
}
This service has a method that takes a method reference, presumably with the intention of calling it whenever a new user is added to service D. And because the types line up nicely, we can compose these two services, by once calling D.on_user_added(C.register_user), so that from now on D calls C.register_user(…). These kinds of service mesh-ups are central to the vision of the Internet Computer! But what happens if the services now evolve their types in different ways, e.g. towards
type CUser = record { login : text; name : text; age : opt nat }
and
type DUser = record { login : text; name : text; age : opt variant { under_age; adult } }
Individually, these are type changes that we want to allow. But now the call from D to C may transmit a value of record { …; age = opt variant { under_age } } when C expects a natural number! This is precisely the kind of mismatch we want to prevent by introducing types, and it seems we have lost soundness. The best solution we could find is to make the opt type somewhat special, and to apply these extra rules when decoding at an expected type of opt t.
  • If this was a record field, and the provided record value doesn't even have a field of that name, don't fail but instead treat this as null. This handles the first-order example above.
  • If there is a value, look at its type (which, in Candid, is part of the message).
    • If it is opt t' and t' <: t, then decode as normal.
    • If it is opt t' but t' <: t does not hold, then treat that as null. This should only happen in these relatively obscure higher-order cases where services evolved differently and incompatibly, and it makes sure that the calls that worked before the upgrades will continue to work afterwards. It is not advisable to actually use that rule when changing your service's interface. Tools that assist the developer with an upgrade check should prevent or warn about the use of this rule.
  • Not strictly required here, but since we are making opt special anyways: If its type t' is not of the form opt …, pretend it was opt t', and also pretend that the given value was wrapped in opt. This allows services to make record fields in arguments that were required in the old version optional in a new version, e.g. as a way to deprecate them. So it is mildly useful, although I can report that it makes the meta-theory and implementation rather complex, in particular together with equirecursion and beasts like type O = opt O. See this discussion for a glimpse of that.
In the above I stress that we look at the type of the provided value, and not just the value. For example, if the sender sends the value opt vec {} at type opt vec nat, and the recipient expects opt vec bool, then this will decode as null, even though one could argue that the value opt vec {} could easily be understood at type opt vec bool. We initially had that, but then noticed that this still breaks soundness when there are references inside, and that we have to do a full subtyping check in the decoder. This is very unfortunate, because it makes writing a Candid decoder a noticeably harder task that requires complicated graph algorithms with memoization (which I must admit Motoko has not implemented yet), but it's the least bad solution we could find so far.
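To make these rules a bit more concrete, here is a toy Haskell sketch of the special decoding step at an expected type opt t. This is my own illustration over a tiny made-up type universe, not the actual Candid decoder; the missing-record-field rule is omitted, and isSubtypeOf merely stands in for the full structural subtyping check discussed above.

-- Toy types and values; real Candid is much richer and equirecursive,
-- which is what makes the real subtyping check genuinely hard.
data Type  = Nat | Bool | Vec Type | Opt Type deriving (Eq, Show)
data Value = VNat Integer | VBool Bool | VVec [Value] | VOpt Value | VNull
  deriving (Eq, Show)

-- Stand-in for the full subtyping check.
isSubtypeOf :: Type -> Type -> Bool
isSubtypeOf (Vec t) (Vec t') = isSubtypeOf t t'
isSubtypeOf _       (Opt _)  = True   -- with the special rules, t <: opt t' always holds
isSubtypeOf t       t'       = t == t'

-- Decoding a received value (whose type is part of the message)
-- at an expected type of the form opt t.
decodeAtOpt :: Type -> (Type, Value) -> Value
decodeAtOpt t (Opt t', VOpt v)
  | t' `isSubtypeOf` t = VOpt v       -- opt t' with t' <: t: decode as normal
  | otherwise          = VNull        -- incompatible payload: treat as null
decodeAtOpt _ (Opt _, VNull) = VNull
decodeAtOpt t (t', v)                 -- non-opt payload: pretend it was wrapped in opt
  | t' `isSubtypeOf` t = VOpt v
  | otherwise          = VNull

-- The example from the text: a value arriving at type opt vec nat but
-- expected at opt vec bool decodes as null instead of failing:
-- decodeAtOpt (Vec Bool) (Opt (Vec Nat), VOpt (VVec [VNat 1]))  ==  VNull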

Isn't that just dynamic typing? You might have noticed that with these rules, t <: opt t' holds for all types t and t'. In other words, every opt is a top type (like reserved), and thus all optional types are equivalent. One could argue that we are throwing all of the benefits of static typing overboard this way. But it's not quite as bad: It's true that decoding Candid values now involves a dynamic check inserting null values in certain situations, but as long as everyone plays by the rules (i.e. upgrades their services according to the Candid safe upgrade rules, and heeds the warning alluded to above), these certain situations will only happen in the obscurest of cases. It is, by the way, not material to this solution that the special subtyping behavior is applied to the opt type, and a neighboring point in the design space would be a dedicated upgraded t type constructor. That would allow developers to use the canonical opt type with the usual subtyping rules, and to communicate their intent ("clients should consider this field optional" vs. "old clients don't know about this field, but new clients should use it") more cleanly, at the expense of having more non-canonical types in the type system. To see why it is convenient if Candid has mostly just the normal types, read the next post, which will describe how Candid can be integrated into a host language.

30 August 2021

Joachim Breitner: A Candid explainer: Safe higher-order upgrades

This is the second post in a series about the interface description language Candid.

Safe upgrades A central idea behind Candid is that services evolve over time, and so their interfaces evolve as well. As they do, it is desirable to keep the interface usable by clients who have not been updated. This is particularly important on a blockchainy platform like the Internet Computer, where some programs are immutable and cannot be changed to accommodate changes in the interfaces of the services they use. Therefore, Candid defines which changes to an interface are guaranteed to be backward compatible. Of course it's compatible to add new methods to a service, but some changes to a method signature can also be ok. For example, changing
service A1 : {
  get_value : (variant { current; previous : nat })
    -> (record { value : int; last_change : nat })
}
to
service A2 : {
  get_value : (variant { current; previous : nat; default })
    -> (record { value : int; last_change : nat; committed : bool })
}
is fine: It doesn't matter that clients that still use the old interface don't know about the new constructor of the argument variant. And the extra field in the result record will silently be ignored by old clients. In the Candid spec, this relation is written as A2 <: A1 (note the order!), and the formal footing this builds on is subtyping. We thus say that it is "safe to upgrade a service to a subtype", and that A2's interface is a subtype of A1's.

In small examples, I often use nat and int, because every nat is also an int, but some int values are not nat values, namely the negative ones. We say nat is a subtype of int, nat <: int. So a function that in the old version returned an int can in the new version return a nat without breaking old clients. But the other direction doesn't work: If the old version's method had a return type of nat, then changing that to int would be a breaking change, as old clients would not be prepared to handle negative numbers.

Note that arguments of function types follow different rules. In fact, the rules are simply turned around, and in argument position (also called "negative position") we can evolve the interface to accept supertypes. Concretely, a function that originally expected a nat can be changed to expect an int, but the other direction doesn't work, because there might still be old clients around that send negative numbers. This reversing of the rules is called contravariance.

The vision is that the developer's tooling will warn the developer before they upgrade the code of a running service to a type that is not a subtype of the old interface. Let me stress, though, that all this is about not breaking existing clients that use the old interface. It does not mean that a client developer who fetches the new interface for your service won't have to change their code to make their programs compile again.

Higher order composition So far, we considered the simple case of one service with a bunch of clients, i.e. the first-order situation. Things get much more interesting if we have multiple services that are composed, that pass around references to each other, and we still want everything to be nicely typed and never fail even as we upgrade services. Therefore, Candid's type system can express that a service's method expects or returns a reference to another service or method, and the type that this service or method should have. For example
service B : { add_listener : (text, func (int) -> ()) -> () }
says that the service with interface B has a method called add_listener which takes two arguments, a plain value of type text and a reference to a function that itself expects an int-typed argument. The contravariance of the subtyping rules explained above also applies to the types of these function references. And because the int in the above type is on the left of two function arrows, the subtyping rule direction flips twice, and it is therefore safe to upgrade the service to accept the following interface:
service B : { add_listener : (text, func (nat) -> ()) -> () }
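A tiny Haskell sketch may make the flipping of directions more tangible. This is my own toy model of a fragment of the type language, not Candid's actual subtyping algorithm, and it ignores Candid's rules about adding or removing whole arguments; sub t1 t2 checks t1 <: t2.

data Ty = TNat | TInt | TText | TFunc [Ty] [Ty] deriving (Eq, Show)

sub :: Ty -> Ty -> Bool
sub TNat TInt = True                              -- nat <: int
sub (TFunc args1 ress1) (TFunc args2 ress2) =
     length args1 == length args2 && length ress1 == length ress2
  && and (zipWith sub args2 args1)                -- arguments: contravariant (direction flipped)
  && and (zipWith sub ress1 ress2)                -- results: covariant
sub t1 t2 = t1 == t2

-- The add_listener example: the upgraded method type is a subtype of the
-- old one, because the int sitting under two function arrows may become nat.
oldListener, newListener :: Ty
oldListener = TFunc [TText, TFunc [TInt] []] []   -- (text, func (int) -> ()) -> ()
newListener = TFunc [TText, TFunc [TNat] []] []   -- (text, func (nat) -> ()) -> ()

-- sub newListener oldListener  evaluates to True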

Soundness theorem and Coq proof The combination of these higher-order features and the safe upgrade mechanism is maybe the unique selling point of Candid, but it is also what made its design quite tricky at times. And although the conventional subtyping rules work well, we wanted to do some less conventional things (more on that below), and more than once thought we had a good solution, only to notice that it did not hold water in complicated higher-order cases.

But what does it mean to hold water? I felt the need to precisely define a soundness criterion for higher-order interface description languages, which you can find in the document An IDL Soundness Proposition. This is a general framework which you can instantiate with your concrete IDL language and rules, and then it tells you what you have to prove to consider your language to be sound. Intuitively, it simulates all possible ways in which services can be defined and upgraded, and in which they can pass around references to each other, and the soundness property is that all messages sent between services can always be understood. The document also shows, in general, that any system that builds on canonical subtyping is sound, as expected. That is still a generic theorem that you can instantiate with a concrete system, but now there is less to do.

Because these proofs can get tricky in corner cases, it is valuable to mechanize them in an interactive theorem prover and have the computer check the proofs. So I have created a Coq formalization that defines the soundness criteria, including the reduction to canonical subtyping. It also defines MiniCandid, a greatly simplified variant of Candid, proves various properties about it (transitivity etc.) and finally instantiates the soundness theorem. I am particularly fond of the use of coinduction to model the equirecursive types, and the use of named cases, as we know them from Isabelle, to make the proofs a bit more readable and maintainable. I am less fond of how much work it seems to be to extend MiniCandid with more of Candid's types. Already adding vec was more work than it's worth, and I defensively blame Coq's not-so-great support for nested recursion.

The soundness relies on subtyping, and that is all fine and well as long as we stick to canonical rules. Unfortunately, practical considerations force us to deviate from these a bit, as we will see in the next post of this series.

29 August 2021

Joachim Breitner: A Candid explainer: The rough idea

One of the technologies that was created by DFINITY as we built the Internet Computer is Candid, an interface description language. In this in-depth blog post series I want to shed some light onto various aspects of Candid. The target audience is mainly anyone who wants to deeply understand Candid, e.g. researchers, implementors of Candid tools, anyone who wants to build a better alternative, but no particular prior Candid knowledge is expected. Also, much of what is discussed here is independent of the Internet Computer. Some posts may be more theoretical or technical than others; if you are lost in one I hope you'll rejoin for the subsequent post. Much in these posts is not relevant for developers who just want to use Candid in their projects, and also much that they want to know is missing. The Internet Computer dev docs will hopefully cater to that audience.

Blog post series overview
  1. The rough idea (this post) Announcing the blog post series, and very briefly outlining what an Interface Description Language even is.
  2. Safe higher-order upgrades Everyone can do first-order interface description languages. What makes Candid special is its support for higher-order service composition. This post also contains some general meta-theory and a Coq proof!
  3. Opt is special Extending records in argument position is surprisingly tricky. Learn why, and how we solved it.
  4. Language integration The point of Candid is inter-op, so let's look at the various patterns of connecting Candid to your favorite programming language.
  5. Quirks The maybe most interesting post in this series: Candid has a bunch of quirks (hashed field names, no tuples but sequences, references that nobody used, and more). I'll explain some of them, why they are there, and what else we could have done. Beware: This post will be opinionated.

The rough idea The idea of an interface description language is easy to begin with. Something like
service A : {
  add : (int) -> ();
  get : () -> (int);
}
clearly communicates that a service called A provides two methods add and get, that add takes an integer as an argument, and that get returns such a number. Of course, this only captures the outer shape of the service, but not what it actually does: we might assume it implements a counter of sorts, but that is not part of the Candid interface description.

In order to now use the service, we also need a (low-level) transport mechanism. Candid itself does not specify that, but merely assumes that it exists, and is able to transport the name of the invoked method and a raw sequence of bytes to the service, and a response (again a raw sequence of bytes) back. On the Internet Computer, two distinct transport mechanisms are already in use: external calls via an HTTP-based RPC interface, and on-chain inter-canister calls. Both are handled by the service in the same way. Here we can see that Candid succeeds in abstracting over differences in the low-level transport mechanism. Because of this abstraction, it is possible to use Candid over other transport mechanisms as well (conventional HTTPS, E-Mail, avian carrier). I think Candid has potential there as well, so even if you are not interested in the Internet Computer, this post may be interesting to you.

The translation of argument and result values (e.g. numbers of type int) to the raw sequence of bytes is specified by Candid, defining a wire format. For example, the type int denotes whole numbers of arbitrary size, and they are encoded using the LEB128 scheme. The Candid wire format also contains a magic number (DIDL, which is 0x4449444c in hex) and a type description (more on that later). So when passing the value 2342 to the add method, the raw bytes transported will be 0x4449444c00017da612.

Of course we want to transfer more data than just integers, and thus Candid supports a fairly complete set of common basic types (nat, int, nat8, nat16, nat32, nat64, int8, int16, int32, int64, float32, float64, bool, text, blob) and composite types (vectors, optional values, records and variants). The design goal is to provide a canonical set of types: enough to express most data that you might want to pass, but no more than needed, so that different host languages can support these types easily.

The Candid type system is structural. This means that two types are the same when they are defined the same way. Imagine two different services defining the same type
type User = record { name : text; user_id : nat }
then although User is defined in two places, it's still the same type. In other words, these named type definitions are always just simple aliases, and what matters is their right-hand side. Because Candid types can be recursive (e.g. type Peano = opt Peano), this means we have an equirecursive type system, which makes some things relatively hard. So far, nothing too special about Candid compared to other interface definition languages (although a structural equirecursive type system is already something). In the next post we look at reference types, and how such higher-order features make Candid an interesting technology.
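As a small sanity check on the wire-format example above (0x4449444c00017da612): the trailing bytes a6 12 are exactly the LEB128 encoding of 2342. The following standalone Haskell sketch of the signed LEB128 variant (SLEB128), which Candid uses for int values, is my own illustration and not the reference implementation.

import Data.Bits ((.&.), (.|.), shiftR, testBit)
import Data.Word (Word8)
import Text.Printf (printf)

-- Signed LEB128: emit seven bits at a time, least significant group first,
-- setting the high bit of every byte except the last one.
sleb128 :: Integer -> [Word8]
sleb128 n =
  let byte = fromIntegral (n .&. 0x7f)
      rest = n `shiftR` 7
      done = (rest == 0  && not (testBit byte 6))
          || (rest == -1 && testBit byte 6)
  in if done then [byte] else (byte .|. 0x80) : sleb128 rest

main :: IO ()
main = putStrLn (concatMap (printf "%02x") (sleb128 2342))
-- prints "a612", the argument bytes at the end of the example message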

22 August 2021

Joachim Breitner: Sweet former employee appreciation

Two months ago was my last day of working as an employee for DFINITY, and as I mentioned in the previous blog post, one of my main contributions was the introduction and maintenance of the Interface Specification.
The Interface Specification as a book
Hence I was especially happy to find that my former colleague Hans Larsen has, as a farewell gift and token of appreciation, created a hard copy of the document, with 107 pages of dry technical content, and a very sweet dedication on the back. This is especially valuable as it came from Lars personally (i.e. it's not just routine HR work to keep former employees happy, which I could imagine to be a thing in big and mature corporations), and despite the fact that he himself has left DFINITY since.

I also take this as further confirmation that this specification-driven design process, despite the initial resistance and daily hurdles, is a good one. A rough list of guiding principles would be:

The similarity to the concept of deep specifications from the DeepSpec project is indeed striking. One of my hopes at DFINITY was that these principles would catch on and that other components (e.g. the NNS, the nodes or the ICP ledger canister) would follow suit. That did not happen, it seems. Or rather, it did not happen yet.

PS, in case you are wondering: The rendering of the document that's shown at https://sdk.dfinity.org/docs/interface-spec/index.html is not great; the internal website had a different style which made navigating this large document easier, as it was a dedicated site with the document's nested table of contents on the left.

19 June 2021

Joachim Breitner: Leaving DFINITY

Last Thursday was my last working day at DFINITY. There are various reasons why I felt that after almost three years the DFINITY Foundation isn't quite the right place for me anymore, and this plan has been in the making for a while. Primarily, there are personal pull factors that strongly suggest that I'll take a break from full-time employment, so I decided to see the launch of the Internet Computer through and then leave.

DFINITY has hired some amazing people, and it was a great pleasure to work with them. I learned a lot (some Rust, a lot of Nix, and just how merciless Conway's law is), and I dare say I had the opportunity to do some good work, contributing my part to make the Internet Computer a reality. I am especially proud of the Interface Specification and the specification-driven design principles behind it. It even comes with a model reference implementation and acceptance test suite, and although we didn't quite get to do formalization, those familiar with the DeepSpec project will recognize some influence of their concept of "deep specifications". Besides that, there is of course my work on the Motoko programming language, where I built the backend, and the Candid interoperability layer, where I helped with the formalism, formulated a generic soundness criterion for Interface Description Languages in a higher-order setting, and formally verified it in Coq. Fortunately, all of this work is now Free Software or at least Open Source.

With so much work poured into this project, I continue to care about it, and you'll see me post on the developer forum and hack on Motoko. As the Internet Computer becomes gradually more open, I hope I can be gradually more involved again. But even without me contributing full-time I am sure that DFINITY and the Internet Computer will do well; when I left there were still plenty of smart, capable and enthusiastic people forging ahead.

So what's next? So far, I have rushed every professional transition in my life: when starting my PhD, when starting my postdoc, when starting my job at DFINITY, and every time I regretted it. So this time, I will take a proper break and will explore the world a bit (as far as that is possible given the pandemic). I will stresslessly contribute to various open source projects. I also hope to do more public outreach and teaching, writing more blog posts again, recording screencasts and giving talks and lectures. If you want to invite me to your user group/seminar/company/workshop, please let me know! Also, I might be up for small interesting projects in a while. Beyond these, I have no concrete plans and am looking forward to the inspiration I might get from hiking through the Scandinavian wilderness. If you happen to stumble across my tent, please stop for a tea.

2 June 2021

Joachim Breitner: Verifying the code of the Internet Identity service

The following post was meant to be posted at https://forum.dfinity.org/, but that discourse instance didn't like it; maybe too much inline code, so I'm posting it here instead. To my regular blog audience, please excuse the lack of context. Please comment at the forum post. The text was later also posted on the DFINITY medium blog.

You probably have used https://identity.ic0.app/ to log into various applications (the NNS UI, OpenChat etc.) before, and if you do that, you are trusting this service to take good care of your credentials. Furthermore, you might want to check that the Internet Identity is really not tracking you. So you want to know: Is this really running the code we claim it to run? Of course the following applies to other canisters as well, but I'll stick to the Internet Identity in this case. I'll walk you through the steps of verifying that:

Find out what is running A service on the Internet Computer, i.e. a canister, is a WebAssembly module. The Internet Computer intentionally does not allow you to just download the Wasm code of any canister, because maybe some developer wants to keep their code private. But it does expose a hash of the Wasm module. The easiest way to get it is using dfx:
$ dfx canister --no-wallet --network ic info rdmx6-jaaaa-aaaaa-aaadq-cai
Controller: r7inp-6aaaa-aaaaa-aaabq-cai
Module hash: 0xd4af9277f3e8d26fd8cdc7874a9f47b6456587fbb2a64d61b6b6880d144d3c04
The controller here is the canister id of the governance canister. This tells you that the Internet Identity is controlled by the Network Nervous System (NNS), and its code can only be changed via proposals that are voted on. This is good; if the controller were just, say, me, I could just change the code of the Internet Identity and take over all your identities. The Module hash is the SHA-256 hash of the .wasm that was deployed. So let's follow that trace.

Finding the right commit Since upgrades to the Internet Identity are done via proposals to the NNS, we should find a description of such a proposal in the https://github.com/ic-association/nns-proposals repository, in the proposals/network_canister_management directory.
GitHub's list of recent NNS proposals
We have to find the latest proposal upgrading the Internet Identity. The folder unfortunately contains proposals for many canisters, and the file naming isn't super helpful. I usually go through the list from the bottom and look at the second column, which contains the title of the latest commit creating or modifying a file. In this case, the second to last is the one we care about: https://github.com/ic-association/nns-proposals/blob/main/proposals/network_canister_management/20210527T2203Z.md. This file lists rationales, gives an overview of changes and, most importantly, says that bd51eab is the commit we are upgrading to. The file also says that the wasm hash is d4a…c04, which matches what we saw above. This is good: it seems we really found the youngest proposal upgrading the Internet Identity, and that the proposal actually went through. WARNING: If you are paranoid, don't trust this file. There is nothing preventing a proposal proposer from creating a file pointing to one revision, but actually including other code in the proposal. That's why the next steps are needed.

Getting the source Now that we have the revision, we can get the source and check out revision bd51eab:
/tmp $ git clone https://github.com/dfinity/internet-identity
Cloning into 'internet-identity'...
remote: Enumerating objects: 3959, done.
remote: Counting objects: 100% (344/344), done.
remote: Compressing objects: 100% (248/248), done.
remote: Total 3959 (delta 161), reused 207 (delta 92), pack-reused 3615
Receiving objects: 100% (3959/3959), 6.05 MiB | 3.94 MiB/s, done.
Resolving deltas: 100% (2290/2290), done.
/tmp $ cd internet-identity/
/tmp/internet-identity $ git checkout bd51eab
/tmp/internet-identity $ git log --oneline -n 1
bd51eab (HEAD, tag: mainnet-20210527T2203Z) Registers the seed phrase before showing it (#301)
In the last line you see that the Internet Identity team has tagged that revision with a tag name that contains the proposal description file name. Very tidy!

Reproducing the build The README.md has the following build instructions:

Official build The official build should ideally be reproducible, so that independent parties can validate that we really deploy what we claim to deploy. We try to achieve some level of reproducibility using a Dockerized build environment. The following steps should build the official Wasm image
docker build -t internet-identity-service .
docker run --rm --entrypoint cat internet-identity-service /internet_identity.wasm > internet_identity.wasm
sha256sum internet_identity.wasm
The resulting internet_identity.wasm is ready for deployment as rdmx6-jaaaa-aaaaa-aaadq-cai, which is the reserved principal for this service.

It actually suffices to run the first command, as it also prints the hash (we don't need to copy the .wasm out of the Docker container):
/tmp/internet-identity $ docker build -t internet-identity-service .
 
Step 26/26 : RUN sha256sum internet_identity.wasm
 ---> Running in 1a04644b544c
d4af9277f3e8d26fd8cdc7874a9f47b6456587fbb2a64d61b6b6880d144d3c04  internet_identity.wasm
Removing intermediate container 1a04644b544c
 ---> bfe6a63a7980
Successfully built bfe6a63a7980
Successfully tagged internet-identity-service:latest
Success! The hashes match. You don't believe me? Try it yourself (and let us know if you get a different hash, maybe I got hacked). This may fail if you have too little RAM configured for Docker; 8GB should be enough. At this point you have a trust path from the code sitting in front of you to the Internet Identity running at https://identity.ic0.app, including the front-end code, and you can start auditing the source code.

What about the canister id? If you paid close attention you might have noticed that we got the module hash for canister rdmx6-jaaaa-aaaaa-aaadq-cai, but we are accessing a web application at https://identity.ic0.app. So where is the connection? In the future, I expect some form of a DNS-like nice host name registry on the Internet Computer that stores a mapping from nice names to canister ids, and that you will be able to query, in a secure way (e.g. using certified variables), to find out that https://identity.ic0.app is indeed served by rdmx6-jaaaa-aaaaa-aaadq-cai. But since we don't have that yet, but still want you to be able to use a nice name for the Internet Identity (and not have to change the name later, which would cause headaches), we have hard-coded this mapping for now. The relevant code here is the Certifying Service Worker that your browser downloads when accessing any *.ic0.app URL. This piece of code will then intercept all requests to that domain, map them to a query call, and then use certified variables to validate the response. And indeed, the mapping is in the code there:
const hostnameCanisterIdMap: Record<string, [string, string]> = {
  'identity.ic0.app': ['rdmx6-jaaaa-aaaaa-aaadq-cai', 'ic0.app'],
  'nns.ic0.app': ['qoctq-giaaa-aaaaa-aaaea-cai', 'ic0.app'],
  'dscvr.ic0.app': ['h5aet-waaaa-aaaab-qaamq-cai', 'ic0.page'],
};

What about other canisters? In principle, the same approach works for other canisters, whether it's OpenChat, the NNS canisters etc. But the details will differ, as every canister developer might have their own way of
  • communicating the location and revision of the source for their canisters
  • building the canisters
In particular, without a reproducible way of building the canister, this will fail, and that's why projects like https://reproducible-builds.org/ are so important in general.

22 December 2020

Joachim Breitner: Don't think, just defunctionalize

TL;DR: CPS conversion and defunctionalization can help you to come up with a constant-stack algorithm. Update: Turns out I inadvertently plagiarized the talk The Best Refactoring You've Never Heard Of by James Koppel. Please consider this a form of sincere flattery.

The starting point Today, I'll take you on another little walk through the land of program transformations. Let's begin with a simple binary tree, with values of unknown type in the leaves, as well as the canonical map function:
data T a = L a | B (T a) (T a)
map1 :: (a -> b) -> T a -> T b
map1 f (L x) = L (f x)
map1 f (B t1 t2) = B (map1 f t1) (map1 f t2)
As you can see, this map function is using the program stack as it traverses the tree. Our goal is now to come up with a map function that does not use the stack! Why? Good question! In Haskell, there wouldn't be a strong need for this, as the Haskell stack is allocated on the heap, just like your normal data, so there is plenty of stack space. But in other languages or environments, the stack space may have a hard limit, and it may be advised to not use unbounded stack space. That aside, it's a fun exercise, and that's sufficient reason for me. (In the following, I assume that tail calls, i.e. those where a function ends with another function call without modifying its result, do not actually use stack space. Once all recursive function calls are tail calls, the code is equivalent to an imperative loop, as we will see.)

Think? We could now just stare at the problem (rather, the code), and try to come up with a solution directly. We'd probably think "ok, as I go through the tree, I have to remember all the nodes above me … so I need a list of those nodes … and for each of these nodes, I also need to remember whether I am currently processing the left child, and yet have to look at the right one, or whether I am done with the left child … so what do I have to remember about the current node …" ah, my brain spins already. Maybe eventually I would figure it out, but why think when we can derive the solution? So let's start with the above map1, and rewrite it, in several mechanical steps, into a stack-less, tail-recursive solution.

Go! Before we set out, let me rewrite the map function using a local go helper, as follows:
map2 :: forall a b. (a -> b) -> T a -> T b
map2 f t = go t
  where
    go :: T a -> T b
    go (L x) = L (f x)
    go (B t1 t2) = B (go t1) (go t2)
This transformation (effectively the "static argument transformation") has the nice advantage that we do not have to pass f around all the time, and that when we copy the function, we only have to change the top-level name, but not the names of the inner functions. Also, I find it more aesthetically pleasing.

CPS A blunt, effective tool to turn code that is not yet using tail calls into code that only uses tail calls is to use continuation-passing style. If we have a function of type … -> t, we turn it into a function of type … -> (t -> r) -> r, where r is the type of the result we want at the very end. This means the function now receives an extra argument, often named k for continuation, and instead of returning some x, the function calls k x. We can apply this to our go function. Here, both t and r happen to be T b, the type of finished trees:
map3 :: forall a b. (a -> b) -> T a -> T b
map3 f t = go t (\r -> r)
  where
    go :: T a -> (T b -> T b) -> T b
    go (L x) k  = k (L (f x))
    go (B t1 t2) k  = go t1 (\r1 -> go t2 (\r2 -> k (B r1 r2)))
Note that when we initially call go, we pass the identity function (\r -> r) as the initial continuation. Suddenly, all function calls are in tail position, and this code does not use stack space! Technically, we are done, although it is not quite satisfying; all these lambdas floating around obscure the meaning of the code, are maybe a bit slow to execute, and also, we didn't really learn much yet. This is certainly not the code we would have written after "thinking hard".

Defunctionalization So let's continue rewriting the code to something prettier, simpler. Something that does not use lambdas like this. Again, there is a mechanical technique that can help here. It likely won't make the code prettier, but it will get rid of the lambdas, so let's do that and clean up later. The technique is called defunctionalization (because it replaces functional values by plain data values), and can be seen as a form of refinement. Note that we pass around values of type (T b -> T b), but we certainly don't need the full generality of that type: only very specific values of that type occur in our program. So let us replace (T b -> T b) with a data type that contains representatives of just the values we actually use.
  1. We find all the values of type (T b -> T b) that occur in the program. These are:
    • (\r -> r)
    • (\r1 -> go t2 (\r2 -> k (B r1 r2)))
    • (\r2 -> k (B r1 r2))
  2. We create a datatype with one constructor for each of these:
      data K = I | K1 | K2
    (This is not complete yet.)
  3. We introduce an interpretation function that turns a K back into a (T b -> T b):
    eval :: K -> (T b -> T b)
    eval = (* TBD *)
  4. In the function go, instead of taking a parameter of type (T b -> T b), we take a K. And when we actually use the continuation, we have to turn the K back to the function using eval:
    go :: T a -> K a b -> T b
    go (L x) k  = eval k (L (f x))
    go (B t1 t2) k = go t1 K1
    We also do this to the code fragments identified in the first step; these become:
    • (\r -> r)
    • (\r1 -> go t2 K2)
    • (\r2 -> eval k (B r1 r2))
  5. Now we complete the eval function: For each constructor, we simply map it to the corresponding lambda from step 1:
    eval :: K -> (T b -> T b)
    eval I = (\r -> r)
    eval K1 = (\r1 -> go t2 K2)
    eval K2 = (\r2 -> eval k (B r1 r2))
  6. This doesn't quite work yet: We have variables on the right hand side that are not bound (t2, r1, k). So let's add them to the constructors K1 and K2 as needed. This also changes the type K itself; it now needs to take type parameters.
This leads us to the following code:
data K a b
  = I
  | K1 (T a) (K a b)
  | K2 (T b) (K a b)
map4 :: forall a b. (a -> b) -> T a -> T b
map4 f t = go t I
  where
    go :: T a -> K a b -> T b
    go (L x) k  = eval k (L (f x))
    go (B t1 t2) k  = go t1 (K1 t2 k)
    eval :: K a b -> (T b -> T b)
    eval I = (\r -> r)
    eval (K1 t2 k) = (\r1 -> go t2 (K2 r1 k))
    eval (K2 r1 k) = (\r2 -> eval k (B r1 r2))
Not really cleaner or prettier, but everything is still tail-recursive, and we are now working with plain data.

We like lists To clean it up a little bit, we can notice that the K data type really is just a list of values, where the values are either T a or T b. We do not need a custom data type for this! Instead of our K, we can just use the following, built from standard data types:
type K' a b = [Either (T a) (T b)]
Now I replace I with [], K1 t2 k with Left t2 : k and K2 r1 k with Right r1 : k. I also, very suggestively, rename go to down and eval to up:
map5 :: forall a b. (a -> b) -> T a -> T b
map5 f t = down t []
  where
    down :: T a -> K' a b -> T b
    down (L x) k  = up k (L (f x))
    down (B t1 t2) k  = down t1 (Left t2 : k)
    up :: K' a b -> T b -> T b
    up [] r = r
    up (Left  t2 : k) r1 = down t2 (Right r1 : k)
    up (Right r1 : k) r2 = up k (B r1 r2)
At this point, the code suddenly makes more sense again. In fact, I can try to verbalize it:
As we traverse the tree, we have to remember for all parent nodes, whether there is still something Left to do when we come back to it (so we remember a T a), or if we are done with that (so we have a T b). This is the list K' a b. We begin to go down the left of the tree (noting that the right siblings are still left to do), until we hit a leaf. We transform the leaf, and then go up. If we go up and hit the root, we are done. Else, if we go up and there is something Left to do, we remember the subtree that we just processed (as that is already in the Right form), and go down the other subtree. But if we go up and there is nothing Left to do, we put the two subtrees together and continue going up.
Quite neat!
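As a quick sanity check (my own addition, not part of the original derivation), the mechanically derived versions really do compute the same trees as map1. With a deriving Eq clause added to data T a, one can try in GHCi:

ghci> let t = B (B (L 1) (L 2)) (L 3)
ghci> map1 (*2) t == map5 (*2) t
True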

The imperative loop At this point we could stop: the code is pretty, makes sense, and has the properties we want. But let's turn the dial a bit further and try to make it an imperative loop. We know that if we have a single tail-recursive function, then that is equivalent to a loop, with the function's parameters turning into mutable variables. But we have two functions! It turns out that if you have two functions a -> r and b -> r that have the same return type (which they necessarily have here, since we CPS-converted them further up), then those two functions are equivalent to a single function taking "a or b", i.e. Either a b -> r. This is really nothing else than the high-school algebra rule r^a * r^b = r^(a+b). So (after reordering the arguments of up to put the T b first) we can rewrite the code as
map6 :: forall a b. (a -> b) -> T a -> T b
map6 f t = go (Left t) []
  where
    go :: Either (T a) (T b) -> K' a b -> T b
    go (Left (L x))     k        = go (Right (L (f x))) k
    go (Left (B t1 t2)) k        = go (Left t1) (Left t2 : k)
    go (Right r)  []             = r
    go (Right r1) (Left  t2 : k) = go (Left t2) (Right r1 : k)
    go (Right r2) (Right r1 : k) = go (Right (B r1 r2)) k
Do you see the loop yet? If not, maybe it helps to compare it with the following equivalent imperative looking pseudo-code:
mapLoop :: forall a b. (a -> b) -> T a -> T b
mapLoop f t {
  var node = Left t;
  var parents = [];
  while (true) {
    switch (node) {
      Left (L x) -> node := Right (L (f x))
      Left (B t1 t2) -> node := Left t1; parents.push(Left t2)
      Right r1 -> {
        if (parents.len() == 0) {
          return r1;
        } else {
          switch (parents.pop()) {
            Left t2  -> node := Left t2; parents.push(Right r1);
            Right r2 -> node := Right (B r1 r2)
          }
        }
      }
    }
  }
}

Conclusion I find it enlightening to see how apparently very different approaches to a problem (recursive, lazy functions and imperative loops) are connected by a series of rather mechanical transformations. When refactoring code, it is helpful to see if one can conceptualize the refactoring as one of those mechanical steps (refinement, type equivalences, defunctionalization, CPS conversion etc.). If you liked this post, you might enjoy my talk The many faces of isOrderedTree, which I have presented at MuniHac 2019 and Haskell Love 2020.

5 December 2020

Joachim Breitner: Named goals in Coq

TL;DR: Some mildly tricky tricks allow you to select subgoals by name, like in Isabelle. This leads to more readable proofs, and proofs that are more robust against changes.

The need for case names Consider the following Coq proof, right out of the respected Software Foundations book, Chapter Inductively Defined Propositions (and slightly simplified):
Lemma star_app: forall T (s1 s2 : list T) (re : reg_exp T),
  s1 =~ Star re ->
  s2 =~ Star re ->
  s1 ++ s2 =~ Star re.
Proof.
  intros T s1 s2 re H1.
  remember (Star re) as re'.
   generalize dependent s2.
  induction H1.
  - (* MEmpty *) discriminate.
  - (* MChar *) discriminate.
  - (* MApp *) discriminate.
  - (* MUnionL *) discriminate.
  - (* MUnionR *) discriminate.
  - (* MStar0 *)
    injection Heqre' as Heqre''. intros s H. apply H.
  - (* MStarApp *)
    injection Heqre' as Heqre''.
    intros s3 H1. rewrite <- app_assoc.
    apply MStarApp.
    + apply H1_.
    + apply IHexp_match2.
      * rewrite Heqre''. reflexivity.
      * apply H1.
Qed.
This is a very typical example of a proof by induction over an inductively defined relation (in this case, the relation s =~ re, which indicates that the string s matches the regular expression re): After some preparation, the invocation of a tactic like induction (could also be inversion or others) creates a number of new goals, one for each rule of the relation. Then the modern(!) Coq user uses proof bullets to focus the goals, one after another, in order, to solve them. We can see from this example that proof bullets (which were a big improvement) are not quite satisfactory, and the author of this proof felt the need to annotate each subproof with the case it corresponds to. But these are mere comments! This is unsatisfactory, as there is nothing keeping them correct as my proof changes. And anyone who came to Coq from Isabelle is just flabbergasted that the state of the art in Coq is still at that level. So, wouldn't it be great if
  • I could focus goals by their name (e.g. MEmpty)
  • Focus them in any order (so that refactoring of my data types does not break my proofs)
  • This even worked when simultaneously inducting over or inverting multiple relations?

Simulating named goals in Coq Well, with a few tricks we can! It just takes three simple steps:
  1. I load a small file full of hacks (explained below):
    Require Import NamedCases.
  2. This allows me to prefix the type of the constructors of the inductive relations (i.e. the rules of the relation) with "case <name>,", as follows:
    Reserved Notation "s =~ re" (at level 80).
    Inductive exp_match {T} : list T -> reg_exp T -> Prop :=
      | MEmpty:
        case empty,
        [] =~ EmptyStr
      | MChar:
        case char,
        forall x,
        [x] =~ (Char x)
      | MApp:
        case app,
        forall s1 re1 s2 re2
        (H1 : s1 =~ re1)
        (H2 : s2 =~ re2),
        (s1 ++ s2) =~ (App re1 re2)
      | MUnionL:
        case union_l,
        forall s1 re1 re2
        (H1 : s1 =~ re1),
        s1 =~ (Union re1 re2)
      | MUnionR:
        case union_r,
        forall re1 s2 re2
        (H2 : s2 =~ re2),
        s2 =~ (Union re1 re2)
      | MStar0:
        case star0,
        forall re,
        [] =~ (Star re)
      | MStarApp:
        case star_app,
        forall s1 s2 re
        (H1 : s1 =~ re)
        (H2 : s2 =~ (Star re)),
        (s1 ++ s2) =~ (Star re)
      where "s =~ re" := (exp_match s re).
    This is the definition from the Software Foundations book; only the case <name>, lines are added.
  3. Now I can change the proof: After the induction tactic I use ; name_cases to name the cases, and then I can use the relatively new named goal focusing feature ([name]: { … }, which I suggested two years ago, and which was implemented by Théo Zimmermann) to focus a specific case:
    Lemma star_app2: forall T (s1 s2 : list T) (re : reg_exp T),
      s1 =~ Star re ->
      s2 =~ Star re ->
      s1 ++ s2 =~ Star re.
    Proof.
      intros T s1 s2 re H1.
      remember (Star re) as re'.
      generalize dependent s2.
      induction H1; name_cases.
      [empty]: {
        discriminate.
      }
      [char]: {
        discriminate.
      }
      [app]: {
        discriminate.
      }
      [unionL]: {
        discriminate.
      }
      [unionR]: {
        discriminate.
      }
      [star0]: {
        injection Heqre' as Heqre''. intros s H. apply H.
      }
      [starApp]: {
        injection Heqre' as Heqre''.
        intros s3 H1. rewrite <- app_assoc.
        apply MStarApp; clear_names.
        + apply H1_.
        + apply IHexp_match2.
          * rewrite Heqre''. reflexivity.
          * apply H1.
      }
    Qed.
    The comments have turned into actual names!
Goal focusing is now just much more reliable. For example, let me try to discharge the boring cases by using ; try discriminate after the induction. This means there are suddenly only two goals left. With bullets, the first bullet would now suddenly focus a different goal, and my proof script would likely fail somewhere. With named goals, I get a helpful error message "No such goal: empty" and I can't even enter the first subproof. So let me delete these goals and, just to show off, focus the remaining goals in the wrong order:
Lemma star_app: forall T (s1 s2 : list T) (re : reg_exp T),
  s1 =~ Star re ->
  s2 =~ Star re ->
  s1 ++ s2 =~ Star re.
Proof.
  intros T s1 s2 re H1.
  remember (Star re) as re'.
  generalize dependent s2.
  induction H1; name_cases; try discriminate.
  [starApp]: {
    injection Heqre' as Heqre''.
    intros s3 H1. rewrite <- app_assoc.
    apply MStarApp; clear_names.
    + apply H1_.
    + apply IHexp_match2.
      * rewrite Heqre''. reflexivity.
      * apply H1.
  }
  [star0]: {
    injection Heqre' as Heqre''. intros s H. apply H.
  }
Qed.
I did not explain the clear_names tactic yet that I had to use after apply MStarApp. What I am showing here is a hack, and it shows: these case names are extra hypotheses (of a trivially true type CaseName), and this tactic gets rid of them.

Even better than Isabelle The above even works when there are multiple inductions/inversions involved. To give an example, let's introduce another inductive predicate, and begin proving a lemma that one can do by induction on one and inversion on the other predicate.
Inductive Palindrome {T} : list T -> Prop :=
  | EmptyPalin:
    case emptyP,
    Palindrome []
  | SingletonPalin:
    case singletonP,
    forall x,
    Palindrome [x]
  | GrowPalin:
    case growP,
    forall l x,
    Palindrome l -> Palindrome ([x] ++ l ++ [x])
.
Lemma palindrome_star_of_two:
  forall T (s : list T) x y,
  Palindrome s -> s =~ Star (App (Char x) (Char y)) ->
  s = [] \/ x = y.
Proof.
  intros T s x y HPalin HRe.
  induction HPalin; inversion HRe; name_cases.
At this point we have four open goals. Can you tell why there are four, and which ones they are? Luckily, Show Existentials will tell us their names (and Coq 8.13 will just show them in the proof state window). These names not only help us focus the goal in a reliable way; they even assist us in understanding what goal we have to deal with. For example, the last goal arose from the GrowP rule and the StarApp rule. Neat!
  [emptyP_star0]: {
    left. reflexivity.
  }
  [emptyP_starApp]: {
    left. reflexivity.
  }
  [singletonP_starApp]: {
    inversion HRe; clear HRe.
    inversion H5; clear H5.
    inversion H10; clear H10.
    inversion H11; clear H11.
    subst.
    inversion H3.
  }
  [growP_starApp]: {
    admit. (* I did not complete the proof *)
  }

Under the hood (the hack) The above actually works, but it works with a hack, and I wish that Coq's induction tactic would just do the right thing here. I hope eventually this will be the case, and this section will just be a curiosity; but until then, it may actually be helpful.
  1. The trick [1] is to leave markers in the proof state. We use a dedicated type CaseName, and a hypothesis name : CaseName indicates that this goal arose from that case.
    Inductive CaseName := CaseNameI.
    With this, we can write StarApp: forall (starApp : CaseName), to name this case.
  2. For a little bit of polish, we can introduce notation for that:
     Notation "'case' x , t" := (forall {x : CaseName}, t) (at level 200).
    This allows us to use the syntax you saw above.
  3. To easily discharge these CaseName assumptions when using the introduction forms, I wrote these tactics:
    Ltac clear_names := try exact CaseNameI.
     Ltac named_constructor  := constructor; [ exact CaseNameI | idtac .. ].
     Ltac named_econstructor := econstructor; [ exact CaseNameI | idtac .. ].
  4. Finally I wrote the tactic name_cases. What it does is relatively simple:
    For each subgoal: Take all hypotheses of type CaseName, concatenate their names, change the name of the current goal to that, and clear these hypotheses
     Unfortunately, my grasp of Coq tactic programming is rather rudimentary, and the only way I managed to solve this is a horrible monster of Ltac2, including hand-rolled imperative implementations of string concatenation and a little nested Ltac1 tactic within. You have been warned:
    From Ltac2 Require Import Ltac2.
    From Ltac2 Require Option.
    Set Default Proof Mode "Classic".
    Ltac name_cases := ltac2:(
      (* Horribly manual string manipulations. Does this mean I should
         go to the Ocaml level?
      *)
      let strcpy s1 s2 o :=
        let rec go := fun n =>
          match Int.lt n (String.length s2) with
          | true => String.set s1 (Int.add o n) (String.get s2 n); go (Int.add n 1)
          | false => ()
          end
        in go 0
      in
      let concat := fun s1 s2 =>
        let l := Int.add (Int.add (String.length s1) (String.length s2)) 1 in
        let s := String.make l (Char.of_int 95) in
        strcpy s s1 0;
        strcpy s s2 (Int.add (String.length s1) 1);
        s
      in
      Control.enter (let rec go () :=
        lazy_match! goal with
        | [ h1 : CaseName, h2 : CaseName |- _ ] =>
          (* Multiple case names? Combine them (and recurse) *)
          let h := concat (Ident.to_string h1) (Ident.to_string h2) in
          Std.clear [h1; h2];
          let h := Option.get (Ident.of_string h) in
          assert ($h := CaseNameI);
          go ()
        | [ _ : CaseName |- _ ] =>
          (* A single case name? Set current goal name accordingly. *)
          ltac1:(
            (* How to do this in ltac2? *)
            lazymatch goal with
            | [ H : CaseName |- _ ] => refine ?[H]; clear H
            end
          )
        | [ |- _ ] =>
          Control.backtrack_tactic_failure "Did not find any CaseName hypotheses"
        end
      in go)
    ).
     If someone feels like re-implementing this in a way that is more presentable, I'll be happy to update this code (and learn something along the way).
That's all it takes!

Conclusion With some mild hackery, we can get named cases in Coq, which I find indispensable in managing larger, evolving proofs. I wonder why this style is not more common, and available out of the box, but maybe it is just not known well enough? Maybe this blog post can help a bit here.

  1. I have vague recollections of seeing at least parts of this trick on some blog some years ago, and I do not claim originality here.

9 November 2020

Joachim Breitner: Distributing Haskell programs in a multi-platform zip file

My maybe most impactful piece of code is tttool and the surrounding project, which allows you to create your own content for the Ravensburger Tiptoi platform. The program itself is a command line tool, and in this blog post I want to show how I go about building that program for Linux (both normal and static builds), Windows (cross-compiled from Linux) and OSX (only on CI), all combined into and released as a single zip file. Maybe some of it is useful or inspiring to my readers, or can even serve as a template. This being a blog post, though, note that it may become obsolete or outdated.

Ingredients I am building on these components:
  • The nix build system and package manager
Without nix I probably wouldn't even attempt to pull off complex tasks that may, say, require a patched ghc. For many years I resisted learning about nix, but when I eventually had to, I didn't want to go back.
  • haskell.nix
This project provides an alternative Haskell build infrastructure for nix. While this is not crucial for tttool, it helps that they tend to have some more cross-compilation-related patches than the official nixpkgs. I also like that it more closely follows the cabal build work-flow, where cabal calculates a build plan based on your project's dependencies. It even has decent documentation (which is a new thing compared to two years ago).
  • niv
Niv is a neat little tool to keep track of your dependencies. You can quickly update them with, say, niv update nixpkgs. But what's really great is to temporarily replace one of your dependencies with a local checkout, e.g. via NIV_OVERRIDE_haskellNix=$HOME/build/haskell/haskell.nix nix-instantiate -A osx-exe-bundle. There is a Github action that will keep your niv-managed dependencies up-to-date.
  • A nix cache service
This service (proprietary, but free for public stuff up to 10GB) gives your project its own nix cache. This means that build artifacts can be cached between CI builds or even build steps, and shared with your contributors. A cache like this is a must if you want to use nix in more interesting ways where you may end up using, say, a changed GHC compiler. Comes with GitHub actions integration.
  • CI via Github actions
Until recently, I was using Travis, but Github actions are just a tad easier to set up and, maybe more importantly here, the job time limits are high enough that you can rebuild GHC if you have to. And even if your build gets canceled or times out, cleanup CI steps still happen, so that any new nix build products will still reach your nix cache.

The repository setup All files discussed in the following are reflected at https://github.com/entropia/tip-toi-reveng/tree/7020cde7da103a5c33f1918f3bf59835cbc25b0c. We are starting with a fairly normal Haskell project, with a single .cabal file (but multi-package projects should work just fine). To make things more interesting, I also have a cabal.project which configures one dependency to be fetched via git from a specific fork. To start building the nix infrastructure, we can initialize niv and configure it to use the haskell.nix repo:
niv init
niv add input-output-hk/haskell.nix -n haskellNix
This creates nix/sources.json (which you can also edit by hand) and nix/sources.nix (which you can treat like a black box). Now we can start writing the all-important default.nix file, which defines almost everything of interest here. I will just go through it line by line, and explain what I am doing here.
{ checkMaterialization ? false }:
This defines a flag that we can later set when using nix-build, by passing --arg checkMaterialization true, and which is off by default. I'll get to that flag later.
let
  sources = import nix/sources.nix;
  haskellNix = import sources.haskellNix { };
This imports the sources as defined in nix/sources.json, and loads the pinned revision of the haskell.nix repository.
  # windows crossbuilding with ghc-8.10 needs at least 20.09.
  # A peek at https://github.com/input-output-hk/haskell.nix/blob/master/ci.nix can help
  nixpkgsSrc = haskellNix.sources.nixpkgs-2009;
  nixpkgsArgs = haskellNix.nixpkgsArgs;
  pkgs = import nixpkgsSrc nixpkgsArgs;
Now we can define pkgs, which is our version of the nixpkgs package set, extended with the haskell.nix machinery. We rely on haskell.nix to pin a suitable revision of the nixpkgs set (see how we are using their niv setup). Here we could add our own configuration, overlays, etc. to nixpkgsArgs. In fact, we do in
  pkgs-osx = import nixpkgsSrc (nixpkgsArgs // { system = "x86_64-darwin"; });
to get the nixpkgs package set of an OSX machine.
  # a nicer filterSource
  sourceByRegex =
    src: regexes: builtins.filterSource (path: type:
      let relPath = pkgs.lib.removePrefix (toString src + "/") (toString path); in
      let match = builtins.match (pkgs.lib.strings.concatStringsSep "|" regexes); in
      ( type == "directory" && match (relPath + "/") != null
        || match relPath != null)) src;
Next I define a little helper that I have been copying between projects, and which allows me to define the input to a nix derivation (i.e. a nix build job) with a set of regexes. I'll use that soon.
  tttool-exe = pkgs: sha256:
    (pkgs.haskell-nix.cabalProject {
The cabalProject function takes a cabal project and turns it into a nix project, running cabal v2-configure under the hood to let cabal figure out a suitable build plan. Since we want to have multiple variants of the tttool, this is so far just a function of two arguments pkgs and sha256, which will be explained in a bit.
      src = sourceByRegex ./. [
          "cabal.project"
          "src/"
          "src/.*/"
          "src/.*.hs"
          ".*.cabal"
          "LICENSE"
        ];
The cabalProject function wants to know the source of the Haskell project. There are different ways of specifying this; in this case I went for a simple whitelist approach. Note that cabal.project.freeze (which exists in the directory) is not included.
      # Pinning the input to the constraint solver
      compiler-nix-name = "ghc8102";
The cabal solver doesn't find out which version of ghc to use; that is still my choice. I am using GHC-8.10.2 here. It may require a bit of experimentation to see which version works for your project, especially when cross-compiling to odd targets.
      index-state = "2020-11-08T00:00:00Z";
I want the build to be deterministic, and not let cabal suddenly pick different package versions just because something got uploaded. Therefore I specify which snapshot of the Hackage package index it should consider.
      plan-sha256 = sha256;
      inherit checkMaterialization;
Here we use the second parameter, but I'll defer the explanation for a bit.
      modules = [ {
        # smaller files
        packages.tttool.dontStrip = false;
      } ] ++
These modules are essentially configuration data that is merged in a structural way. Here we say that we want the tttool binary to be stripped (saves a few megabytes).
      pkgs.lib.optional pkgs.hostPlatform.isMusl {
        packages.tttool.configureFlags = [ "--ghc-option=-static" ];
Also, when we are building on the musl platform, that's when we want to produce a static build, so let's pass -static to GHC. This seems to be enough in terms of flags to produce static binaries. It helps that my project is using mostly pure Haskell libraries; if you link against C libraries you might have to jump through additional hoops to get static linking going. The haskell.nix documentation has a section on static building with some flags to cargo-cult.
        # terminfo is disabled on musl by haskell.nix, but still the flag
        # is set in the package plan, so override this
        packages.haskeline.flags.terminfo = false;
      };
This (again only used when the platform is musl) seems to be necessary to work around what might be a bug in haskell.nix.
    }).tttool.components.exes.tttool;
The cabalProject function returns a data structure with all Haskell packages of the project, and for each package the different components (libraries, tests, benchmarks and of course executables). We only care about the tttool executable, so let's project that out.
  osx-bundler = pkgs: tttool:
   pkgs.stdenv.mkDerivation {
      name = "tttool-bundle";
      buildInputs = [ pkgs.macdylibbundler ];
      builder = pkgs.writeScript "tttool-osx-bundler.sh" ''
        source ${pkgs.stdenv}/setup
        mkdir -p $out/bin/osx
        cp ${tttool}/bin/tttool $out/bin/osx
        chmod u+w $out/bin/osx/tttool
        dylibbundler \
          -b \
          -x $out/bin/osx/tttool \
          -d $out/bin/osx \
          -p '@executable_path' \
          -i /usr/lib/system \
          -i ${pkgs.darwin.Libsystem}/lib
      '';
    };
This function, only to be used on OSX, takes a fully built tttool, finds all the system libraries it is linking against, and copies them next to the executable, using the nice macdylibbundler. This way we can get a self-contained executable. A nix expert will notice that this probably should be written with pkgs.runCommandNoCC, but then dylibbundler fails because it lacks otool. This should work eventually, though.
in rec {
  linux-exe      = tttool-exe pkgs
     "0rnn4q0gx670nzb5zp7xpj7kmgqjmxcj2zjl9jqqz8czzlbgzmkh";
  windows-exe    = tttool-exe pkgs.pkgsCross.mingwW64
     "01js5rp6y29m7aif6bsb0qplkh2az0l15nkrrb6m3rz7jrrbcckh";
  static-exe     = tttool-exe pkgs.pkgsCross.musl64
     "0gbkyg8max4mhzzsm9yihsp8n73zw86m3pwvlw8170c75p3vbadv";
  osx-exe        = tttool-exe pkgs-osx
     "0rnn4q0gx670nzb5zp7xpj7kmgqjmxcj2zjl9jqqz8czzlbgzmkh";
Time to create the four versions of tttool. In each case we use the tttool-exe function from above, passing the package set (pkgs, …) and a SHA256 hash. The package set is either the normal one, or it is one of those configured for cross compilation, building either for Windows or for Linux using musl, or it is the OSX package set that we instantiated earlier. The SHA256 hash describes the result of the cabal plan calculation that happens as part of cabalProject. By noting down the expected result, nix can skip that calculation, or fetch it from the nix cache etc. How do we know what number to put there, and when to change it? That's when the --arg checkMaterialization true flag comes into play: When that is set, cabalProject will not blindly trust these hashes, but rather re-calculate them, and tell you when they need to be updated. We'll make sure that CI checks them.
  osx-exe-bundle = osx-bundler pkgs-osx osx-exe;
For OSX, I then run the output through osx-bundler defined above, to make it independent of any library paths in /nix/store. This is already good enough to build the tool for the various systems! The rest of the file is related to packaging up the binaries, to tests, and various other things, but nothing too essential. So if you got bored, you can more or less stop now.
  static-files = sourceByRegex ./. [
    "README.md"
    "Changelog.md"
    "oid-decoder.html"
    "example/.*"
    "Debug.yaml"
    "templates/"
    "templates/.*\.md"
    "templates/.*\.yaml"
    "Audio/"
    "Audio/digits/"
    "Audio/digits/.*\.ogg"
  ];
  contrib = ./contrib;
The final zip file that I want to serve to my users contains a bunch of files from throughout my repository; I collect them here.
  book = …;
The project comes with documentation in the form of a Sphinx project, which we build here. I'll omit the details, because they are not relevant for this post (but of course you can peek if you are curious).
  os-switch = pkgs.writeScript "tttool-os-switch.sh" ''
    #!/usr/bin/env bash
    case "$OSTYPE" in
      linux*)   exec "$(dirname "''${BASH_SOURCE[0]}")/linux/tttool" "$@" ;;
      darwin*)  exec "$(dirname "''${BASH_SOURCE[0]}")/osx/tttool" "$@" ;;
      msys*)    exec "$(dirname "''${BASH_SOURCE[0]}")/tttool.exe" "$@" ;;
      cygwin*)  exec "$(dirname "''${BASH_SOURCE[0]}")/tttool.exe" "$@" ;;
      *)        echo "unsupported operating system $OSTYPE" ;;
    esac
  '';
The zipfile should provide a tttool command that works on all systems. To that end, I implement a simple platform switch using bash. I use pkgs.writeScript so that I can include that file directly in default.nix, but it would have been equally reasonable to just save it into nix/tttool-os-switch.sh and include it from there.
  release = pkgs.runCommandNoCC "tttool-release" {
    buildInputs = [ pkgs.perl ];
  } ''
    # check version
    version=$(${static-exe}/bin/tttool --help | perl -ne 'print $1 if /tttool-(.*) -- The swiss army knife/')
    doc_version=$(perl -ne "print \$1 if /VERSION: '(.*)'/" ${book}/book.html/_static/documentation_options.js)
    if [ "$version" != "$doc_version" ]
    then
      echo "Mismatch between tttool version \"$version\" and book version \"$doc_version\""
      exit 1
    fi
Now the derivation that builds the content of the release zip file. First I double check that the version number in the code and in the documentation matches. Note how ${static-exe} refers to a path with the built static Linux build, and ${book} to the output of the book building process.
    mkdir -p $out/
    cp -vsr ${static-files}/* $out
    mkdir $out/linux
    cp -vs ${static-exe}/bin/tttool $out/linux
    cp -vs ${windows-exe}/bin/* $out/
    mkdir $out/osx
    cp -vsr ${osx-exe-bundle}/bin/osx/* $out/osx
    cp -vs ${os-switch} $out/tttool
    mkdir $out/contrib
    cp -vsr ${contrib}/* $out/contrib/
    cp -vsr ${book}/* $out
  '';
The rest of the release script just copies files from various build outputs that we have defined so far. Note that this is using both static-exe (built on Linux) and osx-exe-bundle (built on Mac)! This means you can only build the release if you either have set up a remote osx builder (a pretty nifty feature of nix, which I unfortunately can't use, since I don't have access to a Mac), or the build product must be available in a nix cache (which it is in my case, as I will explain later). The output of this derivation is a directory with all the files I want to put in the release.
  release-zip = pkgs.runCommandNoCC "tttool-release.zip" {
    buildInputs = with pkgs; [ perl zip ];
  } ''
    version=$(bash ${release}/tttool --help | perl -ne 'print $1 if /tttool-(.*) -- The swiss army knife/')
    base="tttool-$version"
    echo "Zipping tttool version $version"
    mkdir -p $out/$base
    cd $out
    cp -r ${release}/* $base/
    chmod u+w -R $base
    zip -r $base.zip $base
    rm -rf $base
  '';
And now these files are zipped up. Note that this automatically determines the right directory name and basename for the zipfile. This concludes the steps necessary for a release.
  gme-downloads = …;
  tests = …;
These two definitions in default.nix are related to some simple testing, and again not relevant for this post.
  cabal-freeze = pkgs.stdenv.mkDerivation {
    name = "cabal-freeze";
    src = linux-exe.src;
    buildInputs = [ pkgs.cabal-install linux-exe.env ];
    buildPhase = ''
      mkdir .cabal
      touch .cabal/config
      rm cabal.project # so that cabal new-freeze does not try to use HPDF via git
      HOME=$PWD cabal new-freeze --offline --enable-tests || true
    '';
    installPhase = ''
      mkdir -p $out
      echo "-- Run nix-shell -A check-cabal-freeze to update this file" > $out/cabal.project.freeze
      cat cabal.project.freeze >> $out/cabal.project.freeze
    '';
  };
Above I mentioned that I still would like to be able to just run cabal, and ideally it should take the same library versions that the nix-based build does. But pinning the version of ghc in cabal.project is not sufficient, I also need to pin the precise versions of the dependencies. This is best done with a cabal.project.freeze file. The above derivation runs cabal new-freeze in the environment set up by haskell.nix and grabs the resulting cabal.project.freeze. With this I can run nix-build -A cabal-freeze and fetch the file from result/cabal.project.freeze and add it to the repository.
  check-cabal-freeze = pkgs.runCommandNoCC "check-cabal-freeze" {
      nativeBuildInputs = [ pkgs.diffutils ];
      expected = cabal-freeze + /cabal.project.freeze;
      actual = ./cabal.project.freeze;
      cmd = "nix-shell -A check-cabal-freeze";
      shellHook = ''
        dest=${toString ./cabal.project.freeze}
        rm -f $dest
        cp -v $expected $dest
        chmod u-w $dest
        exit 0
      '';
    } ''
    diff -r -U 3 $actual $expected ||
        { echo "To update, please run"; echo "nix-shell -A check-cabal-freeze"; exit 1; }
    touch $out
  '';
But generated files in repositories are bad, so if that cannot be avoided, at least I want a CI job that checks if they are up to date. This job does that. What's more, it is set up so that if I run nix-shell -A check-cabal-freeze it will update the file in the repository automatically, which is much more convenient than manually copying. Lately, I have been using this pattern regularly when adding generated files to a repository:
  • Create one nix derivation that creates the files
  • Create a second derivation that compares the output of that derivation against the file in the repo
  • Create a derivation that, when run in nix-shell, updates that file. Sometimes that derivation is its own file (so that I can just run nix-shell nix/generate.nix), or it is merged into one of the other two.
This concludes the tour of default.nix.

The CI setup The next interesting bit is the file .github/workflows/build.yml, which tells Github Actions what to do:
name: "Build and package"
on:
  pull_request:
  push:
Standard prelude: Run the jobs in this file upon all pushes to the repository, and also on all pull requests. Annoying downside: If you open a PR within your repository, everything gets built twice. Oh well.
jobs:
  build:
    strategy:
      fail-fast: false
      matrix:
        include:
        - target: linux-exe
          os: ubuntu-latest
        - target: windows-exe
          os: ubuntu-latest
        - target: static-exe
          os: ubuntu-latest
        - target: osx-exe-bundle
          os: macos-latest
    runs-on: ${{ matrix.os }}
The build job is a matrix job, i.e. there are four variants, one for each of the different tttool builds, together with an indication of what kind of machine to run this on.
    steps:
    - uses: actions/checkout@v2
    - uses: cachix/install-nix-action@v12
We begin by checking out the code and installing nix via the install-nix-action.
    - name: "Cachix: tttool"
      uses: cachix/cachix-action@v7
      with:
        name: tttool
        signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
Then we configure our Cachix cache. This means that this job will use build products from the cache if possible, and it will also push new builds to the cache. This requires a secret key, which you get when setting up your Cachix cache. See the nix and Cachix tutorial for good instructions.
    - run: nix-build --arg checkMaterialization true -A ${{ matrix.target }}
Now we can actually run the build. We set checkMaterialization to true so that CI will tell us if we need to update these hashes.
    # work around https://github.com/actions/upload-artifact/issues/92
    - run: cp -RvL result upload
    - uses: actions/upload-artifact@v2
      with:
        name: tttool (${{ matrix.target }})
        path: upload/
For convenient access to build products, e.g. from pull requests, we store them as Github artifacts. They can then be downloaded from Github's CI status page.
  test:
    runs-on: ubuntu-latest
    needs: build
    steps:
    - uses: actions/checkout@v2
    - uses: cachix/install-nix-action@v12
    - name: "Cachix: tttool"
      uses: cachix/cachix-action@v7
      with:
        name: tttool
        signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
    - run: nix-build -A tests
The next job repeats the setup, but now runs the tests. Because of needs: build it will not start before the build job has completed. This also means that it should get the actual tttool executable to test from our nix cache.
  check-cabal-freeze:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: cachix/install-nix-action@v12
    - name: "Cachix: tttool"
      uses: cachix/cachix-action@v7
      with:
        name: tttool
        signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
    - run: nix-build -A check-cabal-freeze
The same, but now running the check-cabal-freeze test mentioned above. Quite annoying to repeat the setup instructions for each job…
  package:
    runs-on: ubuntu-latest
    needs: build
    steps:
    - uses: actions/checkout@v2
    - uses: cachix/install-nix-action@v12
    - name: "Cachix: tttool"
      uses: cachix/cachix-action@v7
      with:
        name: tttool
        signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
    - run: nix-build -A release-zip
    - run: unzip -d upload ./result/*.zip
    - uses: actions/upload-artifact@v2
      with:
        name: Release zip file
        path: upload
Finally, with the same setup, but slightly different artifact upload, we build the release zip file. Again, we wait for build to finish so that the built programs are in the nix cache. This is especially important since this runs on linux, so it cannot build the OSX binary and has to rely on the cache. Note that we don't need to checkMaterialization again. Annoyingly, the upload-artifact action insists on zipping the files you hand to it. A zip file that contains just a zipfile is kinda annoying, so I unpack the zipfile here before uploading the contents.

Conclusion With this setup, when I do a release of tttool, I just bump the version numbers, wait for CI to finish building, run nix-build -A release-zip and upload result/tttool-n.m.zip. A single file that works on all target platforms. I have not yet automated making the actual release, but with one release per year this is fine. Also, when trying out a new feature, I can easily create a branch or PR for that and grab the build products from Github's CI, or ask people to try them out (e.g. to see if they fixed their bugs). Note, though, that you have to sign into Github before being able to download these artifacts. One might think that this is a fairly hairy setup, finding the right combinations of various repositories so that cross-compilation works as intended. But thanks to nix's value propositions, this does work! The setup presented here was a remake of a setup I did two years ago, with a much less mature haskell.nix. Back then, I committed a fair number of generated files to git, and juggled more complex files, but once it worked, it kept working for two years. I was indeed insulated from upstream changes. I expect that this setup will also continue to work reliably, until I choose to upgrade it again. Hopefully, then things are even simpler, and require fewer workarounds or manual intervention.

27 September 2020

Joachim Breitner: Learn Haskell on CodeWorld writing Sokoban

Two years ago, I held the CIS194 minicourse on Haskell at the University of Pennsylvania. In that installment of the course, I changed the first four weeks to teach the basics of Haskell using the online Haskell environment CodeWorld, and led the students towards implementing the game Sokoban. As it is customary for CIS194, I put my lecture notes and exercises online, and this has been used as a learning resource by people from all over the world. But since I have left the University of Pennsylvania, I lost the ability to update the text, and as the CodeWorld API has evolved, some of the examples and exercises no longer work. Some recent complaints about that, in bug reports against CodeWorld and in unrealistically flattering tweets ("Shame, this was the best Haskell course ever!!!") motivated me to extract that material and turn it into an updated stand-alone tutorial that I can host myself. So if you feel like learning Haskell without worrying about local installation, and while creating a reasonably fun game, head over to https://haskell-via-sokoban.nomeata.de/ and get started! Improvements can now also be contributed at https://github.com/nomeata/haskell-via-sokoban. Credits go to Brent Yorgey, Richard Eisenberg and Noam Zilberstein, who held the previous installments of the course, and Chris Smith for creating the CodeWorld environment.

1 July 2020

Joachim Breitner: Template Haskell recompilation

I was wondering: What happens if I have a Haskell module with Template Haskell that embeds some information from the environment (time, environment variables)? Will such a module be reliably recompiled? And what if it gets recompiled, but the source code produced by Template Haskell is actually unchanged (e.g., because the environment variable has not changed), will all depending modules be recompiled (which would be bad)? Here is a quick experiment, using GHC-8.8:
/tmp/th-recom-test $ cat Foo.hs
{-# LANGUAGE TemplateHaskell #-}
{-# OPTIONS_GHC -fforce-recomp #-}
module Foo where
import Language.Haskell.TH
import Language.Haskell.TH.Syntax
import System.Process
theMinute :: String
theMinute = $(runIO (readProcess "date" ["+%M"] "") >>= stringE)
[jojo@kirk:2] Mi, der 01.07.2020 um 17:18 Uhr  
/tmp/th-recom-test $ cat Main.hs
import Foo
main = putStrLn theMinute
Note that I had to set {-# OPTIONS_GHC -fforce-recomp #-}: by default, GHC will not recompile a module, even if it uses Template Haskell and runIO. If you are reading from a file you can use addDependentFile to tell the compiler about that dependency, but that does not help with reading from the environment. So here is the test, and we get the desired behaviour: The Foo module is recompiled every time, but unless the minute has changed (see my prompt), Main is not recompiled:
/tmp/th-recom-test $ ghc --make -O2 Main.hs -o test
[1 of 2] Compiling Foo              ( Foo.hs, Foo.o )
[2 of 2] Compiling Main             ( Main.hs, Main.o )
Linking test ...
[jojo@kirk:2] Mi, der 01.07.2020 um 17:20 Uhr  
/tmp/th-recom-test $ ghc --make -O2 Main.hs -o test
[1 of 2] Compiling Foo              ( Foo.hs, Foo.o )
Linking test ...
[jojo@kirk:2] Mi, der 01.07.2020 um 17:20 Uhr  
/tmp/th-recom-test $ ghc --make -O2 Main.hs -o test
[1 of 2] Compiling Foo              ( Foo.hs, Foo.o )
[2 of 2] Compiling Main             ( Main.hs, Main.o ) [Foo changed]
Linking test ...
So all well! Update: It seems that while this works with ghc --make, the -fforce-recomp does not cause cabal build to rebuild the module. That's unfortunate.
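As an aside, here is roughly what the addDependentFile route mentioned above looks like when the Template Haskell code reads from a file. This is only a minimal sketch; the module and the config.txt file are made up for illustration:

{-# LANGUAGE TemplateHaskell #-}
module Config where

import Language.Haskell.TH
import Language.Haskell.TH.Syntax

-- Hypothetical example: embed the contents of config.txt at compile time.
-- addDependentFile makes GHC recompile this module whenever that file changes.
configContents :: String
configContents = $(do
  addDependentFile "config.txt"
  str <- runIO (readFile "config.txt")
  stringE str)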

7 June 2020

Joachim Breitner: Managed by an eleven year old

This weekend I had some pretty good uncle time. My eleven year old nephew (after beating me in a fair match of tennis) wanted to play with his Lego Mindstorms set. He never even touches it unless I am around to help him, but then he quite enjoys it. As expected, when I ask him what we should try to build, he tends to come up with completely unrealistic or impossible ideas, and we have to somehow reduce the scope to something doable. This time round, inspired by their autonomous vacuum cleaner, he wanted to build something like that. That was convenient, because now I could sell him what I wanted to try to do, namely make the robot follow a wall at more or less constant distance, as a useful first step. We mounted the infra red distance sensor at a 45° angle to the front-left, and then I built a very simple control loop: measure the distance, subtract 40, apply a factor, feed that into the steering input of the drive action, and repeat. I must admit that I was pretty proud to see just how well that very simple circuit worked: The little robot turned sharp corners in both directions, and otherwise drove nicely parallel and with constant distance to the wall. I was satisfied with that achievement and would have happily ended it here before we might be disappointed by more ambitious and then failing goals. But my nephew had not forgotten about the vacuum cleaner, and casually asked if I could make the robot draw an outline of its path, and thus of the room, on the display. Ok, where do I begin to explain just how unfeasible that is… yes, it seems one can draw to the display. But how should the robot know where it is? How far it turns? How far it went? This is programming by connecting boxes, and I would not expect such an interface to allow for complex logic (right?). And I would need trigonometry and stuff. But he didn't quite believe it, and thus got me thinking… indeed, the arithmetic block can evaluate more complex formulas, involving sin and cos, so maybe if I can somehow keep track of where the robot is heading… well, let's give it a try. So by dragging boxes and connecting wires, I implemented a simple logic (three variables, x and y for current position, alpha for current heading; in each loop add the steering input onto alpha, and add (sin(alpha), cos(alpha)) onto the current position; throw in some linear factors to calibrate; draw pixel at (x, y)). And, to my nephew's joy and my astonishment, the robot was drawing a curvy line that clearly correlates with the taken path! It wasn't respecting the angles perfectly (a square might not properly close), but half an hour earlier I would have actively bet against us pulling this off!
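For the curious, the dead-reckoning update described above, translated from Mindstorms blocks into a few lines of Haskell, would look roughly like this; a sketch only, with invented names and calibration constants:

-- One iteration of the (hypothetical) dead-reckoning loop: the steering
-- input nudges the heading, and the position advances along that heading.
step :: Double -> (Double, Double, Double) -> (Double, Double, Double)
step steering (x, y, alpha) = (x + k * sin alpha', y + k * cos alpha', alpha')
  where
    alpha' = alpha + c * steering
    k = 2.0   -- invented calibration factor: distance per loop iteration
    c = 0.05  -- invented calibration factor: how strongly steering turns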
After a few turns

I was, I must admit, a bit proud. But not only about the technical nerding, but also about my uncleing: That night, according to his parents, my nephew said that he can't remember ever being so happy!

6 April 2020

Joachim Breitner: A Telegram bot in Haskell on Amazon Lambda

I just had a weekend full of very successful serious geekery. On a whim I thought: Wouldn't it be nice if people could interact with my game Kaleidogen also via a telegram bot? This led me to learn about how I write a Telegram bot in Haskell and how I can deploy such a Haskell program to Amazon Lambda. In particular the latter bit might be interesting to some of my readers, so here is how I went about it.

Kaleidogen Kaleidogen is a little contemplative game (or toy) where, starting from just unicolored disks, you combine abstract circular patterns to breed more interesting patterns. See my FARM 2019 talk for more details, or check out the source repository. BTW, I am looking for help turning it into an Android app!
KaleidogenBot in action

Amazon Lambda Amazon Lambda is the Function-as-a-Service offering of Amazon Web Services. The idea is that you don't rent a server, where you have to deal with managing the whole system and that you are paying for constantly, but you just upload the code that responds to outside requests, and AWS takes care of the rest: Starting and stopping instances, providing a secure base system etc. When nobody is using the service, no cost occurs. This sounds ideal for hosting a toy Telegram bot: Most of the time nobody will be using it, and I really don't want to have to babysit yet another service on my server. On Amazon Lambda, I can probably just forget about it. But Haskell is not one of the officially supported languages on Amazon Lambda. So to run Haskell on Lambda, one has to solve two problems:
  • how to invoke the Haskell code on the server, and
  • how to build Haskell so that it runs on the Amazon Linux distribution

A Haskell runtime for Lambda For the first we need a custom runtime. While this sounds complicated, it is actually a pretty simple concept: A runtime is an executable called bootstrap that queries the Lambda Runtime Interface for the next request to handle. The Lambda documentation is phrased as if this runtime has to be a dispatcher that calls the separate function's handler. But it could just do all the things directly. I found the Haskell package aws-lambda-haskell-runtime which provides precisely that: A function
runLambda :: (LambdaOptions -> IO (Either String LambdaResult)) -> IO ()
that talks to the Lambda Runtime API and invokes its argument on each message. The package also provides Template Haskell magic to collect handlers of any JSON-able type and generate a dispatcher, like you might expect from other, more dynamic languages. But that was too much magic for me, so I ignored that and just wrote the handler manually:
main :: IO ()
main = runLambda run
  where
   run ::  LambdaOptions -> IO (Either String LambdaResult)
   run opts = do
    result <- handler (decodeObj (eventObject opts)) (decodeObj (contextObject opts))
    either (pure . Left . encodeObj) (pure . Right . LambdaResult . encodeObj) result
data Event = Event
  { path :: T.Text
  , body :: Maybe T.Text
  } deriving (Generic, FromJSON)
data Response = Response
  { statusCode :: Int
  , headers :: Value
  , body :: T.Text
  , isBase64Encoded :: Bool
  } deriving (Generic, ToJSON)
handler :: Event -> Context -> IO (Either String Response)
handler Event{body, path} context =
  …
I expose my Lambda function to the world via Amazon's API Gateway, configured to just proxy the HTTP requests. This means that my code receives a JSON data structure describing the HTTP request (here called Event, listing only the fields I care about), and it will respond with a Response, again as JSON. The handler can then simply pattern-match on the path to decide what to do. For example this code handles URLs like /img/CAFFEEFACE.png, and responds with an image.
handler :: TC -> Event -> Context -> IO (Either String Response)
handler tc Event{body, path} context
    | Just bytes <- isImgPath path >>= T.decodeHex = do
        let pngData = genPurePNG bytes
        pure $ Right Response
            { statusCode = 200
            , headers = object [ "Content-Type" .= ("image/png" :: String) ]
            , isBase64Encoded = True
            , body = T.decodeUtf8 $ LBS.toStrict $ Base64.encode pngData
            }
    …
isImgPath :: T.Text -> Maybe T.Text
isImgPath  = T.stripPrefix "/img/" >=> T.stripSuffix ".png"
If this program were to grow more, then one should probably use something more structured for routing here; maybe servant, or bridging towards wai apps (almost like wai-lambda, but that still assumes an existing runtime, instead of simply being the runtime). But for my purposes, no extra layers of indirection or abstraction are needed!
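For readers who do want that extra structure, a servant API type covering the two routes above might look roughly like this. It is only a sketch: the concrete content types, and using a plain aeson Value for the Telegram payload, are assumptions for illustration, not what the bot actually uses:

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeOperators #-}
module Routes where

import Data.Aeson (Value)
import Data.ByteString.Lazy (ByteString)
import Data.Text (Text)
import Servant.API

-- Sketch of the two routes from the handler above, as a servant API type.
type API =
       "img" :> Capture "file" Text :> Get '[OctetStream] ByteString
  :<|> "telegram" :> ReqBody '[JSON] Value :> Post '[PlainText] Text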

Deploying Haskell to Lambda Building Haskell locally and deploying to different machines is notoriously tricky; you often end up depending on a shared library that is not available on the other platform. The aws-lambda-haskell-runtime package, and similar projects like serverless-haskell, solve this using stack and Docker, two technologies that are probably great, but I never warmed up to them. So instead of adding layers and complexity, can I solve this by making things simpler? If I compile my bootstrap into a static Linux binary, it should run on any Linux, including Amazon Linux. Unfortunately, building Haskell programs statically is also notoriously tricky. But it is made much simpler by the work of Niklas Hambüchen and others in the context of the Nix package manager, coordinated in the static-haskell-nix project. The promise here is that once you have set up building your project with Nix, then getting a static version is just one flag away. The support is not completely upstreamed into nixpkgs proper yet, but their repository has a nix file that contains a nixpkgs set with their patches:
let pkgs = (import (sources.nixpkgs-static + "/survey/default.nix") { }).pkgs; in
This, plus a fairly standard nix setup to build the package, yields what I was hoping for:
$ nix-build -A kaleidogen
/nix/store/ppwyq4d964ahd6k56wsklh93vzw07ln0-kaleidogen-0.1.0.0
$ file result/bin/kaleidogen-amazon-lambda
result/bin/kaleidogen-amazon-lambda: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, stripped
$ ls -sh result/bin/kaleidogen-amazon-lambda
6,7M result/bin/kaleidogen-amazon-lambda
If we put this file, named bootstrap, into a zip file and upload it to Amazon Lambda, then it just works! Creating the zip file is easily scripted using nix:
  function-zip = pkgs.runCommandNoCC "kaleidogen-lambda" {
    buildInputs = [ pkgs.zip ];
  } ''
    mkdir -p $out
    cp ${kaleidogen}/bin/kaleidogen-amazon-lambda bootstrap
    zip $out/function.zip bootstrap
  '';
So to upload this, I use this one-liner (line-wrapped for your convenience):
nix-build -A function-zip &&
aws lambda update-function-code --function-name kaleidogen \
  --zip-file fileb://result/function.zip
Thanks to how Nix pins all dependencies, I am fairly confident that I can return to this project in 4 months and still be able to build it. Of course, I want continuous integration and deployment. So I build the project with GitHub Actions, using a cachix nix cache to significantly speed up the build, and auto-deploy to Lambda using aws-lambda-deploy; see my workflow file for details.

The Telegram part The above allows me to run basically any stateless service, and a Telegram bot is nothing else: When configured to act as a WebHook, Telegram will send a request with a message to our Lambda function, where we can react to it. The telegram-api package provides bindings for the Telegram Bot API (although I had to use the repository version, as the version on Hackage has some bitrot). Slightly simplified, I can write a handler for an Update:
handleUpdate :: Update -> TelegramClient ()
handleUpdate Update{ message = Just m } = do
  let c = ChatId (chat_id (chat m))
  liftIO $ printf "message from %s: %s\n" (maybe "?" user_first_name (from m)) (maybe "" T.unpack (text m))
  if "/start"  T.isPrefixOf  fromMaybe "" (text m)
  then do
    rm <- sendMessageM $ sendMessageRequest c "Hi! I am @KaleidogenBot.  "
    return ()
  else do
    m1 <- sendMessageM $ sendMessageRequest c "One moment "
    withPNGFile  $ \pngFN -> do
      m2 <- uploadPhotoM $ uploadPhotoRequest c
        (FileUpload (Just "image/png") (FileUploadFile pngFN))
      return ()
handleUpdate _ u =
  liftIO $ putStrLn $ "Unhandled message: " ++ show u
and call this from the handler that I wrote above:
     
      path == "/telegram" =
      case eitherDecode (LBS.fromStrict (T.encodeUtf8 (fromMaybe "" body))) of
        Left err ->  
        Right update -> do
          runTelegramClient token manager $ handleUpdate Nothing update
          pure $ Right Response
              statusCode = 200
            , headers = object [ "Content-Type" .= ("text/plain" :: String) ]
            , isBase64Encoded = False
            , body = "Done"
             
     
Note that the Lambda code receives the request as a JSON data structure with a body that contains the original HTTP request body. Which, in this case, is itself JSON, so we have to decode that. All that is left to do is to tell Telegram where this code lives:
curl --request POST \
  --url https://api.telegram.org/bot<token>/setWebhook \
  --header 'content-type: application/json' \
  --data '{"url": "https://api.kaleidogen.nomeata.de/telegram"}'
As a little add-on, I also created a Telegram game for Kaleidogen. A Telegram game is nothing but a webpage that runs inside Telegram, so it wasn't much work to wrap the Web version of Kaleidogen that way, but the resulting Telegram game (which you can access via https://core.telegram.org/bots/games) still looks pretty neat.

No /dev/dri/renderD128 I am mostly happy with this setup: My game is now available to more people in more ways. I don't have to maintain any infrastructure. When nobody is using this bot, no resources are wasted, and the costs of the service are negligible -- this is unlikely to go beyond the free tier, and even if it would, the cost per generated image is roughly USD 0.000021. There is one slight disappointment, though. What I find most interesting about Kaleidogen from a technical point of view is that when you play it in the browser, the images are not generated by my code. Instead, my code creates a WebGL shader program on the fly, and that program generates the image on your graphics card. I even managed to make the GL rendering code work headlessly, i.e. from a command line program, using EGL and libgbm and a helper written in C. But it needs access to a graphics card via /dev/dri/renderD128. Amazon does not provide that to Lambda code, and neither do the other big Function-as-a-Service providers. So I had to swallow my pride and reimplement the rendering in pure Haskell. So if you think the bot is kinda slow, then that's why. Despite properly optimizing the pure implementation (the inner loop does not do allocations and deals only with unboxed Double# values), the GL shader version is still three times as fast. Maybe in a few years GPU access will be so ubiquitous that it's even on Amazon Lambda; then I can easily use that.

1 April 2020

Joachim Breitner: 30 years of Haskell

Vitaly Bragilevsky, in a mail to the GHC Steering Committee, reminded me that the first version of the Haskell programming language was released exactly 30 years ago. On April 1st. So that raises the question: Was Haskell just an April fool's joke that was never retracted?
The cover of the 1.0 Haskell report

My own first exposure to Haskell was in April 2005; the oldest piece of Haskell I could find on my machine is this part of a university assignment from April:
> pascal 1 = [1]
> pascal (n+1) = zipWith (+) (x ++ [0]) (0 : x) where x = pascal n
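(The assignment relies on an n+k pattern, which has long since been removed from the language; a present-day equivalent of that snippet would be something like the following rewrite.)

-- Modern rewrite of the 2005 snippet above, without the n+k pattern:
pascal :: Int -> [Integer]
pascal 1 = [1]
pascal n = zipWith (+) (x ++ [0]) (0 : x)
  where x = pascal (n - 1)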
This means that I now have witnessed half of Haskell's existence. I have never regretted getting into Haskell, and every time I come back from having worked in other languages (which all have their merits too), I greatly enjoy the beauty and elegance of expressing my ideas in a lazy and strictly typed language with a concise syntax. I am looking forward to witnessing (and, to a very small degree, shaping) the next 15 years of Haskell.

31 March 2020

Joachim Breitner: Animations in Kaleidogen

A while ago I wrote a little game (or toy) called Kaleidogen. It is a relatively contemplative game where, starting from just unicolored disks, you combine abstract circular patterns to breed more interesting patterns. See my FARM 2019 talk for more details, or check out the source repository. It has mostly been quiet with this game, but I finally got around to adding a little bit of animation: When you have bred one of these patterns, you can animate its genesis, from nothing to a complex pattern, as you can see in this screencast:

Kaleidogen, animated

By the way: I am looking for collaborators who help me to get this into the Play Store properly, so let me know if you want to play around with Haskell, Android, Nix, OpenGL and cross-compilation.

12 October 2017

Joachim Breitner: Isabelle functions: Always total, sometimes undefined

Often, when I mention how things work in the interactive theorem prover Isabelle/HOL (in the following just Isabelle1) to people with a strong background in functional programming (whether that means Haskell or Coq or something else), I cause confusion, especially around the issues of what a function is, whether functions are total, and what the business with undefined is. In this blog post, I want to explain some of these issues, aimed at functional programmers or type theoreticians. Note that this is not meant to be a tutorial; I will not explain how to do these things, and will focus on what they mean.

HOL is a logic of total functions If I have an Isabelle function f :: a ⇒ b between two types a and b (the function arrow in Isabelle is ⇒, not →), then by definition of what it means to be a function in HOL, whenever I have a value x :: a, the expression f x (i.e. f applied to x) is a value of type b. Therefore, and without exception, every Isabelle function is total. In particular, it cannot be that f x does not exist for some x :: a. This is a first difference from Haskell, which does have partial functions like
spin :: Maybe Integer -> Bool
spin (Just n) = spin (Just (n+1))
Here, neither the expression spin Nothing nor the expression spin (Just 42) produce a value of type Bool: The former raises an exception ("incomplete pattern match"), the latter does not terminate. Confusingly, though, both expressions have type Bool. Because every function is total, this confusion cannot arise in Isabelle: If an expression e has type t, then it is a value of type t. This trait is shared with other total systems, including Coq. Did you notice the emphasis I put on the word "is" here, and how I deliberately did not write "evaluates to" or "returns"? This is because of another big source for confusion:

Isabelle functions do not compute We (i.e., functional programmers) stole the word "function" from mathematics and repurposed it2. But the word "function", in the context of Isabelle, refers to the mathematical concept of a function, and it helps to keep that in mind. What is the difference?
  • A function a → b in functional programming is an algorithm that, given a value of type a, calculates (returns, evaluates to) a value of type b.
  • A function a ⇒ b in math (or Isabelle) associates with each value of type a a value of type b.
For example, the following is a perfectly valid function definition in math (and HOL), but could not be a function in the programming sense:
definition foo :: "(nat ⇒ real) ⇒ real" where
  "foo seq = (if convergent seq then lim seq else 0)"
This assigns a real number to every sequence, but it does not compute it in any useful sense. From this it follows that

Isabelle functions are specified, not defined Consider this function definition:
fun plus :: "nat ⇒ nat ⇒ nat" where
   "plus 0       m = m"
   "plus (Suc n) m = Suc (plus n m)"
To a functional programmer, this reads
plus is a function that analyses its first argument. If that is 0, then it returns the second argument. Otherwise, it calls itself with the predecessor of the first argument and increases the result by one.
which is clearly a description of a computation. But to Isabelle, the above reads
plus is a binary function on natural numbers, and it satisfies the following two equations:
And in fact, it is not so much Isabelle that reads it this way, but rather the fun command, which is external to the Isabelle logic. The fun command analyses the given equations, constructs a non-recursive definition of plus under the hood, passes that to Isabelle and then proves that the given equations hold for plus. One interesting consequence of this is that different specifications can lead to the same functions. In fact, if we were to define plus' by recursing on the second argument, we'd obtain the same function (i.e. plus = plus' is a theorem, and there would be no way of telling the two apart).

Termination is a property of specifications, not functions Because a function does not evaluate, it does not make sense to ask if it terminates. The question of termination arises before the function is defined: The fun command can only construct plus in a way that the equations hold if it passes a termination check, very much like Fixpoint in Coq. But while the termination check of Fixpoint in Coq is a deep part of the basic logic, in Isabelle it is simply something that this particular command requires for its internal machinery to go through. At no point does a termination proof of the function exist as a theorem inside the logic. And other commands may have other means of defining a function that do not even require such a termination argument! For example, a function specification that is tail-recursive can be turned into a function, even without a termination proof: The following definition describes a higher-order function that iterates its first argument f on the second argument x until it finds a fixpoint. It is completely polymorphic (the single quote in 'a indicates that this is a type variable):
partial_function (tailrec)
  fixpoint :: "('a   'a)   'a   'a"
where
  "fixpoint f x = (if f x = x then x else fixpoint f (f x))"
We can work with this definition just fine. For example, if we instantiate f with (λx. x-1), we can prove that it will always return 0:
lemma "fixpoint (  n . n - 1) (n::nat) = 0"
  by (induction n) (auto simp add: fixpoint.simps)
Similarly, if we have a function that works within the option monad (i.e. Maybe in Haskell), its specification can always be turned into a function without an explicit termination proof; here is one that calculates the Collatz sequence:
partial_function (option) collatz :: "nat ⇒ nat list option"
 where "collatz n =
        (if n = 1 then Some [n]
         else if even n
           then do { ns <- collatz (n div 2);    Some (n # ns) }
           else do { ns <- collatz (3 * n + 1);  Some (n # ns) })"
Note that lists in Isabelle are finite (like in Coq, unlike in Haskell), so this function returns a list only if the Collatz sequence eventually reaches 1. I expect these definitions to make a Coq user very uneasy. How can fixpoint be a total function? What is fixpoint (λn. n+1)? What if we run collatz n for an n where the Collatz sequence does not reach 1?3 We will come back to that question after a little detour…

HOL is a logic of non-empty types Another big difference between Isabelle and Coq is that in Isabelle, every type is inhabited. Just like the totality of functions, this is a very fundamental fact about what HOL defines to be a type. Isabelle gets away with that design because in Isabelle, we do not use types for propositions (like we do in Coq), so we do not need empty types to denote false propositions. This design has an important consequence: It allows the existence of a polymorphic expression that inhabits any type, namely
undefined :: 'a
The naming of this term alone has caused a great deal of confusion for Isabelle beginners, or in communication with users of different systems, so I implore you to not read too much into the name. In fact, you will have a better time if you think of it as "arbitrary" or, even better, "unknown". Since undefined can be instantiated at any type, we can instantiate it for example at bool, and we can observe an important fact: undefined is not "an extra value besides the usual ones". It is simply some value of that type, which is demonstrated in the following lemma:
lemma "undefined = True   undefined = False" by auto
In fact, if the type has only one value (such as the unit type), then we know the value of undefined for sure:
lemma "undefined = ()" by auto
It is very handy to be able to produce an expression of any type, as we will see in the following.

Partial functions are just underspecified functions For example, it allows us to translate incomplete function specifications. Consider this definition, Isabelle's equivalent of Haskell's partial fromJust function:
fun fromSome :: "'a option ⇒ 'a" where
  "fromSome (Some x) = x"
This definition is accepted by fun (albeit with a warning), and the generated function fromSome behaves exactly as specified: when applied to Some x, it is x. The term fromSome None is also a value of type 'a, we just do not know which one it is, as the specification does not address that. So fromSome None behaves just like undefined above, i.e. we can prove
lemma "fromSome None = False   fromSome None = True" by auto
Here is a small exercise for you: Can you come up with an explanation for the following lemma:
fun constOrId :: "bool ⇒ bool" where
  "constOrId True = True"
lemma "constOrId = ( _.True)   constOrId = ( x. x)"
  by (metis (full_types) constOrId.simps)
Overall, this behavior makes sense if we remember that function definitions in Isabelle are not really definitions, but rather specifications. And a partial function definition is simply an underspecification. The resulting function is simply any function that fulfills the specification, and the two lemmas above underline that observation.

Nonterminating functions are also just underspecified Let us return to the puzzle posed by fixpoint above. Clearly, the function seen as a functional program is not total: When passed the argument (λn. n + 1) or (λb. ¬ b) it will loop forever trying to find a fixed point. But Isabelle functions are not functional programs, and the definitions are just specifications. What does the specification say about the case when f has no fixed-point? It states that the equation fixpoint f x = fixpoint f (f x) holds. And this equation has a solution, for example fixpoint f _ = undefined. Or more concretely: The specification of the fixpoint function states that fixpoint (λb. ¬ b) True = fixpoint (λb. ¬ b) False has to hold, but it does not specify which particular value (True or False) it should denote; any is fine.

Not all function specifications are ok At this point you might wonder: Can I just specify any equations for a function f and get a function out of that? But rest assured: That is not the case. For example, no Isabelle command allows you to define a function bogus :: () ⇒ nat with the equation bogus () = Suc (bogus ()), because this equation does not have a solution. We can actually prove that such a function cannot exist:
lemma no_bogus: "∄ bogus. bogus () = Suc (bogus ())" by simp
(Of course, not_bogus () = not_bogus () is just fine.)

You cannot reason about partiality in Isabelle We have seen that there are many ways to define functions that one might consider "partial". Given a function, can we prove that it is not partial in that sense? Unfortunately, but unavoidably, no: Since undefined is not a separate, recognizable value, but rather simply an unknown one, there is no way of stating that "a function result is not specified". Here is an example that demonstrates this: Two partial functions (one with not all cases specified, the other one with a self-referential specification) are indistinguishable from the total variant:
fun partial1 :: "bool ⇒ unit" where
  "partial1 True = ()"
partial_function (tailrec) partial2 :: "bool ⇒ unit" where
  "partial2 b = partial2 b"
fun total :: "bool ⇒ unit" where
  "total True = ()"
| "total False = ()"
lemma "partial1 = total   partial2 = total" by auto
If you really do want to reason about partiality of functional programs in Isabelle, you should consider implementing them not as plain HOL functions, but rather use HOLCF, where you can give equational specifications of functional programs and obtain continuous functions between domains. In that setting, ⊥ ≠ () and partial2 ≠ total. We have done that to verify some of HLint's equations.

You can still compute with Isabelle functions I hope by this point, I have not scared away anyone who wants to use Isabelle for functional programming, and in fact, you can use it for that. If the equations that you pass to fun are a reasonable definition for a function (in the programming sense), then these equations, used as rewriting rules, will allow you to compute that function quite like you would in Coq or Haskell. Moreover, Isabelle supports code extraction: You can take the equations of your Isabelle functions and have them exported into OCaml, Haskell, Scala or Standard ML. See Concon for a conference management system with confidentiality verified in Isabelle. While these usually are the equations you defined the function with, they don't have to be: You can declare other proved equations to be used for code extraction, e.g. to refine your elegant definitions to performant ones. Like with code extraction from Coq to, say, Haskell, the adequacy of the translations rests on a moral reasoning foundation. Unlike extraction from Coq, where you have an (unformalized) guarantee that the resulting Haskell code is terminating, you do not get that guarantee from Isabelle. Conversely, this allows you to reason about and extract non-terminating programs, like fixpoint, which is not possible in Coq. There is currently ongoing work about verified code generation, where the code equations are reflected into a deep embedding of HOL in Isabelle that would allow explicit termination proofs.
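To give a flavour of what code extraction produces, exporting the plus function from above to Haskell yields roughly the following; the constructor and function names are generator details and only approximated here:

-- Roughly what the Haskell export of nat and plus looks like
-- (names and surrounding boilerplate approximated):
data Nat = Zero_nat | Suc Nat

plus :: Nat -> Nat -> Nat
plus Zero_nat m = m
plus (Suc n) m = Suc (plus n m)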

Conclusion We have seen how in Isabelle, every function is total. Function declarations have equations, but these do not define the function in a computational sense, but rather specify it. Because in HOL there are no empty types, many specifications that appear partial (incomplete patterns, non-terminating recursion) have solutions in the space of total functions. Partiality in the specification is no longer visible in the final product.

PS: Axiom undefined in Coq This section is speculative, and an invitation for discussion. Coq already distinguishes between types used in programs (Set) and types used in proofs (Prop). Could Coq ensure that every t : Set is non-empty? I imagine this would require additional checks in the Inductive command, similar to the checks that the Isabelle command datatype has to perform4, and it would disallow Empty_set. If so, then it would be sound to add the following axiom
Axiom undefined : forall (a : Set), a.
wouldn't it? This axiom does not have any computational meaning, but that seems to be ok for optional Coq axioms, like classical reasoning or function extensionality. With this in place, how much of what I describe above about function definitions in Isabelle could now be done soundly in Coq? Certainly pattern matches would not have to be complete and could sport an implicit case _ ⇒ undefined. Would it help with non-obviously terminating functions? Would it allow a Coq command Tailrecursive that accepts any tailrecursive function without a termination check?

  1. Isabelle is a metalogical framework, and other logics, e.g. Isabelle/ZF, behave differently. For the purpose of this blog post, I always mean Isabelle/HOL.
  2. Isabelle is a metalogical framework, and other logics, e.g. Isabelle/ZF, behave differently. For the purpose of this blog post, I always mean Isabelle/HOL.
  3. Let me know if you find such an n. Besides n = 0.
  4. Like fun, the constructions by datatype are not part of the logic, but create a type definition from more primitive notions that is isomorphic to the specified data type.

8 October 2017

Joachim Breitner: e.g. in TeX

When I learned TeX, I was told to not write e.g. something, because TeX would think the period after the g ends a sentence, and introduce a wider, inter-sentence space. Instead, I was to write e.g.\ . Years later, I learned from a convincing, but since forgotten source, that in fact e.g.\@ is the proper thing to write. I vaguely remember that e.g.\ supposedly affected the inter-word space in some unwanted way. So I did that for many years. Until I recently was called out for doing it wrong, and that in fact e.g.\ is the proper way. This was supported by a StackExchange answer written by a LaTeX authority and backed by a reference to documentation. The same question has, however, another answer by another TeX authority, backed by an analysis of the implementation, which concludes that e.g.\@ is proper. What now? I guess I just have to find it out myself.
The problem and two solutions

The above image shows three variants: The obviously broken version with e.g., and the two contesting variants to fix it. Looks like they yield equal results! So maybe the difference lies in how \@ and \ react when the line length changes, and the word wrapping requires differences in the inter-word spacing. Will there be differences? Let's see:
Expanding whitespace, take 1

Expanding whitespace, take 2

I cannot see any difference. But the inter-sentence whitespace ate most of the expansion. Is there a difference visible if we have only inter-word spacing in the line?
Expanding whitespace, take 3

Expanding whitespace, take 4

Again, I see the same behaviour. Conclusion: It does not matter, but e.g.\ is less hassle when using lhs2tex than e.g.\@ (which has to be escaped as e.g.\@@), so the winner is e.g.\ ! (Unless you put it in a macro, then \@ might be preferable, and it is still needed between a capital letter and a sentence period.)

10 September 2017

Joachim Breitner: Less parentheses

Yesterday, at the Haskell Implementers Workshop 2017 in Oxford, I gave a lightning talk titled "syntactic musings", where I presented three possibly useful syntactic features that one might want to add to a language like Haskell. The talk caused quite some heated discussions, and since the Internet likes heated discussion, I will happily share these ideas with you…

Context aka. Sections This is probably the most relevant of the three proposals. Consider a bunch of related functions, say analyseExpr and analyseAlt, like these:
analyseExpr :: Expr -> Expr
analyseExpr (Var v) = change v
analyseExpr (App e1 e2) =
  App (analyseExpr e1) (analyseExpr e2)
analyseExpr (Lam v e) = Lam v (analyseExpr e)
analyseExpr (Case scrut alts) =
  Case (analyseExpr scrut) (analyseAlt <$> alts)
analyseAlt :: Alt -> Alt
analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)
You have written them, but now you notice that you need to make them configurable, e.g. to do different things in the Var case. You thus add a parameter to all these functions, and hence an argument to every call:
type Flag = Bool
analyseExpr :: Flag -> Expr -> Expr
analyseExpr flag (Var v) = if flag then change1 v else change2 v
analyseExpr flag (App e1 e2) =
  App (analyseExpr flag e1) (analyseExpr flag e2)
analyseExpr flag (Lam v e) = Lam v (analyseExpr (not flag) e)
analyseExpr flag (Case scrut alts) =
  Case (analyseExpr flag scrut) (analyseAlt flag <$> alts)
analyseAlt :: Flag -> Alt -> Alt
analyseAlt flag (dc, pats, e) = (dc, pats, analyseExpr flag e)
I find this code problematic. The intention was: flag is a parameter that an external caller can use to change the behaviour of this code, but when reading and reasoning about this code, flag should be considered constant. But this intention is neither easily visible nor enforced. And in fact, in the above code, flag does "change", as analyseExpr passes something else in the Lam case. The idiom is indistinguishable from the environment idiom, where a locally changing environment (such as "variables in scope") is passed around. So we are facing exactly the same problem as when reasoning about a loop in an imperative program with mutable variables. And we (pure functional programmers) should know better: We cherish immutability! We want to bind our variables once and have them scope over everything we need to scope over! The solution I'd like to see in Haskell is common in other languages (Gallina, Idris, Agda, Isar), and this is what it would look like here:
type Flag = Bool
section (flag :: Flag) where
  analyseExpr :: Expr -> Expr
  analyseExpr (Var v) = if flag then change1 v else change2 v
  analyseExpr (App e1 e2) =
    App (analyseExpr e1) (analyseExpr e2)
  analyseExpr (Lam v e) = Lam v (analyseExpr e)
  analyseExpr (Case scrut alts) =
    Case (analyseExpr scrut) (analyseAlt <$> alts)
  analyseAlt :: Alt -> Alt
  analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)
Now the intention is clear: Within a clearly marked block, flag is fixed and when reasoning about this code I do not have to worry that it might change. Either all variables will be passed to change1, or all to change2. An important distinction! Therefore, inside the section, the type of analyseExpr does not mention Flag, whereas outside its type is Flag -> Expr -> Expr. This is a bit unusual, but not completely: You see precisely the same effect in a class declaration, where the type signatures of the methods do not mention the class constraint, but outside the declaration they do. Note that idioms like implicit parameters or the Reader monad do not give the guarantee that the parameter is (locally) constant. More details can be found in the GHC proposal that I prepared, and I invite you to raise concerns or voice support there. Curiously, this problem must have bothered me for longer than I remember: I discovered that seven years ago, I wrote a Template Haskell based implementation of this idea in the seal-module package!
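For comparison (and this is not part of the proposal), probably the closest one can get today is to bind flag once over a shared where block, reusing the Flag, Expr, Alt and change1/change2 from above; the recursive calls then simply cannot pass a different flag:
analyses :: Flag -> (Expr -> Expr, Alt -> Alt)
analyses flag = (analyseExpr, analyseAlt)
  where
    -- flag is bound exactly once, so no recursive call can change it
    analyseExpr (Var v)           = if flag then change1 v else change2 v
    analyseExpr (App e1 e2)       = App (analyseExpr e1) (analyseExpr e2)
    analyseExpr (Lam v e)         = Lam v (analyseExpr e)
    analyseExpr (Case scrut alts) = Case (analyseExpr scrut) (analyseAlt <$> alts)
    analyseAlt (dc, pats, e)      = (dc, pats, analyseExpr e)
The price is that the individual functions lose their top-level names and signatures and have to be projected out of the tuple again, which is exactly the boilerplate a section construct would avoid.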

Less parentheses 1: Bulleted argument lists The next two proposals are both about removing parentheses. I believe that Haskell's tendency to express complex code with no or few parentheses is one of its big strengths, as it makes it easier to visually parse programs. A common idiom is to use the $ operator to separate a function from a complex argument without parentheses, but it does not help when there are multiple complex arguments. For that case I propose to steal an idea from the surprisingly successful markup language markdown, and use bulleted lists to indicate multiple arguments:
foo :: Baz
foo = bracket
          • some complicated code
          • that is evaluated first
          • other complicated code for later
          • even more complicated code
I find this very easy to visually parse and navigate. It is actually possible to do this now, if one defines (•) = id with infixl 0 • (see the sketch after this list). A dedicated syntax extension (-XArgumentBullets) is preferable:
  • It only really adds readability if the bullets are nicely vertically aligned, which the compiler should enforce.
  • I would like to use $ inside these complex arguments, and multiple operators of precedence 0 do not mix. (infixl -1 would help).
  • It should be possible to nest these, and distinguish different nesting levels based on their indentation.
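Here is a minimal, self-contained sketch of the (•) = id encoding mentioned above; foldr and the literals merely stand in for the complicated arguments:
infixl 0 •
(•) :: (a -> b) -> a -> b
(•) = id

-- Each bullet applies the function built up so far to the next argument,
-- so this is just foldr (+) 0 [1, 2, 3].
example :: Int
example = foldr
            • (+)
            • 0
            • [1, 2, 3]
This already gives the visual layout, but as the list above points out, it cannot enforce the alignment and does not mix with $.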

Less parentheses 2: Whitespace precedence The final proposal is the most daring. I am convinced that it improves readability and should be considered when creating a new language. As for Haskell, I am at the moment not proposing this as a language extension (but could be convinced to do so if there is enough positive feedback). Consider this definition of append:
(++) :: [a] -> [a] -> [a]
[]     ++ ys = ys
(x:xs) ++ ys = x : (xs++ys)
Imagine you were explaining the last line to someone orally. How would you speak it? One common way to do so is to not read the parentheses out aloud, but rather to speak the parenthesised expressions more quickly and add pauses otherwise. We can do the same in syntax!
(++) :: [a] -> [a] -> [a]
[]   ++ ys = ys
x:xs ++ ys = x : xs++ys
The rule is simple: A sequence of tokens without any space is implicitly parenthesised. The reaction I got in Oxford was horror and disgust. And that is understandable: we are very used to ignoring spacing when parsing expressions (unless it is indentation, of course. Then we are no longer horrified, as our non-Haskell colleagues are when they see our code). But I am convinced that once you let the rule sink in, you will have no problem parsing such code with ease, and soon even with greater ease than the parenthesised version. It is a very natural thing to look at the general structure, identify compact "chunks of characters", mentally group them, and then go and separately parse the internals of the chunks and how the chunks relate to each other. More natural than first scanning everything for ( and ), matching them up, building a mental tree, and then digging deeper. Incidentally, there was a non-programmer present during my presentation, and while she did not openly contradict the dismissive groan of the audience, I later learned that she found this variant quite obvious to understand and easier to read than the parenthesised code. Some FAQs about this:
  • What about an operator with space on one side but not on the other? I'd simply forbid that, and hence enforce readable code.
  • Do operator sections still require parentheses? Yes, I'd say so.
  • Does this overrule operator precedence? Yes! a * b+c == a * (b+c).
  • What is a token? Good question, and I am not yet decided. In particular: Is a parenthesised expression a single token? If so, then (Succ a)+b * c parses as ((Succ a)+b) * c, otherwise it should probably simply be illegal.
  • Can we extend this so that one space binds tighter than two spaces, and so on? Yes we can, but really, we should not.
  • This is incompatible with Agda's syntax! Indeed it is, and I really like Agda's mixfix syntax. Can't have everything.
  • Has this been done before? I have not seen it in any language, but Lewis Wall has blogged this idea before.
Well, let me know what you think!

20 August 2017

Joachim Breitner: Compose Conference talk video online

Three months ago, I gave a talk at the Compose::Conference in New York about how Chris Smith and I added the ability to create networked multi-user programs to the educational Haskell programming environment CodeWorld, and finally the recording of the talk is available on YouTube (and is being discussed on reddit). It was the talk where I got the most positive feedback afterwards, and I think this is partly due to how I created the presentation: Instead of showing static slides, I programmed the complete visual display from scratch as an interaction within the CodeWorld environment, including all transitions, a working embedded game of Pong and a simulated multi-player environment with adjustable message delays. I have put the code for the presentation online. Chris and I have written about this for ICFP'17, and thanks to open access I can actually share the paper freely with you, under a CC license. If you come to Oxford you can see me perform a shorter version of this talk again.

6 August 2017

Joachim Breitner: Communication Failure

I am still far from being a professor, but I recently got a glimpse of what awaits you in that role:
From: Sebastian R. < @gmail.com>
To: joachim@cis.upenn.edu
Subject: re: Errors I've spotted a basic error in your course on Haskell (https://www.seas.upenn.edu/~cis194/fall16/). Before I proceed, it's cool if you're not receptive to errors being indicated; I've come across a number of professors who would rather take offense than admit we're all human and thus capable of making mistakes... My goal is to find a resource that might be useful well into the future, and a good indicator of that is how responsive the author is to change. In your introduction note you have written:
n contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of the language . But that does not make them less interesting, and in fact, even the basic boolean type is library defined it just happens to be the standard library.
Howeverm there is no input CodeWorld in the code above. Have you been made aware of this error earlier? Regards, ...
Nice. I like it when people learn from my lectures. The introduction is a bit weird, but ok, maybe this guy had some bad experiences. Strangely, I don't see a mistake in the material, so I respond:
From: Joachim Breitner <noscript>joachim at cis dot upenn dot edu</noscript>
To: Sebastian R. < @gmail.com>
Subject: Re: Errors Dear Sebastian, thanks for pointing out errors. But the first piece of code under Basic Haskell starts with
{-# LANGUAGE OverloadedStrings #-}
import CodeWorld
so I am not sure what you are referring to. Note that these are lecture notes, so you have to imagine a lecturer editing code live on stage along with it. If you only have the notes, you might have to infer a few things. Regards, Joachim
A while later, I receive this response:
From: Sebastian R. < @gmail.com>
To: Joachim Breitner <noscript>joachim at cis dot upenn dot edu</noscript>
Subject: Re: Errors Greetings, Joachim. Kindly open the lecture slides and search for "input CodeWorld" to find the error; it is not in the code, but in the paragraph that implicitly refers back to the code. You might note that I quoted this precisely from the lectures... and so I repeat myself... this came from your lectures; they're not my words!
In contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of the language . But that does not make them less interesting, and in fact, even the basic boolean type is library defined it just happens to be the standard library.
This time around, I've highlighted the issue. I hope that made it easier for you to spot... Nonetheless, I got my answer. Don't reply if you're going to fight tooth and nail about such a basic fix; it's simply a waste of both of our time. I'd rather learn from somewhere else... On Tue, Aug 1, 2017 at 11:19 PM, Joachim Breitner <noscript>joachim at cis dot upenn dot edu</noscript> wrote:
I am a bit reminded of Sean Spicer ("they're not my words!"), but clearly I am missing something. And indeed I am: In the code snippet, I wrote correctly import CodeWorld, but in the text I had input CodeWorld. I probably did write LaTeX before writing the lecture notes. Well, glad to have that sorted out. I fixed the mistake and wrote back:
From: Joachim Breitner <noscript>joachim at cis dot upenn dot edu</noscript>
To: Sebastian R. < @gmail.com>
Subject: Re: Errors Dear Sebastian, nobody is fighting, and I see the mistake now: The problem is not that the line is not in the code, the problem is that there is a typo in the line and I wrote input instead of import . Thanks for the report, although you did turn it into quite a riddle a simple you wrote import when it should have been import would have been a better user of both our time. Regards, Joachim On Thursday, 03.08.2017, 13:32 +1000, Sebastian R. wrote:
(And it seems I now made the inverse typo, writing "import" instead of "input".) Anyways, I did not think of this any more until a few days later, when I found this nice message in my mailbox:
From: Sebastian R. < @gmail.com>
To: Joachim Breitner <noscript>joachim at cis dot upenn dot edu</noscript>
Subject: Re: Errors
a simple you wrote import when it should have been import would have been a better user of both our time.
We're both programmers. How about I cut ALL of the unnecessary garbage and just tell you to s/import/input/ on that last quotation (the thing immediately before this paragraph, in case you didn't know). I blatantly quoted the error, like this:
In your introduction note you have written:
n contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of the language . But that does not make them less interesting, and in fact, even the basic boolean type is library defined it just happens to be the standard library.
Howeverm there is no input CodeWorld in the code above.
Since that apparently wasn't clear enough, in my second email to you I had to highlight it like so:
You might note that I quoted this precisely from the lectures... and so I repeat myself... this came from your lectures; they're not my words!
In contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of the language . But that does not make them less interesting, and in fact, even the basic boolean type is library defined it just happens to be the standard library.
This time around, I've highlighted the issue. I hope that made it easier for you to spot...
I'm not sure if you're memeing at me or not now, but it seems either your reading comprehension, or your logical deduction skills might be substandard. Unfortunately, there isn't much either of us can do about that, so I'm happy to accept that some people will be so stupid; after all, it's to be expected and if we don't accept that which is to be expected then we live our lives in denial. Happy to wrap up this discusson here, Seb... On Fri, Aug 4, 2017 at 12:22 AM, Joachim Breitner <noscript>joachim at cis dot upenn dot edu</noscript> wrote:
Well, I chose to be amused by this, and I am sharing my amusement with you.
