Search Results: "calculus"

18 July 2020

Chris Lamb: The comedy is over

By now everyone must have seen the versions of comedy shows with the laugh track edited out. The removal of the laughter doesn't just reveal the artificial nature of television and how it conscripts the viewer into laughing along; by subverting key conversational conventions, it reveals some of the myriad and subtle ways humans communicate with one another:
Although the show's conversation is ostensibly between two people, the viewer serves as a silent third actor with whom the characters, and therefore we, are meant to laugh along. Then, when this third character is forcibly muted, viewers not only have to endure the stilted gaps, they also sense an uncanny loss of familiarity by losing their 'own' part in the script. A similar phenomenon can be seen in other art forms. In Garfield Minus Garfield, the forced negative spaces that these removals introduce are discomfiting, almost to the level of performance art:
But when the technique is applied to other TV shows such as The Big Bang Theory, it is unsettling in entirely different ways, exposing the dysfunctional relationships and the adorkable misogyny at the heart of the show:
Once you start to look for it, the ur-elements of audience, response and timing in the way we communicate are everywhere, from the gaps we leave so that others instinctively know when we have finished speaking, to the myriad ways you can edit a film. These components are always present; it is only when one of them is taken away that they become more apparent. Today, the small delays added by videoconferencing add an uncanny awkwardness to many of our everyday interactions too. It is said that "comedy is tragedy plus timing", so it is unsurprising that Zoom's undermining of timing leads, by this simple calculus of human interactions, to feelings of... tragedy.

Leaving aside the usual comments about Pavlovian conditioning and the shows that are the exceptions, complaints against canned laughter are the domain of the pub bore. I will therefore only add two brief remarks. First, rather than being cynically added to paper over the lack of 'real' comedy, laugh tracks were initially added to replicate the live audiences of existing shows. In other words, without a laugh track, these new shows might have ironically appeared almost as eerie as the fan edits cited above do today. Secondly, although laugh tracks are described as "false", this is not entirely correct. After all, someone did actually laugh, even if it was at an entirely different joke. In his Simulacra and Simulation, cultural theorist Jean Baudrillard might have poetically identified canned laughter as a "reflection of a profound reality", rather than an outright falsehood. One day, when this laughter becomes entirely algorithmically generated, Baudrillard would describe it as "an order of sorcery", placing it metaphysically on the same level as the entirely pumpkin-free Pumpkin Spiced Latte.

For a variety of reasons I recently decided to try interacting with various social media platforms in new ways. One way of loosening my addiction to this pornography of the amygdala was to hide the number of replies, 'likes' and related numbers:
The effect of installing this extension was immediate. I caught my eyes darting to where the numbers had been and realised I had been subconsciously looking for the input, and perhaps even the outright validation, of the masses. To be sure, these numbers can be relevant and sometimes useful, but they do implicitly involve delegating part of your responsibility of thinking for yourself to the vox populi, the Greek chorus of the 21st century. Like many of you reading this, I am sure I told myself that the number of 'likes' has no bearing on whether I should agree with something, but hiding the numbers reveals much of this might have been a convenient fiction; as an entire century of discoveries in behavioural economics has demonstrated, all the pleasingly satisfying arguments for rational free-market economics stand no chance against our inherently buggy mammalian brains.

Tying a few things together, when attempting to doomscroll through social media without these numbers, I realised that social media without the scorecard of engagement is almost exactly like watching these shows without the laugh track. Without the number of 'retweets', the lazy prompts that remind you exactly when, how and for how much to respond are removed, and replaced with the same stilted silences of those edited scenes from Friends. At times, the existential loneliness of Garfield Minus Garfield creeps in too, and there is more than enough of the dysfunctional, validation-seeking and parasocial 'conversations' of The Big Bang Theory. Most of all, the whole exercise permits a certain level of detached, critical analysis, allowing one to observe that the platforms often feel like a pre-written script with your 'friends' cast as actors, all perpetuated on the heady fumes of rows INSERT-ed into a database on the other side of the world. I'm not quite sure how this will affect my usage of the platforms, and any time spent away from these sites may mean fewer online connections at a time when we all need them the most. But as Karal Marling, professor at the University of Minnesota, wrote about artificial audiences: "Let me be the laugh track."

8 January 2017

Bits from Debian: New Debian Developers and Maintainers (November and December 2016)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!

4 November 2015

Arturo Borrero González: Rant: Software Engineering in the University of Seville



I'm an IT engineering student at the University of Seville, where there are three flavours to choose from:
* Computer engineering: mostly focused on hardware
* Software engineering: focused on software
* IT engineering: a mixture of things, including subjects dedicated to security and virtualization

Each of these lasts a total of four years (or, better said, four academic years).

Software Engineering is mostly the traditional IT engineering and it's what I'm doing.

The first year is common to almost all engineering degrees: algebra, calculus, physics, electronics and so on.
You may find these subjects in all IT engineering flavours and also in other traditional engineering degrees
like telecommunications, mechanical, industrial or aeronautics.

In the second year, the subjects tend to be specific for IT engineering: networking, databases, algorithms and so on.
The third and fourth courses are almost fully specific for each flavour of IT engineering.

The problem I've found is in the third year of Software engineering.
We have here subjects like the following:
* requirements engineering
* design and testing
* software process management

All of these are focused on what they consider our biggest career opportunity: the regional government.

It turns out this is not Silicon Valley, and the regional government (Junta de Andalucía) is in fact the biggest IT player in our region.
They run thousands of servers, networks and digital services both for the public and for internal administrative usage.
The government is known for running a few specific technologies at massive scale: Oracle and Java. Almost all of the biggest applications run on these technologies.

The problem here is that our subjects are aimed squarely at these technologies, using methodologies
that I consider outdated. They assume you are going to work in a classical software factory in the classical software consulting business.
And they do this explicitly: on the first day of the 'design & testing' subject, the professor told us that we are likely going to work in the 'local industry', so we need to learn these technologies and no others.

An example: last week a professor gave me a VirtualBox (sic) virtual machine running Windows XP (yeah...) just to run Eclipse (for Java development) in a controlled environment. They expect the host OS to be Windows as well.
Another example: to this day, no subject which requires the use of a VCS is using git. They stick to svn.

This is very sad!

The other day, it felt even worse: a Red Hat internship was announced on a college mailing list, and it explicitly excluded students from Software Engineering... for all the reasons above. How is this possible? Red Hat is a software company!

Let's be constructive. These are some things I would like to see in Software Engineering:
* other software workflows (for example, the Linux kernel development model or the Debian package lifecycle)
* modern free libre open source technologies! Python! Golang!
* packaging (for example: Debian, RPM, Docker...)
* new database systems (NoSQL) or at least no more Oracle (please switch to PostgreSQL or whatever)
* no more Java! Enough of Java!

BTW some time ago I read somewhere that no student should graduate without contributing to a major open source project. I can't agree more.

24 September 2015

Joachim Breitner: The Incredible Proof Machine

In a few weeks, I will have the opportunity to offer a weekend workshop to selected and motivated high school students¹ on a topic of my choice. My idea is to tell them something about logic, proofs, and the joy of searching and finding proofs, and the gratification of irrevocable truths. While proving things on paper is already quite nice, it is much more fun to use an interactive theorem prover, such as Isabelle, Coq or Agda: you get immediate feedback, you can experiment and play around if you are stuck, and you get lots of small successes. Someone² once called interactive theorem proving "the world's geekiest videogame". Unfortunately, I don't think one can get high school students without any prior knowledge in logic, or programming, or fancy mathematical symbols, to do something meaningful with a system like Isabelle, so I need something that is (much) easier to use. I always had this idea in the back of my head that proving is not so much about writing text (as in normally written proofs) or programs (as in Agda) or labeled statements (as in Hilbert-style proofs), but rather something involving the facts I have proven so far floating around freely, and a way to combine these facts into new facts, without the need to name them, or put them in a particular order or sequence. In a way, I'm looking for LabVIEW wrestled through the Curry-Howard isomorphism. Something like this:
A proof of implication currying

So I set out, rounded up a few contributors (thanks!), implemented this, and now I proudly present: The Incredible Proof Machine³. This interactive theorem prover allows you to perform proofs purely by dragging blocks (representing proof steps) onto the paper and connecting them properly. There is no need to learn syntax, and hence no frustration about getting that wrong. Furthermore, it comes with a number of example tasks to experiment with, so you can simply see it as a challenging computer game and work through them one by one, learning something about the logical connectives and how they work as you go.

For the actual workshop, my plan is to let the students first try to solve the tasks of one session on their own, let them draw their own conclusions and come up with an idea of what they just did, and then deliver an explanation of the logical meaning of what they did.

The implementation is heavily influenced by Isabelle: the software does not know anything about, say, conjunction (∧) and implication (→). To the core, everything is but an untyped lambda expression, and when two blocks are connected, it does unification⁴ of the propositions present on either side. This general framework is then instantiated by specifying the basic rules (or axioms) in a descriptive manner. It is quite feasible to implement other logics or formal systems on top of this as well. Another influence of Isabelle is the non-linear editing: you neither have to create the proof in a particular order nor have to manually manage a "proof focus". Instead, you can edit any bit of the proof at any time, and the system checks all of it continuously.

As always, I am keen on feedback. Also, if you want to use this for your own teaching or experimenting needs, let me know. We have a mailing list for the project, and the code is on GitHub, where you can also file bug reports and feature requests. Contributions are welcome! All aspects of the logic are implemented in Haskell and compiled to JavaScript using GHCJS; the UI is plain hand-written and messy JavaScript code, using JointJS to handle the graph interaction.

Obviously, there is still plenty that can be done to improve the machine. In particular, the ability to create your own proof blocks, such as proof by contradiction, prove them to be valid and then use them in further proofs, is currently being worked on. And while the page will store your current progress, including all proofs you create, in your browser, it needs better ways to save, load and share tasks, blocks and proofs. Also, we'd like to add some gamification, i.e. achievements ("First proof by contradiction", "50 theorems proven"), statistics, maybe a "share this theorem on Twitter" button. As the UI becomes more complicated, I'd like to investigate moving more of it into the Haskell world and use Functional Reactive Programming, i.e. Ryan Trinkle's reflex, to stay sane.

Customers who liked The Incredible Proof Machine might also like these artifacts, which I found while checking whether something like this already exists:

  1. Students with a migration background supported by the START scholarship
  2. Does anyone know the reference?
  3. We almost named it "Proofcraft", which would be a name our current Minecraft-wild youth would appreciate, but it is already taken by Gerwin Klein's blog. Also, the irony of a theorem prover being in-credible is worth something.
  4. Luckily, two decades ago, Tobias Nipkow published a nice implementation of higher-order pattern unification as ML code, which I transliterated to Haskell for this project.
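To give a flavor of the unification step described above (and in footnote 4): below is a toy first-order unifier in Python. The real machine uses higher-order pattern unification, and every name here is illustrative only; this is a sketch, not the project's code.

def walk(t, s):
    # Follow variable bindings; strings starting with '?' are variables
    while isinstance(t, str) and t.startswith('?') and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    # Return an extended substitution, or None on clash (no occurs check: a toy!)
    s = {} if s is None else s
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a.startswith('?'):
        return {**s, a: b}
    if isinstance(b, str) and b.startswith('?'):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

# Unify ?P -> ?Q against (x & y) -> ?P, with ('imp', A, B) encoding A -> B:
print(unify(('imp', '?P', '?Q'), ('imp', ('and', 'x', 'y'), '?P')))
# {'?P': ('and', 'x', 'y'), '?Q': ('and', 'x', 'y')}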

2 January 2015

Russ Allbery: Review: An Imaginary Tale

Review: An Imaginary Tale, by Paul J. Nahin
Publisher: Princeton University
Copyright: 1998, 2007
Printing: 2010
ISBN: 0-691-14600-4
Format: Trade paperback
Pages: 259
Subtitled "The Story of √-1", An Imaginary Tale is a history of the use of the number mathematicians normally call i. Nahin starts in the 15th century with the era's discussions of solving cubic equations, specifically del Ferro's solution to the depressed cubic. He walks through how to approach that solution with imaginary numbers, provides a brief introduction to the complex number plane, and then explains that del Ferro didn't follow the logic in those directions at all. Mathematicians at the time were dubious about negative numbers because they were not intuitive representations of real-world quantities; the square root of negative numbers was considered simply impossible. Nahin continues in this historical vein for three chapters, walking through the mathematical problems that arose from the analysis of cubics and the constant appearance of imaginary numbers, the early attempts to find a geometric interpretation, and then the slow development of the modern conception of imaginary numbers in the 19th century. The emphasis throughout is on the specifics of the math, not on the personalities, although there are a few personal tidbits. Along the way, he takes frequent side journeys to highlight the various places complex numbers are useful in solving otherwise-intractable problems.

After that initial history come two chapters of applications of complex numbers: vector analysis, Kepler's laws, applications in electrical engineering, and more. He does win a special place in my heart by walking through the vector analysis problem that George Gamow uses to demonstrate complex numbers in One Two Three... Infinity: a treasure map whose directions depend on landmarks that no longer exist. Following that is a great chapter on deeper mathematical wizardry involving i, including Euler's identity, i^i, and a pretty good explanation of hyperbolic functions. The final chapter is an introduction to complex function theory.

One's opinion of this book is going to vary a lot depending on what type of history of math you like to read. Unfortunately, it wasn't what I was hoping for. That doesn't make it a bad book; other reviewers have recommended it highly, and I think it would be a great book for someone with slightly different interests. But it's a very mathematical book. It's full of proofs, calculations, and analysis, and assumes that you remember a reasonable amount of algebra and calculus to follow along. It's been a long time since I studied math, and while I probably could have traced the links between steps of his proofs and derivations with some effort, I found myself skimming through large chunks of this book. I love histories of mathematics, and even popularizations of bits of it (particularly number theory), but with the emphasis on the popularization. If you're like me and are expecting something more like The Music of the Primes, or even One Two Three... Infinity, be warned that this is less about the people or the concepts and more about the math itself. If you've read books like that and thought they needed more equations, more detail, and more of the actual calculations, this may be exactly what you're looking for.

Nahin is fun to read, at least when I wasn't getting lost in steps that are obvious to him. He's quite enthusiastic about the topic and clearly loves being able to show how to take apart a difficult mathematical equation using a novel technique. His joy reminds me of when I was most enjoying my techniques of integration class in college.
Even when I started skimming past the details, I liked his excitement. This wasn't what I was looking for, so I can't exactly recommend it, but hopefully this review will help you guess whether you would like it. It's much heavier on the mathematics and lighter on the popularization, and if that's the direction you would have preferred the other similar books I've reviewed to go, this may be worth your attention. Rating: 6 out of 10
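(A quick aside on the i^i mentioned in the review: its principal value is the real number e^(-π/2), which two lines of Python confirm.)

import math
print(1j ** 1j)                # (0.20787957635076193+0j)
print(math.exp(-math.pi / 2))  # 0.20787957635076193, the same real number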

31 December 2012

Joey Hess: no longer a perl programmer

This year, I've gradually realized that I no longer identify as a perl programmer. For a decade and a half, perl was the language I reached for to solve any problem that didn't have a good reason to be solved in some other language. Now I only reach for it in the occasional one-liner -- and even then I'm more likely to find myself in ghci and end up with a small haskell program. I still maintain plenty of perl code, but even when I do, I'm not thinking in perl, but translating from some internal lambda calculus. There are quite a few new-ish perl features that I have not bothered to learn, and I can feel some of the trivia that perl encourages one to keep in mind slipping gradually away. Although the evil gotchas remain fresh in my mind!

More importantly, my brain's own evaluation of code has changed; it doesn't evaluate it imperatively (unless forced to by an appropriate monad), but sees the gestalt, sees the data flow, and operates lazily and sometimes, I think, in parallel. The closest I can come to explaining the feeling is how you might feel when thinking about a shell pipeline, rather than a for loop. Revisiting some of my older haskell code, I could see the perl thinking that led to it. And rewriting it into pure, type-driven code that took advantage of laziness for automatic memoization, I saw conclusively that the way I think about code has changed. (See the difference for yourself: before after )

I hear of many people who enjoy learning lots of programming languages, one after the other. A new one every month, or year. I suspect this is a fairly shallow learning. I like to dive deep. It took me probably 6 years to fully explore every depth of perl. And I never saw a reason to do the same with python or ruby or their ilk; they're too similar to perl for it to seem worth the bother. Though they have less arcana in their learning curves and are probably better, there's not enough value to redo that process. I'm glad haskell came along as a language that is significantly different enough that it was worth learning. The deep dive for haskell goes deep indeed. I'm already 5 years in, and have more to learn now than I ever did before. I'm glad I didn't get stuck on perl. But I may be stuck on haskell now instead, for the foreseeable future. I'd sort of like to get really fluent in javascript, but only as a means to an end -- and haskell to javascript compilers are getting sufficiently good that I may avoid it. Other than that, I sense Agda and Coq beckoning with their promises of proof. Perhaps one of these years. Of course if Bradley Kuhn is right and perl is the new cobol, I know what I'll be doing come the unix rollover in 2038. ;)

29 August 2012

Russell Coker: Woolworths Maths Fail

picture of discount from $3.99 to $3.00 advertised as 20% off
The above is a picture of the chocolate display at Woolworths, an Australian supermarket that was formerly known as Safeway; it had the same logo as the US Safeway, so there's probably a connection. This is actually a 24.81% discount (the arithmetic is spelled out below). It's possible that some people might consider it a legal issue to advertise something as a 25% discount when it's 1 cent short of that (even though we haven't had a coin smaller than 5 cents in Australia since 1991). But then if they wanted to advertise a discount percentage that's a multiple of 5%, they could have made the discount price $2.99; presumably whatever factors made them choose the original price of $3.99 instead of $4.00 would also apply when choosing a discount price. So the question is, do Woolworths have a strict policy of rounding down discount rates to the nearest 5%, or do they just employ people who failed maths in high school? Sometimes when discussing education people ask rhetorical questions such as "when would someone use calculus in real life?"; I think that the best answer is that people who have studied calculus probably won't write such stupid signs. Sure, the claimed discount is technically correct, as they don't say "no more than 20% off", and it is not misleading in a legal sense (it's OK to claim less than you provide), but it's annoyingly wrong. Well educated people don't do that sort of thing. As an aside, the chocolate in question is Green and Black, a premium chocolate line that is Fair Trade, Organic, and very tasty. If you are in Australia then I recommend buying some because $3.00 is a good price. Related posts:
  1. fair trade is the Linux way I have recently purchased a large quantity of fair trade...
  2. LUG Meetings etc Recently I was talking to an employee at Safeway (an...
  3. The Sad State of Shopping in Australia Paul Wayper has written a blog post criticising the main...
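The arithmetic behind the sign, spelled out as a quick Python check (prices as shown on the sign):

old, new = 3.99, 3.00
print(f"{(old - new) / old:.2%} off")  # 24.81% off, so "20% off" undersells it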

20 September 2011

Maximilian Attems: Recurring Maxima failures

Every once in a while I try out the latest Debian/sid Maxima, as it sees continuous development (Maxima git repo). Today's failure is an integral given at the block course "Aspects of QCD at Finite Density". It is an exercise to calculate the following simple integral, which should just give a Bessel function:
(%i3) integrate(exp(z*cos(t))*cos(a*t), t, 0, %pi);
                           %pi
                          /
                          [               cos(t) z
(%o3)                     I    cos(a t) %e         dt
                          ]
                          /
                           0
As usual, the result it returns is the integral itself... :/ So yes, indeed, Maxima is nice for simple undergrad calculations (Maximum Calculus with Maxima), but unfortunately don't expect much for more complex problems. The result is the partition function of chiral perturbation theory in the simple setup of equal quark masses and one quark flavour family. Sadly, integrals returning Bessel functions seem to regularly fail.
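For reference, the expected closed form is known: for integer a = n, the integral equals π·I_n(z), with I_n the modified Bessel function of the first kind. A quick numerical cross-check with SciPy (a sketch; nothing to do with Maxima itself):

import numpy as np
from scipy.integrate import quad
from scipy.special import iv

n, z = 2, 1.5  # arbitrary example values
numeric, _ = quad(lambda t: np.exp(z * np.cos(t)) * np.cos(n * t), 0, np.pi)
print(numeric, np.pi * iv(n, z))  # the two values agree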

21 April 2011

Joachim Breitner: A talk on Church s result about the Entscheidungsproblem

A newly founded Logic Group in Mumbai, which seems to be a private thing, though some professors from IIT Bombay are taking part, has approached me and asked if I could give a talk about Church's lambda calculus and how he used it to show that Hilbert's Entscheidungsproblem is not solvable. They are starting a series of talks on logic on the occasion of Turing's 100th birth year, and because I am finally leaving the IIT campus in two days, my talk turned out to be the opening talk of the series. At a rough guess, about fifty people attended, some professors, but most of them students from fields other than mathematics and computer science. I hope that they were not too confused by the distinctions between truth and provability, or expressibility and computability, of which I did not give a proper explanation.
I started with a historical exposition of the state of logic at the beginning of the 20th century, then gave an introduction to the lambda calculus, the encoding of natural numbers in it, and Gödel encoding. Finally I sketched the proof of the unsolvability of the question whether a lambda term has a normal form, and then concluded by showing how this implies that the Entscheidungsproblem is not solvable.
I had written out the talk in full beforehand, and again I ended up saying completely different things, or at least saying things completely differently. Nevertheless, I am sharing the (planned) text of my talk, including the timeline that I drew on the whiteboard.
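As a taste of the natural-number encoding mentioned above, here is a sketch of Church numerals in Python (the talk itself of course used plain lambda calculus, not Python):

zero = lambda f: lambda x: x                     # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))  # one more application of f

def to_int(n):
    # Decode a numeral by counting how many times it applies its argument
    return n(lambda k: k + 1)(0)

print(to_int(succ(succ(zero))))  # 2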

23 January 2011

Joachim Breitner: A Mistake in Church s Paper

For a course at the IIT Bombay on functional programming, I was preparing a presentation on Alonzo Church's theorem, based on his original 1936 paper "An Unsolvable Problem of Elementary Number Theory", where he first showed that the Entscheidungsproblem is not solvable. In that paper, he states a theorem (Theorem II): "If a formula has a normal form [...] any sequence of reductions of the formula must (if continued) terminate in the normal form." This seems to be wrong. Consider the lambda expression KIΩ, where K=(λxy.x), I=(λx.x) and Ω=(λx.xx)(λx.xx). This has a normal form, I. But because Ω reduces to Ω, there is a non-terminating sequence KIΩ → KIΩ → ..., in contradiction to Church's claim. The same example contradicts his Theorem III: "If a formula has a normal form, every well-formed part of it has a normal form.", which is used in the very proof of his main result, Theorem XVIII. There is no proof of Theorem II in this paper; he cites it from the then-forthcoming paper "Some properties of Conversion" by him and J. B. Rosser. There, proofs are given (see Theorem 2 and its Corollary), but they are quite impenetrable to me. So I was searching for some correction paper, or any other discussion of this issue, but could not find any. Does any reader of this blog know more? What happened in those times when a paper had an error in some proof? What would happen now? Or is this not an error after all, but just a subtle difference between the definitions of the λ-calculus as we know it and as they introduced it?

Unrelated to this question: the professor asked me to write down a summary of (my interpretation of) the importance and impact of Church's paper with regard to Gödel and Hilbert's program, and how that relates to Böhm's theorem. In the spirit of sharing, I have uploaded my thoughts on Church and Böhm; comments are welcome.

Update: Christian von Essen solved the riddle in a comment to this post: Church only considers lambda abstractions as well-formed when the bound variable actually occurs (freely) in the abstracted term. This does not allow for the K combinator, and thus there is no problem.
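To see concretely why KIΩ has a normal form even though Ω diverges, here is a small Python sketch; Python is strict, so Ω is hidden behind a thunk, and what this illustrates is reduction order, not Church's exact system:

K = lambda x: lambda y: x
I = lambda x: x
omega = lambda: (lambda x: x(x))(lambda x: x(x))  # Omega, never forced here

# Reducing K first discards its second argument, so Omega is never evaluated:
print(K(I)(omega) is I)  # True: the normal form I is reached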

12 March 2010

Matt Brubeck: Discovering Urbit: Functional programming from scratch

C. Guy Yarvin is a good friend of Mencius Moldbug, a pseudonymous blogger known for iconoclastic novella-length essays on politics and history (and occasionally computer science). Guy recently published under his own name a novel project in language and systems design. His own writing about his work is entertaining but verbose (as Moldbug's readers might expect), so I will attempt to summarize it here.

Nock, Urbit, Watt

First there is Nock, a tool for defining higher-level languages comparable to the lambda calculus, but meant as foundational system software rather than foundational metamathematics. Its primitives include positive integers with equality and increment operators, cons cells with car/cdr/cadr/etc., and a macro for convenient branching. Nock uses trees of integers to represent both code and data. Next, Guy provides the rationale for Nock. In short, he asks how a planet-wide computing infrastructure (OS, networking, and languages) would look if designed from first principles for robustness and interoperability. The answer he proposes is Urbit: a URI-like name space distributed globally via content-centric networking, with a feudal structure for top-level names and cryptographic identities. Urbit is a static functional name space: it is both referentially transparent and monotonic (a name, once bound to a value, cannot be un- or re-bound). Why does this require a new formal logic and a new programming language? In Urbit, all data and code are distributed via the global namespace. For interoperability, the code must have a standard format. Nock's minimal spec is meant to be an unambiguous, unchanging, totally standardized basis for computation in Urbit. Above it will be Watt, a self-hosting language that compiles to Nock. Urbit itself will be implemented in Watt, so Nock and Watt are designed to treat data as code using metacircular evaluation.

The code

A prototype implementation of Watt is on GitHub. It is not yet self-hosting; the current compiler is written in C. Watt is a functional language with static types called molds and a mechanism for explicit lazy evaluation. (I was surprised to find I had accidentally created an incompatible lazy dialect of Nock, despite its goal of unambiguous semantics, just by implementing it in Haskell.) The code is not fully documented, but the repository contains draft specs for both Watt and Urbit. Beware: the syntax and terminology are a bit unconventional. Guy has offered a few exercises to help get started with Nock and Watt (a Python sketch of the classic approach to the first one follows the list):
The Nock challenge:
Write a decrement operator in Nock, and an interpreter that can evaluate it.
Basic Watt:
Write an integer square root function in Watt.
Advanced Watt:
How would you write a function that tests whether molds A and B are orthogonal (no noun is in both A and B)? Or compatible (any noun in A is also in B)? Are these functions NP-complete? If so, how might one work around this in practice?
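As a hint at the flavor of the first exercise: since Nock's only arithmetic primitive is increment, decrement is conventionally written as a count-up loop. A sketch of that idea in Python rather than Nock:

def dec(n):
    # Only increment and equality are used, as in Nock; assumes n > 0.
    # O(n) running time, which is exactly the point of the exercise.
    m = 0
    while m + 1 != n:
        m += 1
    return m

print(dec(42))  # 41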
If you want to learn more, start with these problems. You can email your solutions to Guy.

Will it work?

I find Urbit intellectually appealing; it is a simple and clean architecture that could potentially replace a lot of complex system software. But can we get there from here? Guy imagines Urbit as the product of an ages-old Martian civilization:
Since Earth code is fifty years old, and Martian code is fifty million years old, Martian code has been evolving into a big ball of mud for a million times longer than Earth software. (And two million times longer than Windows.) Therefore, at some point in Martian history, some abject fsck of a Martian code-monkey must have said: fsck this entire fscking ball of mud. For lo, its defects cannot be summarized; for they exceed the global supply of bullet points; for numerous as the fishes in the sea, like the fishes in the sea they fsck, making more little fscking fishes. For lo, it is fscked, and a big ball of mud. And there is only one thing to do with it: obliterate the trunk, fire the developers, and hire a whole new fscking army of Martian code-monkeys to rewrite the entire fscking thing. This is the crucial inference we can draw about Mars: since the Martians had 50 million years to try, in the end they must have succeeded. The result: Martian code, as we know it today. Not enormous and horrible: tiny and diamond-perfect. Moreover, because it is tiny and diamond-perfect, it is perfectly stable and never changes or decays. It neither is a big ball of mud, nor tends to become one. It has achieved its final, permanent and excellent state.
Do Earthlings have the will to throw out the whole ball of mud and start from scratch? I doubt it. We can build Urbit but no one will come, unless it solves some problem radically better than current software. Moldbug thinks feudalism will produce better online reputation, but feudal reputation does not require feudal identity; it is not that much harder to build Moldbug's reputation system on Earth than on Mars. I still have not figured out the killer app that will get early adopters to switch to Urbit.

1 April 2009

Nicolas Valcárcel: Guadalinex-Edu Beta Announcement

Guadalinex-EDU: New educational distribution based on Jaunty

Guadalinex Edu is a new GNU/Linux distribution developed by the Advanced Center for ICT Schools Management (CGA), initially created for use by the educational community of Andalucía, Spain, and which will also be available to teachers and students in the domestic sphere. The Guadalinex Edu distribution is equivalent to the future Guadalinex V6 for citizens, but oriented to the educational sector.

Seeking the highest possible level of stability for the distribution, the CGA will be releasing a beta version, installable by anyone who wants it. The aim is to involve the educational community in the stabilization and development of Guadalinex Edu before its deployment in local schools.

Guadalinex Edu Beta 1 is fully functional, but one should not forget that this is a beta version which may contain errors, both from the applications included by the CGA in the distribution and from the base distribution on which it must be installed (Ubuntu 9.04, Jaunty Jackalope). It is therefore not recommended for use on production computers and, in any case, it is recommended to make a backup of important data before installation.

Within an ICT Center, Guadalinex Edu will take advantage of the center's network infrastructure and servers to offer:

For domestic users, Guadalinex Edu is offered as a meta package to be installed on Ubuntu Jaunty and offers tons of applications like:

Plus all the benefits of Guadalinex:

And Ubuntu:

Installation instructions for home users are available here (in Spanish)

If you have a Launchpad account, you can add comments and report bugs at the following addresses (if you do not have an account, you can register for one with Launchpad):

Comments:
https://launchpad.net/guadalinexedu/+addquestion

Bug reports:
https://bugs.launchpad.net/guadalinexedu

Original link in Spanish here

7 December 2008

Muammar El Khatib: Simple Life + Debian + Science

Two days ago, I arrived back from Choroní, one of the most beautiful beaches in Venezuela. The "II Congreso de Fisicoquímica Teórica y Computacional" (Congress of Theoretical and Computational Physical Chemistry) was held there. The congress was really nice because it was a meeting of quantum chemists from Venezuela and from other countries. I went there on a bus that La Universidad del Zulia assigned for me and other attendees. What does this have to do with the title of this post? Well, I had the chance to share time with people (my co-workers) who, in some ways, live life more simply than I do. For example, they don't care whether they use Linux, Windows, or Mac. They don't care whether they listen to salsa or any other music. They don't care whether they use semi-empirical methods or ab initio ones. They don't care whether they study organic compounds or inorganic ones... So I had a reflection: should I be a little more flexible in my points of view? Should I stop being so critical of every field that I study or am a part of? For the moment, looking at what I have been able to achieve being how I am, I think I shouldn't change. But I accept that it was good to see and share time with people who live life a little more simply (and I am aware that I have some things to improve about myself).

There was a poster session with lots of interesting physical chemistry work being carried out in different parts of Venezuela. My poster was related to "the linear and non-linear optical response of the nitrogen bases of DNA, RNA and their tautomers in the gaseous phase". It was interesting to see that most of the computational calculation servers, grids, and clusters were running Debian instead of Red Hat-based systems, which have long been widely used in the world of science in Venezuela :-) I hope to see more and more laboratories in my country using Debian as time passes; indeed, the adoption of Debian in this field has already increased a lot. In the laboratory I am part of, almost all the machines use Debian as the environment for carrying out quantum-chemistry calculations.

14 November 2008

John Goerzen: Education

One of the speakers at OSCon this year — I forget which one — made a point that ran something like this, heavily paraphrased:
Education used to be an end in itself, not a means. It wasn’t about having a high-paying career. It was about knowing the world, about having knowledge and wisdom for its own sake. It was, quite bluntly, the accumulation of useless knowledge by the elite — those that could afford to spend time on such things, knowing that useless knowledge has a way of becoming useful in the most unexpected of ways. How fortunate we are to live in an age where the accumulation of useless knowledge is available to so many, and how sad it is that so few take advantage of it.
What a powerful statement, and it rings true to me. I remember in high school, when people from the local liberal arts college would come and talk. They’d talk about the lifelong value of knowledge in a broad range of disciplines: English, history, political science, religion, science, and the arts. They’d talk about how their graduates went on to lead distinguished lives, how this broad core of knowledge serves a person well through life. I guess I didn’t believe them, because due to their lack of a computer science major, I went elsewhere. That local school may not have been the best choice for me for other reasons, but as I look back on it, I think they had a much stronger message than I realized back then.

Here I am, just two math classes, one computer science class, and one biology class away from a degree. Yet I have had not one class covering the history of east Asia, not one class on different world cultures or religions, and only a very basic understanding of one foreign language (German). This hits me in the face almost every day. Yesterday I was wondering about the history of slavery and racism in Europe. Today I’m curious about China’s history as an economic powerhouse. Last week I was curious about Roman law and daily life. The fact is, everything from philosophy to calculus is screamingly relevant to daily, modern life. We hear talk of “an American revolution” in Washington, of a shift of power in the Senate. It seems we forget that the notion of a Senate is considerably older than the United States is — and that we have such a thing because our founders were aware of this. Macroeconomic theory is thrust in our faces on an almost daily basis these days, yet I’ve never had a class on economics at all. We might feel fear of terrorist attacks, or see our fellow citizens lash out at “the Arabs.” Our own short memories fail to remind us of the light in which we are seen, fail to put the really quite minor terrorist threat in context of what London or Dresden endured in World War II. We demand that our government make us safer, and our government responds by making us less safe but making us *feel* safer at airports.

In my own field, I see some universities buckling to pressure from Business to turn out large numbers of mediocre programmers who know the Java or .NET standard library well, but have no sense of the theory behind computer science, and would be utterly lost if asked to, say, write a recursive QuickSort. I find myself almost completely baffled that some companies that want to hire the world’s best programmers are only looking for people who are already fluent in $LANGUAGE — not ones who are good programmers, and so well-versed in computer science that they can easily pick up any language. I think there is a lot to the argument that a good, broad, classical education can serve a person well in any career. I wish I had realized that a little earlier.
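For reference, the exercise in question really is short. A recursive QuickSort sketched in Python (rather than the Java or .NET of the curricula being criticized); list-comprehension style chosen for clarity, not in-place efficiency:

def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]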

3 January 2008

Ondřej Čertík: SymPy/sympycore (pure Python) up to 5x faster than Maxima (future of Sage.calculus?)

According to this test, sympycore is between 2.5x and 5x faster than Maxima. This is an absolutely fantastic result and also a perfect certificate for Python in scientific computing, especially considering that we are comparing pure Python to LISP.

OK, this made us excited, so we dug deeper and ran more benchmarks. But first, let me make a few general remarks. I want a fast CAS (Computer Algebra System) in Python. A general CAS, that people use, that is useful, that is easily extensible (!), that is not missing anything, that is comparable to Mathematica and Maple -- and most importantly -- I want it now, and I don't care about 30-year horizons (I won't be able to do any serious programming in 30 years anyway). All right. How to do that? Well, many people tried... and failed. The only open-source CAS that has any chance of becoming the open-source CAS, in my own opinion, is Sage. You can read more about my impressions from Sage here. I am actually only interested in mathematical physics, so basically Sage.calculus. Currently Sage uses Maxima, because Maxima is an old, proven, working system, reasonably fast and quite reliable, but written in LISP. Some people like LISP. I don't, and I find it extremely difficult to extend Maxima. Also, even though Maxima is in LISP, it uses its own language for interacting with the user (well, that's not the way). I like Python, so I want to use Python. Sage has written Python wrappers for Maxima, so Sage can do almost everything that Maxima can, plus many other things. But Sage.calculus has issues.

First, I don't know how to extend the wrappers with new things; see my post on sage-devel for details. It's almost two months old with no reaction, which shows that it's a difficult issue (or nonsense :)).

And second, it's slow. For some examples that Sage users have found, even SymPy, as it is now, is 7x faster than Sage, sympycore is 23x faster, and with the recent speed improvements 40x faster than Sage.

So let's improve Sage.calculus. How? Well, no one knows for sure, but I believe in my original idea of a pure Python CAS (SymPy), possibly with some parts rewritten in C. Fortunately, quite a lot of us believe that this is the way.

What is this sympycore thing? In sympy, we wanted to have something now, instead of tomorrow, so we were adding a lot of features without looking too much at speed. But then Pearu Peterson came and said: guys, we need speed too. So he rewrote the core (resulting in a 10x to 100x speedup) and we moved to the new core. But first, the speed still wasn't sufficient, and second, it destabilized SymPy a lot (there are still some problems with caching and assumptions half a year later). So with the next package of speed improvements, we decided to either port them to the current sympy, or wait until the new core stabilizes enough. The new new core is called sympycore now; currently it only has the very basic arithmetic (plus derivatives and simple integrals), but it's very fast. It's mainly done by Pearu. But, for example, the latest speed improvement using s-expressions was invented by Fredrik Johansson, another SymPy developer and the author of mpmath.

OK, let's go back to the benchmarks. The first thing we realized is that Pearu was using CLISP 2.41 (2006-10-13) and had compiled Maxima by hand in the above timings, but when I tried the Maxima in Debian (which is compiled with GNU Common Lisp (GCL) 2.6.8), I got different results: Maxima did beat sympycore.

SymPyCore:

In [5]: %time e=((x+y+z)**100).expand()
CPU times: user 0.57 s, sys: 0.00 s, total: 0.57 s
Wall time: 0.57

In [6]: %time e=((x+y+z)**20 * (y+x)**19).expand()
CPU times: user 0.25 s, sys: 0.00 s, total: 0.25 s
Wall time: 0.25

Maxima:

(%i7) t0:elapsed_real_time ()$ expand ((x+y+z)^100)$ elapsed_real_time ()-t0;
(%o9) 0.41
(%i16) t0:elapsed_real_time ()$ expand ((x + y+z)^20*(x+z)^19)$ elapsed_real_time ()-t0;
(%o18) 0.080000000000005


So when expanding, Maxima is comparable to sympycore (0.41 vs 0.57), but for general arithmetic, Maxima is 3.5x faster. We also compared GiNaC (resp. swiginac):

>>> %time e=((x+y+z)**20 * (y+x)**19).expand()
CPU times: user 0.03 s, sys: 0.00 s, total: 0.03 s
Wall time: 0.03


Then we compared just the (x+y+z)**200:

sympycore:
>>> %time e=((x+y+z)**200).expand()
CPU times: user 1.80 s, sys: 0.06 s, total: 1.86 s
Wall time: 1.92
swiginac:
>>> %time e=((x+y+z)**200).expand()
CPU times: user 0.52 s, sys: 0.02 s, total: 0.53 s
maxima:
(%i41) t0:elapsed_real_time ()$ expand ((x + y+z)^200)$ elapsed_real_time ()-t0;
(%o43) 2.220000000000027


Here GiNaC still wins, but sympycore beats Maxima. The timings really depend on the algorithm used; sympycore uses Miller's algorithm, which is the most efficient.

So then we tried a fair comparison: compare expanding x * y where x and y are expanded powers (to make more terms):

sympycore:
>>> from sympy import *
>>> x,y,z=map(Symbol,'xyz')
>>> xx=((x+y+z)**20).expand()
>>> yy=((x+y+z)**21).expand()
>>> %time e=(xx*yy).expand()
CPU times: user 2.21 s, sys: 0.10 s, total: 2.32 s
Wall time: 2.31
swiginac:
>>> xx=((x+y+z)**20).expand()
>>> yy=((x+y+z)**21).expand()
>>> %time e=(xx*yy).expand()
CPU times: user 0.30 s, sys: 0.00 s, total: 0.30 s
Wall time: 0.30
maxima:
(%i44) xx:expand((x+y+z)^20)$
(%i45) yy:expand((x+y+z)^21)$
(%i46) t0:elapsed_real_time ()$ expand (xx*yy)$ elapsed_real_time ()-t0;
(%o48) 0.57999999999993

So, sympycore is 7x slower than swiginac and 3x slower than maxima. We are still using pure Python, so that's very promising.

When using the s-expression functions directly, evaluating 3*(a*x+..) is 4-5x faster than Maxima in Debian/Ubuntu. So, the headline of this post is justified. :)
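If you want to reproduce this kind of timing outside IPython's %time, a minimal sketch with plain timeit (symbol and function names as in current SymPy):

from timeit import timeit
from sympy import symbols, expand

x, y, z = symbols('x y z')
t = timeit(lambda: expand((x + y + z)**20 * (y + x)**19), number=1)
print(f"expand: {t:.2f} s")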

Conclusion

Let's build the car. Sage has the most features and it is the most complete car. It has issues; some wheels need to be improved (Sage.calculus). Let's change them then. Maybe SymPy could be the new wheel, maybe not, we'll see. SymPy is quite a reasonable car for calculus (it has plotting, it has exports to LaTeX, a nice, simple but powerful command line with ipython and all those bells and whistles, and it can also be used as a regular Python library). But it also has issues; one wheel should be improved. That's the sympycore project.

All those smaller and smaller wheels show that this is indeed the way to go, but a very important thing is to put them back in the car: sympycore back into sympy, and sympy back into Sage, and to integrate them well. While also leaving them as separate modules, so that users who only need one particular wheel can use it.

16 November 2007

Ondřej Čertík: SAGE Days 6

From November 9 till November 15 I attended SAGE Days 6 in Bristol, UK. It was a conference and a coding sprint for the SAGE project, which wants to build something comparable to Mathematica, Maple, Matlab and Magma using only open source components, without reinventing the wheel. Sage also provides excellent novel implementations of many mathematical algorithms (not found in Mathematica/Maple etc.).

I was invited to give a talk about SymPy, a Python library for symbolic mathematics that we are developing.

Here are my notes from each day:

Saturday: one, two
Sunday: one, two
Monday, Tuesday, Wednesday: one

Impressions

Very positive. I started SymPy two years ago because I wanted to play with symbolic mathematics in Python; see for example the presentation I gave at this conference for details. SAGE at that time was just able to do some mathematical things, but it was very weak in calculus, which is SymPy's main domain. This changed in the last half year, when the SAGE people managed to wrap Maxima in Python (which I had thought to be completely impossible), so I started to follow SAGE development more closely. After SD6, I must say I became very excited about the project.

One problem with mailing lists, IRC and other online interaction is that it's very difficult for me to get an impression of the people and the project. Being able to meet the developers and discuss with them face to face gives me that impression very quickly and very accurately.

By far the biggest guarantee that it is worth it for me to contribute to SAGE is the project leader, William Stein. He is very rational and pragmatic (I like these two properties), and after many discussions with him I came to realize that he has basically identical views on the important things as I do; were I in his place, I would make the same decisions as he did and does. That's very nice, because I can concentrate my energy on the things that I want to improve and don't have to worry about the rest, because I know he will do it right.

The other SAGE developers are experts with a similar attitude to William's. It's exciting to be among people who make things happen. For example, one of the authors of Cython, Robert, implemented during SD6 a very nice HTML output that shows Cython code with colors according to how many Python API calls are made on each particular line; clicking a line shows the corresponding C code.

The SAGE project has a high aim and it strictly goes for it, without looking too much to the right or left, and that's how it should be. And it does produce a lot of very useful and high-quality stuff along the way, for example Cython (probably the best wrapper for C/C++ things now; only pypy could possibly beat it, but that's still more of a research project) or the SAGE notebook, which looks like a Mathematica notebook, but better and in a browser (together with a revision history, sharing, SSL encryption, etc.).

The only little problem is that currently the SAGE developers are all mathematicians, and as is well known, mathematicians look at mathematics from a very different perspective than physicists. :) And so I need calculus, advanced calculus, and only when this is working, and working well, can I build some more advanced features on it. SAGE currently goes a little the other way - it has a lot of advanced features, from number theory, modular forms, elliptic curves, etc., but the basic calculus still needs a lot of improvements. SAGE wraps Maxima, because Maxima is quite fast and very well tested, so it works well. But it's difficult to extend and written in LISP, and that's very bad. That's where SymPy could help - it's in Python, very easy to extend, but currently slower than Maxima (rewriting parts of SymPy using Cython, or even C directly, will make it faster, hopefully as fast as Maxima or faster).

SAGE is not yet in Debian, but the SAGE people are working on it. Unfortunately, it's not an easy task.

Conclusion

SAGE is a very promising young project and I think it will succeed in providing an open source alternative to Maple, Matlab, Mathematica and Magma.

7 August 2007

Evan Prodromou: 19 Thermidor CCXV

I'm on the plane down from Vancouver to San Francisco this morning. My flight from Taipei hit Vancouver around 8PM last night, so I stayed at the Fairmont Hotel at the airport. It was definitely a special treat -- the Fairmont's pretty pricey -- but this trip has been really low-budget and I needed a lot of rest and recharge. I had a good room with a view of the airport and the city beyond. My flight over was uneventful -- sleep, eat, Spiderman 3 -- which was good since I'd had such a big last day in Taipei. In the morning, I'd gone out to see Taipei 101, which claims to be the tallest building in the world by some standards. I think there are lots of ways to slice the "tallest building" category -- the CN Tower has a strong claim, but that's only counting the huge antenna on top. And there's an unfinished tower in Dubai that's already close to, or taller than, Taipei 101. Regardless, Taipei 101 is a real interesting site. The elevators are the fastest in the world -- 38 seconds to go from floor 5 to floor 87. The windowed observation deck is interesting, and there's an outdoor deck two flights above it. The outdoor deck is racked by Taipei's scorching heat -- when I was there, it was 31C, up above the city's protective smog belt. The sun and empty sky make for an otherworldly experience. After 101, I took Taipei's local subway, the MRT, to the National Palace Museum. I've found the MRT easy to get around on -- most of the information was available in English as well as Chinese -- but I got a little lost looking for the museum. I was disappointed that the information booth in the Taiwan Main Station had so little tourism information, and all of it in Chinese. (Yeah, I forgot to print out the Wikitravel Taipei guide before leaving the hotel. My bad.) But I managed to wend my way out to Shilin Station and took a cab to the museum. It was worth the search. When the Chinese Nationalists left the mainland for Taiwan, they took with them many of the treasures of the Imperial Palace (much to the chagrin of the victorious Communists). These form the core of the museum's collection, and they're really quite amazing -- articles in bronze, ivory and gold from the Neolithic era to the modern day. While Western culture has gone through so many distinct and discontinuous phases -- Egypt and Mesopotamia, Greco-Roman, Medieval and Modern -- China's has varied much more smoothly and continuously. It's a little disconcerting to see artistic themes from 4000BC that are echoed in items for sale in the stores today. I had to get back to the Wikimania site -- by then almost empty -- to do a GChat conference with Niko. There were a passel of Wikimaniacs still around after the conference looking around Taiwan like I was, and after consulting Wikitravel we decided to go to Taipei's most famous restaurant, Din Something Something. It's got a short menu of delicious steamed dumplings -- pork, veggie, crab and shrimp. The shop's claim to fame is the elastic skin that keeps a pocket of hot broth inside the dumplings. Hey: if you do something well, why vary the menu? It was the best meal I've had in a long time. The dumplings themselves were incredibly good -- moist and rich with complex flavors. But the company was even better -- about 14 Wikimedians from all continents and walks of life, some of the most fascinating and important people on the planet. 
We were a big enough group to be stowed in our own party room, and with basket after basket of steamed dumplings piled onto the table, washed down with shared bottles of the ubiquitous Taiwan Beer, we had a crazy chaotic and fun time. I'm so glad I took an extra day after the conference to see a bit of the city. I think there's a lot more that I missed, but I'm glad that I got a taste.

Wikimania 2008

Inevitably the conversation at Din Something Something turned to Wikimania 2008 and possible venues for it. Right now, the only serious bid is from Alexandria, Egypt, home of the Biblioteca Alexandrina. This is a modern reconstruction of the famous Library of Alexandria, centre of learning for the ancient world and repository of all of its knowledge. The bid is to use the sleek, glamourous conference centre attached to the Biblioteca -- offered to us free of charge. You couldn't put together a more powerful metaphor -- Wikimedia being itself a project to capture the world's knowledge and share it with all humanity. Having our event on the African continent would also send a great message about the Foundation's commitment to making free information accessible to the developing world. North Africa is definitely the most accessible part of the continent for Europeans and North Americans -- many of whom balked at the price and distance of travel to Taipei. The big problem with planning Wikimanias (Wikimaniae?) in the past has been the short time-frame. Typically bids for the event don't start until after the previous one, and the bidding process takes several months. By the time a venue is selected, there may be only 6-8 months of planning time available. For an international conference of several hundred people, this is a stressful, breakneck schedule. The idea was floated to do a little stutter-step and put Wikimania on a more long-term planning schedule. The event committee would short-circuit the bidding process this year, and declare Alexandria the site for Wikimania 2008 unless other groups insisted on bids for other cities. Then, bidding for Wikimania 2009 would start in September or October, with a decision made by the end of 2007. This would give 20 months for the Wikimania 2009 organizing team to prepare. Most importantly, they'd be able to shadow the 2008 team, learn how the job is done, attend the 2008 conference, and present about their venue for the next year. I think this is a really good idea. The Biblioteca conference centre staff will be able to provide us some professional services, making the shorter schedule (still about 11 months) a lot easier. There are some concerns about fairness of the process, or appearance of same, but my guess is that if the 2009 bid process starts soon after, people won't have a big beef. There are two major concerns with going to Alexandria, though. First is Egypt's worrying problem with human rights -- specifically gay rights. Homosexual Egyptian men are subject to arrest, and the laws are occasionally enforced. One foreigner was arrested in 2002. That's enough to put Egypt on a no-visit list with some gay-rights groups. Depressing as it is, I think it might be too much to hope for a venue in a country with an absolutely spotless human rights record. Maybe parts of Scandinavia...? There's a cruel calculus needed to evaluate which national human rights abuses can be reluctantly overlooked, and which are unacceptable. It's also clear that my personal judgment might be clouded, since this particular human rights issue is not one I would be directly targeted by. My hope is that an official statement by the Foundation would make it clear to the Egyptian government that this kind of policy is not conducive to attracting overseas business and tourism. The second concern is that Egypt and the Arab world in general have only embryonic local Wikimedian communities.
While there are many people who contribute to Arabic Wikipedia, there is not a Wikimedia chapter in Egypt, nor are there regular meetups in the country or region. This might be a chicken-and-egg problem: choosing Alexandria would probably kick-start local groups. I hope that ar: Wikimedians pick up the mantle of leadership that the world Wikimedia community is offering them. tags:

30 May 2007

Paul van Tilburg: Master Project Take 2 and 3

I just realised that I forgot to post about take 2, so I’ll combine it with this post about take 3. The past five months I’ve been busy examining the effect of restriction on CCS with parallelism. We decided to do this in two steps. First (take 2) we considered interleaving of actions, and then (take 3) we added communication.

The case with only interleaving of actions went smoothly; it worked out as a nice extension of the case without parallelism. However, when we looked at the case with interleaving and communication, it started to become troubling. I ran into all kinds of complications, and the risk of failure I described in a previous post seemed to come true. However, I think we managed quite well to catch a large part of the problems and describe them, and also to offer possible solutions. In the end, I’m quite happy with the result.

So, today I can present the final version of my Master’s Thesis: “Finite Equational Bases for CCS with Restriction”. Yay! I have just submitted it for reproduction. The final presentation of my project will take place June 5, 2007 at 10:00 in the main building of the university, HG 6.29. If you’re interested and are able to attend, consider yourself invited. I want to thank Bas Luttik for his large amount of feedback, clear explanations and guidance; without him the project wouldn’t have worked out as well as it did. Almost finished, 6 days remaining…
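For readers who haven't met CCS: "interleaving" and "communication" are the two ways a parallel composition can proceed, as captured by the textbook expansion law. A minimal LaTeX sketch of the standard axioms (my illustration of the general theory, not a formula taken from the thesis):

    % pure interleaving: a and b are not complementary actions
    a.P \mid b.Q \;=\; a.(P \mid b.Q) \;+\; b.(a.P \mid Q)

    % with communication: complementary actions may also synchronise into a silent \tau step
    a.P \mid \bar{a}.Q \;=\; a.(P \mid \bar{a}.Q) \;+\; \bar{a}.(a.P \mid Q) \;+\; \tau.(P \mid Q)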

13 April 2007

Daniel Burrows: coq's -ness-nesses

In response to Stefano's reply to my post about Coq:

toy-ness

Usually such pieces of software are seen more as academic toys; seeing it promoted as geeky software is a joy to me.


I really hate to detract from your happiness -- but I definitely had the word "math" implicitly stuck before "geek toy". :) Most coders that I've met are at best indifferent to mathematics, and at worst outright hostile, so I doubt that coq will get far outside of small niches where the benefits are irresistible (security software, maybe? -- and of course pure math). I read up on this stuff despite it being utterly useless for anything I'm likely to be paid to do (unless I return to academia) because I'm a hopeless weenie loser who likes math. :P

programming-language-ness

Coq is not really a programming language (even if the CoqArt book says so, to attract the developer community); it is mainly a proof assistant, i.e. software in which you give definitions, state theorems, and prove them.


Well, this all depends on how you define programming language, which is one of those never-ending definitional questions. But using my general conception of what a programming language is, I actually have noticed two programming languages inside Coq.


  1. The more interesting one (IMO) is the proof language they call "Gallina" and the "Vernacular", in which you construct proofs by reducing your goal to a tautology. The goal here seems to be to compute a well-typed term in Coq's core calculus, and you generally construct programs interactively: Coq shows you the current program state (expressed as one or more goals and a set of local hypotheses) and you manipulate it by (e.g.) applying lemmas to yield new known facts. (A small sketch of such a proof appears after this list.)

    I think this language is fairly weak viewed as a programming language. It's completely unreadable, and the proofs generated don't correspond to the reasoning process one normally follows when working out a proof. I am less annoyed by it now that I've learned how to do forward reasoning :), but it seems like backwards reasoning is really pushed, and that just seems... well... backwards to me. There's also a strong tendency to use implicitly generated names for things and to just "know" what the state of the system after issuing a command will be; as far as I can tell, the only way to understand a Coq proof is to step through it in coqide.

    While it's nice to be able to "trust the kernel", it would also be nice to be able to "read a proof" directly. Among other things, their inscrutability makes me think that Coq proofs would be annoying to maintain over time. That's not much of a worry for material as settled as Principia Mathematica, but it seems likely if you're developing a new theory -- in particular, I would shudder (theatrically) at the mere thought of including embedded Coq proofs in my programs.

    Some additional brief comments on this were written by Nick Benton and put on the Web under the title of Machine Obstructed Proof.

  2. The other sense is that Coq directly embeds a strongly normalizing variant of System F (I think? -- at least the simply typed lambda calculus with some type parametricity) with primitive recursion over inductive structures. This isn't Turing-complete (of course!), but I would certainly call it a real programming language. (See the second sketch after this list.)

    As you note, though, this is less interesting unless you want to prove facts about your programs.
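To make the first of these concrete, here is a minimal, hypothetical tactic proof of my own devising (the lemma and hypothesis names are invented, not from Daniel's post). Each command transforms the goal state that Coq displays interactively:

    Lemma and_comm : forall A B : Prop, A /\ B -> B /\ A.
    Proof.
      intros A B H.           (* move the variables and the hypothesis into the context *)
      destruct H as [HA HB].  (* case analysis: split the conjunction A /\ B into HA and HB *)
      split.                  (* reduce the goal B /\ A to the two subgoals B and A *)
      exact HB.               (* first subgoal: B *)
      exact HA.               (* second subgoal: A *)
    Qed.

Note how the script alone gives little sense of the intermediate goals; that is exactly the readability complaint above.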
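And a sketch of the second language: an ordinary functional program written by primitive recursion over an inductive structure (again an illustrative example of mine). Coq accepts the definition because the recursive call is made on a structurally smaller argument, which is what keeps the language strongly normalizing rather than Turing-complete:

    Fixpoint double (n : nat) : nat :=
      match n with
      | O => O                   (* base case of the primitive recursion *)
      | S p => S (S (double p))  (* the recursive call is only on the smaller p *)
      end.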


OCaml-ness

Regarding the synergy of Coq with OCaml: yes, you can prove in Coq the correctness of an OCaml program, but actually no more than you can do with programs written in other programming languages


I was specifically thinking about Concoqtion; I have no idea how other projects compare, and I apologize if I incorrectly summarized it!

10 April 2007

Stefano Zacchiroli: coq as a geek toy

Coq: Happy to See it as a Geek Toy

Daniel, I'm really happy you called Coq a geek toy. Usually such pieces of software are seen more as academic toys; seeing it promoted as geeky software is a joy to me. I'm a developer of Matita, a competing system to Coq (though less mature as of today), so I'm happy to give some more info about Coq (and hence about Matita, which is based on the same logical foundation as Coq).

Coq is not really a programming language (even if the CoqArt book says so, to attract the developer community); it is mainly a proof assistant, i.e. software in which you give definitions, state theorems, and prove them. The added value of the system wrt plain pen and paper is that it guarantees that you won't be able to produce incorrect proofs (of course, you are still free to prove useless theorems :-) ). It is more expressive than predicate calculus; technically it is based on the Calculus of (Co)Inductive Constructions, but the most important differences are that you can give inductive definitions of predicates (or define recursive data structures) and the system will let you play with them freely, automatically constructing recursors on these data types. You can even define infinite data structures (using co-induction), mimicking the lazy types available in languages such as Haskell.

Regarding the synergy of Coq with OCaml: yes, you can prove in Coq the correctness of an OCaml program, but actually no more than you can do with programs written in other programming languages (see for example Why, Krakatoa, and Caduceus for some practical tools to prove properties about Java and C programs). Yet the synergy with OCaml is even more than that: once you have proved properties like the existence of, say, a prime number greater than a given prime number, you can ask Coq to generate for you (using a feature called extraction) an OCaml program which consumes a prime number as input and returns the next one. More generally, you can extract programs from (suitable) proofs.

Some info on the Debian side: thanks to smimou, Coq has been in Debian for a long time now, and Why/Krakatoa/Caduceus are coming soon. Happy hacking!
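To make the inductive/co-inductive definitions and extraction concrete, here is a small illustrative Coq sketch of my own (the names even, stream, and flip_bool are invented, not from Stefano's post):

    Require Extraction.  (* needed in recent versions of Coq; older ones had it built in *)

    (* An inductive predicate; Coq automatically generates its recursor, even_ind. *)
    Inductive even : nat -> Prop :=
      | even_O  : even O
      | even_SS : forall n, even n -> even (S (S n)).

    (* A co-inductive type of infinite streams, mimicking Haskell's lazy lists. *)
    CoInductive stream (A : Set) : Set :=
      | Cons : A -> stream A -> stream A.

    (* A tiny function, and the command that prints an equivalent OCaml program. *)
    Definition flip_bool (b : bool) : bool :=
      match b with
      | true => false
      | false => true
      end.

    Extraction flip_bool.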
