Joachim Breitner: How is coinduction the dual of induction?
Earlier today, I demonstrated how to work with coinduction in the theorem provers Isabelle, Coq and Agda, with a very simple example. This reminded me of a discussion I had in Karlsruhe with my then-colleague Denis Lohner: If coinduction is the dual of induction, why do the induction principles look so different? I like what we observed there, so I'd like to share it.
The following is mostly my naive understanding of coinduction, based on what I observe in the implementation in Isabelle. I am sure that a different, more categorical presentation of datatypes (as initial resp. terminal objects in some category of algebras) makes the duality more obvious, but that does not necessarily help the working Isabelle user who wants to make sense of coinduction.
Inductive lists
I will use the usual polymorphic list data type as an example. So on the one hand, we have normal, finite inductive lists:
datatype 'a list = nil | cons (hd : 'a) (tl : "'a list")
with the well-known induction principle that many of my readers know by heart (syntax slightly un-isabellized):
P nil → (∀ x xs. P xs → P (cons x xs)) → (∀ xs. P xs)
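To see this principle in action, here is a small illustrative example in the same un-isabellized spirit (the function app and the lemma are my additions, not from the original post): a structurally recursive function on these finite lists, together with a proof by that induction.

primrec app :: "'a list ⇒ 'a list ⇒ 'a list" where
  "app nil ys = ys"
| "app (cons x xs) ys = cons x (app xs ys)"

(* a proof by the induction principle above, with P xs being "app xs nil = xs" *)
lemma "app xs nil = xs"
  by (induct xs) simp_all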
Coinductive lists
In contrast, if we define our lists coinductively to get possibly infinite, Haskell-style lists, by writing
codatatype 'a llist = lnil | lcons (hd : 'a) (tl : "'a llist")
we get the following coinduction principle:
(∀ xs ys.
    R xs ys → (xs = lnil) = (ys = lnil)
            ∧ (xs ≠ lnil → ys ≠ lnil →
               hd xs = hd ys ∧ R (tl xs) (tl ys)))
→ (∀ xs ys. R xs ys → xs = ys)
This is less scary than it looks at first. It tells you: if you give me a relation R between lists which implies that either both lists are empty or both lists are non-empty, and furthermore, if both are non-empty, that they have the same head and tails related by R, then any two lists related by R are actually equal.

If you think of the infinite list as a series of states of a computer program, then this is nothing other than a bisimulation.
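To make the infinite part concrete (this example is mine, not from the original post): the codatatype really does have infinite inhabitants, which can be defined by corecursion, for instance

primcorec repeat :: "'a ⇒ 'a llist" where
  "repeat x = lcons x (repeat x)"

To prove, say, repeat x = tl (repeat x) with the rule above, one would pick R xs ys ≡ (∃x. xs = repeat x ∧ ys = tl (repeat x)): both sides are non-empty, both heads are x, and the tails are again related by R, so the two lists are equal.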
So we have two proof principles, both of which make intuitive sense. But how are they related? They look very different! In one, we have a predicate P, in the other a relation R, to point out just one difference.
Relation induction
To see how they are dual to each other, we have to recognize that both these theorems are actually specializations of a more general (co)induction principle.
The datatype declaration automatically creates a relator:
rel_list :: ('a ⇒ 'b ⇒ bool) ⇒ 'a list ⇒ 'b list ⇒ bool
The definition of rel_list R xs ys is that xs and ys have the same shape (i.e. length), and that the corresponding elements are pairwise related by R. You might have defined this relation yourself at some time, and if so, you probably introduced it as an inductive predicate. So it is not surprising that the following induction principle characterizes this relation:
Q nil nil
→ (∀ x xs y ys. R x y → Q xs ys → Q (cons x xs) (cons y ys))
→ (∀ xs ys. rel_list R xs ys → Q xs ys)
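For concreteness, such a hand-written definition could look as follows (the name my_rel_list is my own; the real rel_list is generated by the datatype command), and the induction rule that the inductive command derives for it has essentially the shape of the rule just shown:

inductive my_rel_list :: "('a ⇒ 'b ⇒ bool) ⇒ 'a list ⇒ 'b list ⇒ bool" for R where
  "my_rel_list R nil nil"
| "R x y ⟹ my_rel_list R xs ys ⟹ my_rel_list R (cons x xs) (cons y ys)"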
Note how similar this relation induction rule is in shape to the normal induction for lists above! And indeed, if we choose Q xs ys ≡ (P xs ∧ xs = ys) and R x y ≡ (x = y), then we obtain exactly that. In that sense, the relation induction is a generalization of the normal induction.
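Spelled out, this instantiation turns the rule into

(P nil ∧ nil = nil)
→ (∀ x xs y ys. x = y → (P xs ∧ xs = ys) → (P (cons x xs) ∧ cons x xs = cons y ys))
→ (∀ xs ys. rel_list (=) xs ys → P xs ∧ xs = ys)

and since rel_list (=) is just equality of lists, this simplifies, up to trivial rewriting, to the normal induction principle from the beginning.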
Relation coinduction
The same observation can be made in the coinductive world. Here, as well, the codatatype declaration introduces a function

rel_llist :: ('a ⇒ 'b ⇒ bool) ⇒ 'a llist ⇒ 'b llist ⇒ bool

which relates lists of the same shape with related elements, only that this one also relates infinite lists, and is therefore a coinductive relation. The corresponding rule for proof by coinduction is not surprising and should remind you of bisimulation, too:
(∀ xs ys.
    R xs ys → (xs = lnil) = (ys = lnil)
            ∧ (xs ≠ lnil → ys ≠ lnil →
               Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
→ (∀ xs ys. R xs ys → rel_llist Q xs ys)
It is even more obvious that this is a generalization of the standard coinduction principle shown above: Just instantiate Q with equality, which turns rel_llist Q into equality on the lists, and you have the theorem above.
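Just as on the inductive side, one could write this relator by hand, now as a coinductive predicate (again, the name my_rel_llist is mine; the real rel_llist is generated by the codatatype command), and the coinduction rule that the coinductive command derives for it is essentially the rule above:

coinductive my_rel_llist :: "('a ⇒ 'b ⇒ bool) ⇒ 'a llist ⇒ 'b llist ⇒ bool" for Q where
  "my_rel_llist Q lnil lnil"
| "Q x y ⟹ my_rel_llist Q xs ys ⟹ my_rel_llist Q (lcons x xs) (lcons y ys)"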
The duality
With our induction and coinduction principles generalized to relations, a duality suddenly emerges: If you turn around the implication in the conclusion of one, you get the conclusion of the other. This is an example of "cosomething is something with arrows reversed".
But what about the premise(s) of the rules? What happens if we turn around the arrow here? Although slightly less immediate, it turns out that they are the same as well. To see that, we start with the premise of the coinduction rule, reverse the implication, and then show it to be equivalent to the two premises of the induction rule:
(∀ xs ys.
    R xs ys ← (xs = lnil) = (ys = lnil)
            ∧ (xs ≠ lnil → ys ≠ lnil →
               Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
= { case analysis (the other two cases are vacuously true) }
  (∀ xs ys.
     xs = lnil → ys = lnil →
     (R xs ys ← (xs = lnil) = (ys = lnil)
             ∧ (xs ≠ lnil → ys ≠ lnil →
                Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys))))
∧ (∀ xs ys.
     xs ≠ lnil → ys ≠ lnil →
     (R xs ys ← (xs = lnil) = (ys = lnil)
             ∧ (xs ≠ lnil → ys ≠ lnil →
                Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys))))
= { simplification }
  (∀ xs ys. xs = lnil → ys = lnil → R xs ys)
∧ (∀ x xs y ys. R (cons x xs) (cons y ys) ← (Q x y ∧ R xs ys))
= { more rewriting }
  R nil nil
∧ (∀ x xs y ys. Q x y → R xs ys → R (cons x xs) (cons y ys))

These are exactly the two premises of the relation induction rule, only with the letters swapped: here R is the relation on lists and Q the relation on elements, whereas in the induction rule it is the other way around.
Conclusion
The coinduction rule is not the direct dual of the induction rule, but both are specializations of more general, relational proof methods, where the duality is clearly present.
More generally, this little excursion shows that it is often beneficial to think of types less as sets and more as relations: this way of thinking is surprisingly fruitful, and has led to proofs of parametricity, free theorems, and other nice things.