In standard mathematics, there is the notion of "equivalent definitions" of something. For example, we can define the Fibonacci sequence directly via recursion or by using a more performance-friendly auxiliary function. In HoTT, the concept of equivalent definitions extends to types as well; we can have equivalent definitions of the natural numbers, and then later prove they are equal by exhibiting an equivalence (in this case, a bijection) between them.
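For instance, here is a minimal Lean sketch of the two Fibonacci definitions (the names `fib`, `fibAux`, and `fibFast` are mine):

```lean
-- Direct recursive definition (exponential time if compiled naively).
def fib : Nat → Nat
  | 0 => 0
  | 1 => 1
  | n + 2 => fib n + fib (n + 1)

-- Equivalent linear-time definition via an auxiliary function that
-- carries a pair of consecutive Fibonacci numbers.
def fibAux : Nat → Nat × Nat
  | 0 => (0, 1)
  | n + 1 =>
    let p := fibAux n
    (p.2, p.1 + p.2)

def fibFast (n : Nat) : Nat := (fibAux n).1
```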
But as far as I can tell, dependently-typed proof assistants don't support equivalent definitions; they require you to make one definitive definition `def x := A` and then later provide characterizations (e.g. `theorem hx : x = B`) of that definition. These characterizations are secondary to the primary definition in the sense that `x` will always reduce to `A`, and never to `B`, when checking for definitional equality.
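Concretely, with the Fibonacci sketch above (the `sorry` is a placeholder for a real induction):

```lean
-- The primary definition reduces, so small closed instances hold by `rfl`:
example : fib 5 = 5 := rfl

-- `fibFast` is only a characterization: the equality has to be proved,
-- and it is never consulted during definitional-equality checking.
theorem fib_eq_fibFast (n : Nat) : fib n = fibFast n := by
  sorry -- provable by strengthening to `fibAux n = (fib n, fib (n + 1))`
```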
A major downside of the "one definition, many characterizations" approach is that the actual definition of something becomes part of the public interface: changing a function's definition can break any proof that depended on how it unfolds. This makes it very difficult to maintain backward compatibility in programming libraries, since you can't change even the implementation of a function. Additionally, some mathematical objects are best thought of as having no definition at all, because any concrete definition is much more complicated than the characterization (for example, the real numbers are in some sense less complicated than Cauchy sequences or Dedekind cuts).
Lean has a few main ways to make definitions "more equal", which all involve blocking computation:
- Definitions can be tagged with `@[irreducible]`, which discourages the elaborator from unfolding them, though it doesn't affect the kernel (first sketch below).
- `irreducible_def` can be used, which uses the `opaque` keyword behind the scenes to make something that won't ever reduce without you explicitly performing a rewrite (second sketch below).
- For types specifically, we can make a one-element structure which essentially just wraps another type and makes it so that they are no longer provably equal. Custom induction principles can then be added later (also in the second sketch below).
- In the future, there will be a module system in Lean which might address some of these issues.
- This doesn't address my question, but Lean also permits using different definitions in kernel reduction and executable code via `@[csimp]` (third sketch below).
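To illustrate the `@[irreducible]` point, a minimal sketch (`secret` is a made-up name):

```lean
@[irreducible] def secret : Nat := 42

-- The elaborator now refuses to unfold `secret`, so this fails:
-- example : secret = 42 := rfl

-- But the attribute only guides the elaborator; the kernel ignores it,
-- and the elaborator can be told to unfold everything anyway:
example : secret = 42 := by with_unfolding_all rfl
```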
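For `irreducible_def` (from Mathlib) and the wrapper-structure trick, something like the following sketch; `frobnicate` and `MyNat` are made-up names, and I believe the autogenerated equation is called `frobnicate_def`:

```lean
import Mathlib.Tactic.IrreducibleDef

-- `frobnicate` never unfolds on its own; you must rewrite explicitly
-- with the autogenerated equation `frobnicate_def`.
irreducible_def frobnicate (n : Nat) : Nat := n + 1

example (n : Nat) : frobnicate n = n + 1 := by rw [frobnicate_def]

-- A one-field structure wrapping `Nat`: `MyNat` is a genuinely new type,
-- neither definitionally equal to `Nat` nor provably equal to it.
structure MyNat where
  val : Nat

-- A custom induction principle, added after the fact:
theorem MyNat.induction {P : MyNat → Prop} (zero : P ⟨0⟩)
    (succ : ∀ n : Nat, P ⟨n⟩ → P ⟨n + 1⟩) : ∀ m : MyNat, P m
  | ⟨n⟩ => Nat.rec (motive := fun k => P ⟨k⟩) zero succ n
```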
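And for `@[csimp]`, continuing the Fibonacci sketch (reusing the `fib_eq_fibFast` lemma stated earlier):

```lean
-- Tell the compiler to run `fibFast` wherever `fib` is called;
-- kernel reduction still uses the recursive definition.
@[csimp] theorem fib_eq_fibFast_fun : fib = fibFast :=
  funext fib_eq_fibFast

#eval fib 64 -- compiled code evaluates this via `fibFast`, so it's fast
```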
This works fine in Lean, but I was wondering about possible alternative approaches. You could even imagine a system where users are free to swap out the definitional equalities dynamically (although I have no idea how this would be done in a way compatible with the kernel) and use whatever is most convenient for a given problem. Mathematicians sometimes say to "identify $A$ with $B$" for the scope of a given problem, and maybe this is a way to support some limited version of that?
Basically, I'm just curious what the possible approaches are to put characterizations on more equal footing with the original definition. In some non-DTT proof assistants, this is a non-issue; for example, Metamath does not support computation, so characterizations are on equal footing with definitions. What can DTT proof assistants do about it? Does it matter whether the proof assistant supports HoTT?