In standard mathematics, there is the notion of "equivalent definitions" of something. For example, we can define the Fibonacci sequence directly via recursion or by using a more performance-friendly auxiliary function. In HoTT, the concept of equivalent definitions extends to types as well; we can have equivalent definitions of the natural numbers, and then later prove they are equal, via univalence, by exhibiting an equivalence (in this case, a bijection) between them.

But as far as I can tell, dependently-typed proof assistants don't support equivalent definitions; they require you to make one definitive definition def x := A and then later provide characterizations (e.g. theorem hx : x = B) of that definition. These characterizations are secondary to the primary definition in the sense that x will always reduce to A, and never B, when checking for definitional equality.
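For concreteness, here is a minimal Lean 4 sketch of this pattern (the names are hypothetical and the proof is omitted): `fib` is the one definitive definition, and the performance-friendly version is only a secondary characterization.

```lean
-- The "one definitive definition": naive double recursion.
def fib : Nat → Nat
  | 0     => 0
  | 1     => 1
  | n + 2 => fib n + fib (n + 1)

-- A performance-friendly auxiliary function:
-- `fibAux n` computes `(fib n, fib (n + 1))` in linear time.
def fibAux : Nat → Nat × Nat
  | 0     => (0, 1)
  | n + 1 => let (a, b) := fibAux n; (b, a + b)

-- The characterization is secondary: when checking definitional
-- equality, `fib` reduces via the naive clauses above, never via
-- `fibAux`; this equation must be applied explicitly.
theorem fib_eq_fibAux (n : Nat) : fib n = (fibAux n).1 := by
  sorry -- by induction; omitted in this sketch
```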

A major downside of the "one definition, many characterizations" approach is that the actual definition of something becomes part of the public interface, so if the definition of a function is changed, it can potentially break many proofs that depend on it. This makes it very difficult to maintain backward compatibility in programming libraries, since you can't even change the implementation of a function. Additionally, sometimes objects in math are best thought of as having no definition, because the actual definition is much more complicated than the characterization (for example, the real numbers are in some sense less complicated than Cauchy sequences or Dedekind cuts).

Lean has a few main ways to make definitions "more equal", which all involve blocking computation:

  • Definitions can be tagged with @[irreducible], which discourages the elaborator from unfolding them (though it doesn't affect the kernel).
  • irreducible_def can be used; behind the scenes it uses the opaque keyword, producing a definition that never reduces unless you explicitly rewrite with its defining equation.
  • For types specifically, we can make a one-element structure which essentially just wraps another type and makes it so that they are no longer provably equal. Custom induction principles can then be added later.
  • In the future, there will be a module system in Lean which might address some of these issues.
  • This doesn't address my question, but Lean also permits using different definitions in kernel reduction and executable code via @[csimp].
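The first and third bullet points might look something like this in Lean (a hypothetical sketch, not library code):

```lean
-- `@[irreducible]` discourages the elaborator from unfolding
-- `mySeq`, though the kernel can still reduce it.
@[irreducible]
def mySeq (n : Nat) : Nat := 2 ^ n

-- A one-field structure wrapping `Nat`: `MyNat` is a genuinely
-- distinct type, not provably equal to `Nat`.
structure MyNat where
  val : Nat

-- A custom induction principle can be added later, delegating
-- to the recursor of the underlying type.
def MyNat.recAux {motive : MyNat → Sort u}
    (zero : motive ⟨0⟩)
    (succ : ∀ n, motive ⟨n⟩ → motive ⟨n + 1⟩) :
    ∀ m : MyNat, motive m
  | ⟨n⟩ => Nat.rec (motive := fun n => motive ⟨n⟩) zero succ n
```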

This works fine in Lean, but I was wondering about possible alternative approaches. You could even imagine a system where users are free to swap out the definitional equalities dynamically (although I have no idea how this could be done in a way compatible with the kernel) and use whichever is most convenient for a given problem. Mathematicians sometimes say to "identify $A$ with $B$" for the scope of a given problem, and maybe this is a way to support some limited version of that?

Basically, I'm just curious what the possible approaches are to put characterizations on a more equal footing with the original definition. In some non-DTT proof assistants, this is a non-issue; for example, Metamath does not support computation, so characterizations are on an equal footing with definitions. What can DTT proof assistants do about it? Does it matter whether the proof assistant supports HoTT?


2 Answers


For what it's worth, in Andromeda 2, definitions are not a primitive concept. If you want to define x of type A to be equal to e, you postulate two new rules:

```
rule x : A
rule x_def : x == e : A
```

Nothing prevents you from having a second rule

```
rule x_def' : x == e' : A
```

The price you pay for this is twofold. First, it's your problem if e == e' is inconsistent. Second, you have to tell the equality checker which of these to use (or construct equality proofs with your bare hands). On the other hand, the equality checker can use rules locally, so you can direct it to use x_def in one part of your code and x_def' in the other.

  • This looks pretty similar to how Metamath does it (definitions are just axioms), although Andromeda is dependently typed while Metamath is not. When you say "equality checker", do you mean definitional equality, and if so, does this mean you can add definitional equalities locally? That would be pretty remarkable if it were possible.
  • See An extensible equality checking algorithm for dependent type theories.

> A major downside of the "one definition, many characterizations" approach is that the actual definition of something becomes part of the public interface, so if the definition of a function is changed, it can potentially break many proofs that depend on it. This makes it very difficult to maintain backward compatibility in programming libraries, since you can't even change the implementation of a function.

It really depends on how much you care about definitional equality. In contributions to Lean's Mathlib library, particularly towards the more advanced, abstract end, relying on the actual definition of anything is strongly discouraged.

The convention is that when you define something, you immediately provide one or more lemmas giving characterizations of it. Usually one of those lemmas will be true by definition (rfl) and the others will involve a proof, but authors of downstream code are not supposed to know or care which is the rfl-lemma. These lemmas form the "public API", while the actual definition is an implementation detail, and unfolding the actual definition directly in downstream code is generally discouraged ("breaking the API boundary"); these are concepts Mathlib has adopted from software engineering. Instead of unfolding the definition, you should use one of the API lemmas, even if you wrote the definition yourself and you know perfectly well that the lemma you're using is just restating that definition (because things might not stay that way).
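Schematically, the convention looks like this (hypothetical names, not actual Mathlib declarations):

```lean
-- Implementation detail: downstream code should not unfold this.
def double (n : Nat) : Nat := n + n

-- Public API: two characterizations. One happens to hold by `rfl`,
-- but users are not supposed to know or care which one.
theorem double_def (n : Nat) : double n = n + n := rfl

theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  rw [double_def]; omega
```

If the definition is later changed to `2 * n`, only the bodies of these two lemmas need adjusting (which one is `rfl` swaps); downstream code that used the lemmas is unaffected.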

The aim of this convention is precisely to minimize breakage if the definition needs to be changed: if you change your mind about which characterization of your gadget is "the" definition, then as long as both characterizations of it are still true and still present in the library as lemmas, it doesn't matter for downstream code that you have changed which of those lemmas is true by rfl.

As a proof that this works: we just changed the definition of what "sum of an infinite series" means in Mathlib – a definition that is utterly fundamental to huge swathes of mathematics (all of analysis, much of topology, differential geometry, analytic number theory, etc). The new definition involves an optional parameter, and when this is left at the default value, the new definition is equivalent to the old one but not definitionally so. The remarkable thing was how little breakage this caused; although the change was a non-trivial task, it wasn't an impossible one, precisely because most of the downstream code was using the public API rather than unfolding the definitions.

David Loeffler is a new contributor to this site.
