practically interesting in its own right.
[...] '.dot' notation (sorry Andreas). In the past I used it primarily
out of desperation when formalizing category-theoretic stuff. I dotted
the [...] consumption below the amount of memory my computer had. In
this case I [...] irrelevant equation in a type. Later, due to
improvements in Agda, [...] a more standard type theory without '.dot'.
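For readers who haven't seen the notation, here is a minimal sketch
(example mine, not from the original message) of the pattern described
above: a law stored as a dotted, hence irrelevant, record field, so the
type checker never normalizes or compares the stored proofs.

    open import Relation.Binary.PropositionalEquality using (_≡_)

    -- A semigroupoid whose associativity law is a dotted (irrelevant)
    -- field: Agda never compares two stored `assoc` proofs, which can
    -- cut type-checking memory use when composites get large.
    record Semigroupoid : Set₁ where
      field
        Ob     : Set
        Hom    : Ob → Ob → Set
        _∘_    : ∀ {x y z} → Hom y z → Hom x y → Hom x z
        .assoc : ∀ {w x y z} (f : Hom y z) (g : Hom x y) (h : Hom w x)
               → (f ∘ g) ∘ h ≡ f ∘ (g ∘ h)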
[...] in being able to turn on impredicative quantification for Prop.
Agda [...] me than having a different function space. There is also a
nice separation: if you don't like it, don't use it.
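As a point of reference, here is a minimal sketch (mine; it does not
show impredicativity, only Prop as it exists today): a Prop-valued
truncation whose proofs are all definitionally equal, with no
annotations needed at use sites.

    {-# OPTIONS --prop #-}

    -- Any two proofs of ∥ A ∥ are definitionally equal, so they can
    -- never influence the result of a computation in Set.
    data ∥_∥ (A : Set) : Prop where
      ∣_∣ : A → ∥ A ∥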
[...] programmer and a proof engineer. As a programmer I would like to
have [...] program. I think it is the same with proofs: different
approaches [...] optimize/consider the trade-offs (incl. readability or
writability (is that a word?)). [...] as well as type-checking
performance, but both are important.
Post by Andreas Nuyts
Dear everyone,
Another dimension (or is it again the compile/run-time dimension?) is
that of type-level vs. value-level annotations. Compare the irrelevant
function type `IrrFun = .(x : A) -> B x` with the relevant one
`RelFun = (x : A) -> B x`. In this view, RelFun may be inhabited by
lambdas that are in fact known to be irrelevant and whose argument can
therefore be erased, both at compile time and at run time.
My (draft of a) suggestion is the following: when the Agda user types a
lambda-expression `λ (x : A) -> b[x]`, it is inferred from `b[x]`
whether this is an irrelevant lambda. Of course, if the type given by
the user is `IrrFun`, then this only type-checks if the lambda is
inferred to be irrelevant. However, if the type given by the user is
`RelFun` but the lambda is irrelevant, then this fact is remembered in
the form of an internal annotation on the lambda, as it allows erasure
of the argument both at compile and at runtime. Without the annotation
on the lambda, erasure could only happen after beta-reducing the lambda,
which is probably less efficient.
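To make the suggested inference concrete, here is a tiny hypothetical
example (names mine): a lambda whose ascribed type is relevant but
whose body ignores its argument, which is exactly the case where the
internal annotation could be attached.

    open import Agda.Builtin.Nat using (Nat; zero)

    -- constZero has the relevant type Nat → Nat, but its body never
    -- mentions x; under the proposal, elaboration would notice this
    -- and internally mark the lambda as irrelevant, licensing erasure
    -- of the argument.
    constZero : Nat → Nat
    constZero = λ x → zero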
This inference behaviour would be in accordance with what happens if
you explicitly coerce a function `f : IrrFun` to `RelFun` by
lambda-expanding. It then becomes `λ (x : A) -> f _`.
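Here is a small self-contained sketch of the two function spaces and
the η-expanding coercion just mentioned (example mine; non-dependent
for simplicity).

    module CoerceSketch (A B : Set) where

      IrrFun : Set
      IrrFun = .(x : A) → B   -- the argument is irrelevant

      RelFun : Set
      RelFun = (x : A) → B    -- ordinary function space

      -- Coercing an irrelevant function to a relevant one by
      -- η-expansion; using the relevant variable x in an irrelevant
      -- position is always allowed.
      relax : IrrFun → RelFun
      relax f = λ x → f x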
When it comes to inferring type-level annotations: this is a problem
quite similar to inferring universe levels. When a user does not specify
a universe level, do you want to infer the smallest one, or do you want
to complain? (I think Agda currently complains about an unresolved meta,
although C-c C-= will fill out the smallest option?) Similarly, when a
user does not specify a modality, do you want to infer the most
informative one (irrelevance, if possible), or do you want to complain?
(I don't have any answers here, just pointing out the similarity of the
problems.)
Best regards,
Andreas Nuyts
(not the Andreas mentioned in Jesper's emails)
Dear Jon,
Thanks for your answer. I see these two not so much as two different
kinds of proof irrelevance, but as two separate dimensions: compile-time
vs. run-time irrelevance, and annotated vs. inferred irrelevance. For
example, Agda has annotated compile-time irrelevance (irrelevant
functions and Prop) and inferred run-time irrelevance (type erasure,
forcing analysis, detection of unused arguments); Andreas recently added
annotated run-time irrelevance, but there is little inferred
compile-time irrelevance (the only thing that comes to mind is the unit
type, which is automatically proof-irrelevant because of eta).
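As a minimal sketch of that last point (example mine): thanks to η for
records, any two elements of the unit type are definitionally equal, so
its proof irrelevance comes for free.

    open import Agda.Builtin.Equality using (_≡_; refl)

    record ⊤ : Set where
      constructor tt

    -- η for records: x and y both η-expand to tt, so refl checks.
    ⊤-irrelevant : (x y : ⊤) → x ≡ y
    ⊤-irrelevant x y = refl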
It seems to me that you like a combination of annotated compile-time
irrelevance and inferred run-time irrelevance; is that correct?
What Wolfram seemed to suggest, however, is to infer more *compile-time*
irrelevance. For example, we could infer that the empty type is
proof-irrelevant and automatically discard any equation at the empty
type during conversion (i.e. have eta for the empty type). Or we could
detect that the identity type has a single constructor and (assuming
--with-K) replace all proofs of identity by primTrustMe during
elaboration.
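For concreteness, a sketch of that second idea (example mine):
primTrustMe is an existing builtin that is erased at run time and
reduces to refl whenever the two sides are definitionally equal, so
under --with-K an elaborator could in principle substitute it for any
identity proof.

    open import Agda.Builtin.Equality using (_≡_; refl)
    open import Agda.Builtin.TrustMe using (primTrustMe)

    -- A proof whose content is never inspected: primTrustMe reduces to
    -- refl when the two sides are definitionally equal and is erased
    -- by the compiler.
    cheap : {A : Set} (x : A) → x ≡ x
    cheap x = primTrustMe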
The thing is, such automatically inferred irrelevance wouldn't work
very well in some cases, and it would certainly not help for the fancy
applications of irrelevance you describe (enforcing parametricity or
coherence). However, it *would* give some benefits of irrelevance to
people who use Agda but don't want to add extra irrelevance annotations,
or indeed even to people who don't know about irrelevance at all. So it
has a much greater potential impact than the annotated style of
irrelevance.
Of course, much depends on how often we could actually detect
irrelevance automatically, which in turn would depend greatly on the
concrete codebase. But perhaps doing the experiment could give us a
better idea.
-- Jesper
On Mon, Oct 29, 2018 at 1:33 AM Jon Sterling wrote:
Dear Jesper,
I feel that these are two separate issues getting conflated -- on the
one hand, a compiler may infer that something is irrelevant, and then
erase it in order to achieve a more efficient execution. On the other
hand, irrelevance is part of the specification of some function, and
whether or not it executes in an irrelevant or erased way is really
beside the point (of course, in such a case, the compiler *should*
erase it because it is low-hanging fruit).
The latter (semantic) kind of proof irrelevance could be used, for
example:
- to specify that some function is parametric in its indices: for
instance, operations on vectors (see the sketch after this list)
- relatedly, to force *coherence*: for instance, when defining some
interpretation of some language, you can use proof irrelevance + some
inversion lemmas to formalize the old pattern (from Streicher) of
defining your function by recursion on the labels of the derivation
rather than on the derivation itself. In old-school math, this required
defining a partial function and then establishing that it terminates,
but in very fancy type theory we can use proof irrelevance to do this
simultaneously and get a total function all at once.
- and of course, to obtain more definitional equivalences where possible
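To illustrate the first item, here is a small sketch (example and names
mine) of an irrelevant precondition: by typing alone, the function is
parametric in *how* the precondition was established.

    open import Agda.Builtin.Nat using (Nat; zero; suc)

    data ⊥ : Set where
    record ⊤ : Set where

    NonZero : Nat → Set
    NonZero zero    = ⊥
    NonZero (suc _) = ⊤

    -- The dot makes the proof irrelevant: pred′ can never depend on
    -- which proof of NonZero n it is given, and the proof is erasable.
    -- Absurd patterns remain legal on irrelevant arguments.
    pred′ : (n : Nat) → .(NonZero n) → Nat
    pred′ zero    ()
    pred′ (suc n) _ = n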
Let me unleash a kind of important point: as I mentioned, in all these
cases, erasure should obviously be done, but there are cases where a
good compiler would perform erasure even when it is not possible to type
the term using the irrelevant types. Something might be erasable for
global reasons. Anyway, it's not so important to me that Agda catch all
of these cases, because I'm not using Agda for executing code (but
others might be).
What I'm saying is that inferring when something is computationally
irrelevant must be treated as orthogonal to whether something can be
typed with a proof irrelevance modality. Computational irrelevance is a
property of code in the compilation target language (which could be
anything), not a property of code in the Agda language.
We should take compilation seriously, and not conflate it with
typechecking and elaboration. Dependently typed languages have *two*
computational semantics: the one which generates definitional
equivalence of terms, and the one which executes.
Best,
Jon
Post by Jesper Cockx
Hi Agda people,
In the comments on issue #3334
<https://github.com/agda/agda/issues/3334#issuecomment-433730080>,
Wolfram wrote:

> I consider it counterproductive to be able to declare irrelevance,
> and even more counterproductive to have to do it for efficiency
> reasons, because that encourages premature optimisations. The
> compiler (including the type checker) should detect it whenever
> possible, and optimise it away as soon as possible.
Often the fact that a certain term is irrelevant is obvious to *you* as
the Agda programmer, but there's no way for Agda to figure this out
from the code. If it's possible that you will use the term at any point
in the future, Agda cannot (and should not) erase it. So any automatic
detection of irrelevance would probably be very disappointing. (The
alternative, where Agda would eagerly erase things without asking and
then complain later when you try to use them, is arguably worse.)
Another advantage of annotating irrelevant things is that Agda can warn
you when you use them accidentally in a non-erased position, instead of
removing the irrelevance silently and causing hard-to-explain
performance regressions.
But I realized this is based mostly on speculation and not hard facts.
Historically, Agda has focused on the annotated style of
proof-irrelevance (Andreas' irrelevant functions and irrelevant fields,
and more recently my implementation of Prop). Maybe inferred proof
irrelevance would work better than expected?
So now my question to *you*, the Agda community, is this: do you prefer
Agda's current style of annotated proof-irrelevance, or would you
rather have Agda infer things for you (and perhaps fail to do so in
some cases)? I won't promise I will remove Prop and focus all my
attention on inferring irrelevance even if everyone votes for the
latter option, but it would be interesting to know going forward what
you think.
-- Jesper
_______________________________________________
Agda mailing list
https://lists.chalmers.se/mailman/listinfo/agda