Discussion:
[Agda] difficult termination
Sergei Meshveliani
2017-11-20 15:47:13 UTC
Dear all,

I am thinking about the practical possibilities for termination proofs in Agda.

In my current practice with computer algebra, all termination proofs are
as follows.

1) Syntactic: Agda's built-in rule, which applies when, in each recursive
call of the function, some argument is syntactically smaller (in a
certain "subterm" ordering) than in the current call.

2) By a counter.

For example, for a certain recursively defined function f : Bin -> Bin,
I prove termination by rewriting f as follows:

f x = aux x (toNat x) ≤-refl
  where
  aux : (x : Bin) -> (cnt : Nat) -> toNat x ≤ cnt -> Bin
  aux x cnt xN≤cnt =
  ...

When cnt is 0, it follows from xN = toNat x ≤ cnt that
x ≡ 0#, and this is the base case of the recursion.
Otherwise the LHS is
aux x (suc cnt) xN≤cnt',

and the recursive call is
aux y cnt yN≤cnt,

where cnt is reduced by one, and y is such that xN ∸ yN ≥ 1 for
yN = toNat y.
So the first argument decreases by at least one, and yN ≤ cnt is
derived from xN≤cnt'.
This recursive call of aux is what convinces Agda of termination.
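To illustrate the counter technique outside Agda, here is a small sketch
in Haskell. The function halvings is a hypothetical stand-in for f: it
counts how many halvings reduce its argument to 0, using Integer in
place of Bin. The argument itself bounds the recursion depth, so it
serves as the counter cnt; in Agda the proof toNat x ≤ cnt would
accompany it, whereas Haskell performs no termination check at all.

```haskell
-- Sketch of the counter ("fuel") technique, with a hypothetical
-- example function.  Integer stands in for Bin.
halvings :: Integer -> Integer
halvings x0 = aux x0 x0          -- fuel := x0, an upper bound on the depth
  where
    aux :: Integer -> Integer -> Integer
    aux _ 0   = 0                -- fuel 0 forces x == 0: base of recursion
    aux x cnt
      | x == 0    = 0
      | otherwise = 1 + aux (x `div` 2) (cnt - 1)  -- both x and cnt shrink
```

For x ≥ 1 the result is the bit length of x, which never exceeds x
itself, so the fuel x0 is always sufficient.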

Probably, such proofs by a counter are possible only for primitive
recursive functions.
For example, for the above f, (toNat x) is an upper bound on the
number of recursive calls of aux.

Am I mistaken?

Further, Ackermann's function

ack : ℕ → ℕ → ℕ
ack 0 m = suc m
ack (suc n) 0 = ack n 1
ack (suc n) (suc m) = ack n (ack (suc n) m)

is not primitive recursive. But Agda sees that it terminates by the
syntactic comparison rule (the arguments decrease lexicographically).
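For comparison, the same definition can be transcribed into Haskell
(a sketch only; unlike Agda, Haskell performs no termination check).
The termination argument is lexicographic in the pair of arguments:
either n decreases, or n stays fixed and m decreases.

```haskell
-- Ackermann's function, transcribed from the Agda definition above,
-- with Integer in place of ℕ.
ack :: Integer -> Integer -> Integer
ack 0 m = m + 1                        -- ack 0 m             = suc m
ack n 0 = ack (n - 1) 1                -- ack (suc n) 0       = ack n 1
ack n m = ack (n - 1) (ack n (m - 1))  -- ack (suc n) (suc m) = ack n (ack (suc n) m)
```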

Can anybody give a _simple_ example of a terminating algorithm which
is difficult or impossible to write in Agda together with a
termination proof that Agda accepts?

Regards,

------
Sergei
Sergei Meshveliani
2017-11-20 17:03:08 UTC
Post by Sergei Meshveliani
Dear all,
I am thinking of the practical termination proof possibility in Agda.
In my current practice with computer algebra, all termination proofs are
as follows.
1) Syntactic: the built-in rule of Agda when in each recursive call of
the function some argument is syntactically smaller (in a certain
"subterm" ordering) than in the current call.
2) By the counter.
[..]
Further, Ackermann's function
ack : ℕ → ℕ → ℕ
ack 0 m = suc m
ack (suc n) 0 = ack n 1
ack (suc n) (suc m) = ack n (ack (suc n) m)
is not primitive recursive. But Agda sees that it terminates by the
syntactic comparison rule.
Can anybody give a _simple_ example of a terminating algorithm which
is difficult or impossible to write in Agda together with a
termination proof that Agda accepts?
Such an example could be the search for an n : ℕ satisfying
some subtle property P:
a non-constructive proof of ∃ \(n : ℕ) → P n is known,
but no bound on such an n is known.
I wonder what such an example might be.

Regards,

------
Sergei
Nils Anders Danielsson
2017-11-20 17:57:50 UTC
Post by Sergei Meshveliani
Such an example could be the search for an n : ℕ satisfying
some subtle property P:
a non-constructive proof of ∃ \(n : ℕ) → P n is known,
but no bound on such an n is known.
I wonder what such an example might be.
It should not be possible to implement something with the following type
in Agda (if it is, then that is a bug):

(P : ℕ → Set) → (∀ n → Dec (P n)) → ¬ ¬ ∃ P → ∃ P

This is (one form of) Markov's principle.

Here is one implementation that Agda rejects:

markov : (P : ℕ → Set) → (∀ n → Dec (P n)) → ¬ ¬ ∃ P → ∃ P
markov P dec _ = search 0
  where
  search : ℕ → ∃ P
  search n with dec n
  ... | yes p = n , p
  ... | no _  = search (suc n)
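A Haskell transcription may make the contrast concrete (searchNat is a
hypothetical helper, a sketch only). Haskell has no termination checker,
so the definition is accepted; the function terminates precisely when
the predicate has a witness, which is the guarantee Agda cannot see
without a bound.

```haskell
-- Unbounded search over the naturals, transcribed from the rejected
-- Agda definition.  Diverges when no witness exists.
searchNat :: (Integer -> Bool) -> Integer
searchNat p = go 0
  where
    go n | p n       = n          -- found the least witness
         | otherwise = go (n + 1) -- keep searching; no static bound
```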
--
/NAD
Sergei Meshveliani
2017-11-20 18:57:18 UTC
Post by Nils Anders Danielsson
Post by Sergei Meshveliani
Such an example could be the search for an n : ℕ satisfying
some subtle property P:
a non-constructive proof of ∃ \(n : ℕ) → P n is known,
but no bound on such an n is known.
I wonder what such an example might be.
It should not be possible to implement something with the following type
(P : ℕ → Set) → (∀ n → Dec (P n)) → ¬ ¬ ∃ P → ∃ P
This is (one form of) Markov's principle.
markov : (P : ℕ → Set) → (∀ n → Dec (P n)) → ¬ ¬ ∃ P → ∃ P
markov P dec _ = search 0
  where
  search : ℕ → ∃ P
  search n with dec n
  ... | yes p = n , p
  ... | no _  = search (suc n)
Thank you.
Yes, I thought of Markov's principle.
It is good.
But P is still rather arbitrary.
It is desirable to have an example in which P is as concrete
as possible.
Maybe some modification of Ackermann's function is possible.

------
Sergei
Martin Escardo
2017-11-20 20:34:58 UTC
Post by Sergei Meshveliani
Thank you.
Yes, I thought of Markov's principle.
It is good.
But P is still rather arbitrary.
It is desirable to have an example in which P is as concrete
as possible.
Maybe some modification of Ackermann's function is possible.
It won't be so easy. You are asking questions about the proof theory of
Agda / Martin-Lof Type Theory.

If you don't have any universe (type "Set") or fancy inductive types
(defining universes), the expressivity of the language is like that of
Goedel's system T regarding which functions N->N you can define
(including, in particular, Ackermann's function, as you say).

Once you start having universes Set1, Set2, Set3, ... you start
increasing the amount of definable functions N->N. The termination
checker by itself won't believe your definitions automatically, though,
as you increase the universes you allow yourself to use. Instead, you
will have to provide bona fide, rather subtle, means of definition by
yourself.

Also, there is another subtle point: the fact that you can't prove
something like "for all ..., blah", doesn't mean that you will be able
to come up with an example of a "..." such that "not blah".

Martin
Sergei Meshveliani
2017-11-21 11:16:29 UTC
Post by Martin Escardo
Post by Sergei Meshveliani
Thank you.
Yes, I thought of Markov's principle.
It is good.
But P is still rather arbitrary.
It is desirable to have an example in which P is as concrete
as possible.
May be, some modification of Ackermann's function is possible.
It won't be so easy. You are asking questions about the proof theory of
Agda / Martin-Lof Type Theory.
If you don't have any universe (type "Set") or fancy inductive types
(defining universes), the expressivity of the language is like that of
Goedel's system T regarding which functions N->N you can define
(including, in particular, Ackermann's function, as you say).
Once you start having universes Set1, Set2, Set3, ... you start
increasing the amount of definable functions N->N. The termination
checker by itself won't believe your definitions automatically, though,
as you increase the universes you allow yourself to use. Instead, you
will have to provide bonafide, rather subtle, means of definition by
yourself.
Also, there is another subtle point: the fact that you can't prove
something like "for all ..., blah", doesn't mean that you will be able
to come up with an example of a "..." such that "not blah".
This can be some P for which ∃ P is proved non-constructively
and is very difficult to prove constructively.
For example, Higman's lemma, about a certain repetition existing in any
infinite sequence of words. But Higman's lemma has a simple enough proof
in Agda -- for an alphabet of two letters (see Sergei Romanenko's
code). Probably the general case is not much more complex.

Primitive recursion is not enough for the practical use of Agda.
But adding even such a simple termination rule as "the argument is a
proper subterm" seems to change the situation greatly.
And I wonder -- how greatly.

I continue programming computer algebra, and wonder when I will
encounter a practical method for which I fail to prove termination.

Regards,

------
Sergei
