A Computability-Theoretic
Learning Theory Sampler

John Case (case@cis.udel.edu)
University of Delaware (USA)

This talk is about algorithmic learning (or inference) of programs for computational objects -- from data about those objects. It provides a sampler of results in three settings (together with a list of other settings that might have been presented). The three settings:

1.
For learning or inferring programs for computable functions, presented are some delicate tradeoffs between the generality of an algorithmic learning device and the quality of the successful programs it converges to. There are preliminary results to the effect that, with small increases in generality of the learning device, the computational complexity of some successfully learned programs is provably unalterably suboptimal. There are also results in which the complexity of successfully learned programs is optimal and the learning device is quite general, but some of those optimal, learned programs are provably unalterably information deficient -- in fact, deficient as to safe, algorithmic extractability of even the fact that they are approximately optimal. For these results, the safe, algorithmic methods of information extraction will be by proofs in arbitrary, true, computably axiomatizable extensions of Peano Arithmetic.
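To fix intuitions about the underlying model, here is a minimal, illustrative Python sketch of learning in the limit by identification by enumeration: a learner sees the graph of a target function point by point and eventually stabilizes (syntactically converges) on a consistent program. The hypothesis list and all names below are assumptions made for illustration only; they are not the constructions behind the complexity and information-deficiency results above.

# Hypothesis space: a tiny, assumed enumeration of programs, each a Python callable.
HYPOTHESES = [
    ("const_0",  lambda x: 0),
    ("identity", lambda x: x),
    ("double",   lambda x: 2 * x),
    ("square",   lambda x: x * x),
]

def learner(data_points):
    """Return the index of the first hypothesis consistent with the data seen so far.
    On successively longer data sequences for a target in the class, the returned
    index eventually stops changing (convergence in the limit)."""
    for i, (_, prog) in enumerate(HYPOTHESES):
        if all(prog(x) == y for x, y in data_points):
            return i
    return None  # no consistent hypothesis in this finite sketch

if __name__ == "__main__":
    target = lambda x: x * x          # the function being learned
    data = []
    for n in range(6):                # present the graph of the target, point by point
        data.append((n, target(n)))
        guess = learner(data)
        print("after", n + 1, "data points, learner conjectures:",
              HYPOTHESES[guess][0] if guess is not None else "none")

Running the sketch shows a few mind changes (const_0, identity, double) before the learner settles permanently on the correct program; the results in this setting concern the quality of the program so settled on, which the sketch does not address.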

2.
A number of child cognitive development phenomena follow a U-shaped form: first learning a (successful) behavior; then unlearning it; and, lastly, relearning it. One can ask whether U-shaped learning is an evolutionary accident or essential. For learning grammars (or r.e. indices) for r.e. languages from positive data, there are classes which are learnable with syntactic convergence in the limit to successful grammars, but not without returning to abandoned enumeration behaviors. However, there is a subtlety: the abandoned behavior returned to need not be a successful behavior. We will provide a current report on what is known in this formal learning setting. For example, in this setting, returning to a twice-abandoned behavior is provably not essential.
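The following minimal Python sketch makes the U-shape concrete as a property of a conjecture sequence: a correct conjecture is abandoned for an incorrect one and a correct conjecture later reappears. The toy languages, the hand-crafted conjecture sequence, and the helper names are assumptions for illustration only, not the classes from the talk.

def generates(grammar, upto=20):
    """Finite approximation of the language enumerated by a toy 'grammar'
    (here just a predicate on natural numbers)."""
    return frozenset(n for n in range(upto) if grammar(n))

TARGET = generates(lambda n: n % 2 == 0)   # the r.e. language to be learned (the evens)

# A hand-crafted conjecture sequence: correct, then abandoned, then relearned.
conjectures = [
    ("evens",    lambda n: n % 2 == 0),               # correct behavior
    ("evens<10", lambda n: n % 2 == 0 and n < 10),    # unlearned: incorrect behavior
    ("evens",    lambda n: n % 2 == 0),               # correct behavior returned to
]

def is_u_shaped(conjs, target):
    """True if some correct conjecture is later abandoned for an incorrect one,
    and a correct conjecture appears again afterwards."""
    correct = [generates(g) == target for _, g in conjs]
    for i in range(len(correct)):
        for j in range(i + 1, len(correct)):
            for k in range(j + 1, len(correct)):
                if correct[i] and not correct[j] and correct[k]:
                    return True
    return False

if __name__ == "__main__":
    print("conjecture sequence is U-shaped:", is_u_shaped(conjectures, TARGET))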

3.
Closed computable games model reactive process-control problems. Closed means that if Player I does not lose at any finite point in the playing of the game, then Player I does not lose (in the limit). An example is discrete regulation of room temperature, with Player I as thermostat. A master of a closed computable game plays an algorithmic winning strategy for Player I. Presented are results about the advantages, for Player I, of watching the behaviors (not the programs) of masters. It can be shown that: selected masters enable learning to win more process-control games than arbitrary masters; for each kind of master, it is better to learn one's own winning strategy than to try to copy the master's; and, for each kind of master, one can learn more process-control games with m+1 masters than with m. Discussed will be the connection to behavior cloning in applied machine learning.
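Here is a minimal Python sketch in the thermostat spirit of a closed game: Player I (the thermostat) wins iff the temperature never leaves a safe band at any finite stage, so losing, if it happens, happens at a finite point. The dynamics, the safe band, and the strategies are assumptions for illustration only, not the formal games or the learning results of the talk.

SAFE_LOW, SAFE_HIGH = 18.0, 24.0   # Player I loses as soon as the temperature exits this band

def thermostat(temp):
    """Player I's strategy: heat when cool, cool when warm."""
    if temp < 20.0:
        return +1.0   # heating on
    if temp > 22.0:
        return -1.0   # cooling on
    return 0.0

def disturbance(stage):
    """Player II's moves: bounded external temperature changes."""
    return 0.8 if stage % 2 == 0 else -0.8

def play(strategy, stages=50, temp=21.0):
    """Closed-game condition: Player I has not lost iff the temperature stayed
    in the safe band at every finite stage played so far."""
    for stage in range(stages):
        temp += disturbance(stage) + strategy(temp)
        if not (SAFE_LOW <= temp <= SAFE_HIGH):
            return False, stage     # Player I loses at a finite stage
    return True, stages             # never lost within the horizon played

if __name__ == "__main__":
    won, stage = play(thermostat)
    print("Player I survived every stage played:", won, "(stages:", stage, ")")

In the sketch, watching a master would mean observing the (temperature, action) behavior produced by play with a winning strategy, not inspecting the strategy's code; the results above compare learning from such observed behavior with copying it outright.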