While replying to the comments on my previous note (see here), I started thinking about comparing humans and computers in terms of learning performance. Does this comparison even make sense? If it does, on what basis, or with which criteria, can it be made?

If you adopt a purely theoretical point of view, the no free lunch theorem tells you that you simply cannot compare two learning machines *in general*. So any comparison must be based on a specific, restricted set of reference problems.

If you adopt a Bayesian point of view, learning is very easy: once you have chosen your prior (and likelihood function), obtaining the posterior is just a matter of computation. So, again, you cannot compare two learning machines; you can only compare their priors. Their performance will be more or less directly related to how well the prior matches the problem at hand (i.e. how high the prior probability of the problem to be learned is).
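To make the "just a matter of computation" point concrete, here is a minimal sketch of my own (not from the note), using the standard conjugate Beta-Bernoulli model: once the prior and likelihood are fixed, the posterior follows mechanically.

```python
# A Beta(a, b) prior over a coin's bias, combined with a Bernoulli
# likelihood over observed flips, yields a Beta posterior in closed
# form -- no further "learning" decisions are required.

def posterior_params(a, b, flips):
    """Beta(a, b) prior + Bernoulli likelihood -> Beta posterior params.

    flips: sequence of 0/1 outcomes.
    """
    heads = sum(flips)
    tails = len(flips) - heads
    return a + heads, b + tails

def posterior_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Uniform prior Beta(1, 1); observe 7 heads out of 10 flips.
a, b = posterior_params(1, 1, [1, 1, 1, 0, 1, 1, 0, 1, 0, 1])
print(posterior_mean(a, b))  # 8/12, i.e. about 0.667
```

All the modelling effort sits in the choice of prior (here, the uniform Beta(1, 1)); the update itself is pure arithmetic.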

So we may already have very efficient learning algorithms (probably even better than the brain, since they can compute more precisely and much faster -- although one can debate what computing means in this context), and yet we still believe computers cannot match human learning performance because we compare them on tasks for which humans have much better priors.
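This can be illustrated with a small sketch of my own (an assumption for illustration, not from the note): two Bayesian learners with *identical* update rules but different Beta priors estimate the same coin's bias from a few flips. The only difference between them is how well the prior matches the problem.

```python
# Two learners, same algorithm, different priors: the one whose prior
# matches the true bias wins on small samples.

def posterior_mean(a, b, flips):
    """Posterior mean of a Beta(a, b)-Bernoulli model after observing flips."""
    heads = sum(flips)
    return (a + heads) / (a + b + len(flips))

true_bias = 0.9
flips = [1, 1, 1, 0, 1]  # a small sample: 4 heads out of 5

good = posterior_mean(9, 1, flips)  # prior already concentrated near 0.9
bad = posterior_mean(1, 9, flips)   # prior concentrated near 0.1

print(abs(good - true_bias))  # small error (about 0.03)
print(abs(bad - true_bias))   # large error (about 0.57)
```

The "algorithm" is the same line of arithmetic in both cases; all the performance gap comes from the prior.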

Of course, I am not saying anything new here: Bayesians would tell you that this has always been clear to them; you only have to build a good prior and you are done.

But building a good prior is not an easy task: it requires defining the right features, finding the right notion of smoothness... and there is basically no guidance for this! Moreover, it is completely problem-specific. So, apart from helping to implement Bayes' rule efficiently, general (i.e. application-independent) Machine Learning research cannot help much.
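As a small sketch of my own (not from the note) of where the "right notion of smoothness" hides: even within a single standard family of priors, a Gaussian process with an RBF kernel, there is a problem-specific lengthscale parameter that encodes the smoothness assumption, and nothing in the formula itself says which value fits your problem.

```python
from math import exp

def rbf_kernel(x1, x2, lengthscale):
    """Prior correlation between function values at x1 and x2
    under a Gaussian process with an RBF (squared-exponential) kernel."""
    return exp(-((x1 - x2) ** 2) / (2 * lengthscale ** 2))

# Same pair of inputs, two different smoothness assumptions:
print(rbf_kernel(0.0, 1.0, lengthscale=2.0))  # high correlation: a smooth prior
print(rbf_kernel(0.0, 1.0, lengthscale=0.2))  # near zero: a wiggly prior
```

Choosing between these two priors is exactly the kind of problem-specific decision for which there is little general guidance.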

One could then draw the conclusion that the essence of the learning problem is not statistical but computational.

But I still think there are important statistical problems, and I will come back to this issue in a future note...
