The question is whether, in order to make progress toward building learning machines, it is necessary to study the only available examples of such machines we have so far: animal brains (and more specifically human ones).
People who would answer no to this question often cite the example of planes: the first successes in building flying machines came when people stopped trying to imitate birds. So it seems that understanding how Nature solved a problem may not always help. One reason is that animal brains were not designed solely to solve the learning problem, just as birds were not designed solely to solve the flying problem. Learning is only a byproduct of an evolution that was mainly geared towards survival and adaptation to certain environmental conditions.
Despite these considerations, there have been many attempts to build bridges between the study of natural and artificial learning. For example, people working on artificial neural networks or on genetic algorithms were never ashamed of using biological findings as a source of inspiration.
My feeling is that it is fine to be interested both in artificial and natural learning provided the following is accepted:
- First of all, it is not necessary to start from natural learning to develop a theory of artificial learning, and there is no need for this theory to explain the specifics of natural learning.
- Second, it is important to abstract away from natural learning in order to formulate precisely what learning means.
- However, any source of inspiration is good, especially when one gets stuck, so why not look for inspiration in natural learning.
I used to think that the work being done in cognitive psychology was too specifically human to be of any interest to people working on learning theory. I was also very suspicious of whatever was said to be "biologically inspired" or "cognitively inspired". However, the remarkable efforts at abstracting concepts made by some cognitive psychologists suggest that I may have been too critical.
In a recent interview, Tom Mitchell, a famous machine learning researcher, expressed similar views:
[The interviewer] - Learning the brain’s algorithms for doing things is very difficult, and is not very well understood as yet. Do you ever find it frustrating trying to get computers to learn things that we ourselves don’t know the inner workings of?
[Tom Mitchell] - That’s actually a very interesting observation—I actually don’t get frustrated by that—why? I don’t know!
Maybe it’s odd, but it’s true that much of the work in machine learning—how to get computers to learn—has been kind of unguided by anything we know about human learning. It just grew up on its own—“ok, how would we engineer this system to look at a lot of data and discover regularities?”—so people engineered those instead of looking at how humans do it and then trying to duplicate it. But recently, because I’ve been looking at the brain, I’ve been starting to learn more about what people know about human learning—and it’s very different. For example, when we humans learn, a big part of what determines whether we succeed or not is all about motivation. And there’s nothing in machine learning algorithms that even remotely corresponds to motivation. So it’s just a very different phenomenon…maybe in 10 years we’ll understand it better, but right now, the two are very different.
I also recently heard about the work of Alison Gopnik, who has studied the way children learn about causes. She draws some interesting connections between the causal structure children infer and graphical models such as Bayesian networks. She also explains that the way children learn is very "multivariate", in the sense that they try many things at a time and easily extract causal relationships from multidimensional observations. In other words, they do not need to act on one knob at a time to understand how a machine works.
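To make the "multivariate" point concrete, here is a minimal sketch (not from Gopnik's work; the toy "knobs and machine" setup and reliability numbers are my own assumptions) showing that even when several knobs are flipped at random on every trial, simple conditional comparisons across trials are enough to single out the knob that actually drives the effect:

```python
import random

random.seed(0)

def run_trial():
    # Three "knobs" flipped independently at random on each trial --
    # a multivariate intervention, like a child trying many things at once.
    knobs = [random.random() < 0.5 for _ in range(3)]
    # Hidden ground truth (an assumption for this toy example): only
    # knob 1 makes the machine go, and it works 90% of the time.
    effect = knobs[1] if random.random() < 0.9 else not knobs[1]
    return knobs, effect

# Collect observations from many multivariate trials.
trials = [run_trial() for _ in range(2000)]

def association(k):
    # Estimate P(effect | knob k on) - P(effect | knob k off).
    # A large gap flags knob k as a likely cause.
    on = [e for knobs, e in trials if knobs[k]]
    off = [e for knobs, e in trials if not knobs[k]]
    return sum(on) / len(on) - sum(off) / len(off)

scores = [association(k) for k in range(3)]
cause = max(range(3), key=lambda k: abs(scores[k]))
print(cause)  # knob 1 is recovered without ever varying one knob at a time
```

The point of the sketch is only that no trial isolates a single variable, yet the causal knob still stands out statistically; this is the flavor of inference that Bayesian-network structure learning formalizes.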