I have just started reading "Blink: The Power of Thinking Without Thinking" by Malcolm Gladwell. Like Freakonomics, this book has sold very well in the US, so I was curious about it.
Overall, it is fun to read, although a bit disorganized. What struck me most, though, is the main claim that humans can reason unconsciously.
More precisely, there are many situations (and the book gives a large number of surprising yet convincing examples) where humans are able to perform difficult "classification" tasks unexpectedly fast. For example, some art experts can tell genuine sculptures from fakes virtually in a blink. Even more surprising: they are completely unable to explain what makes them think a specific sculpture is fake!
It thus seems (and there are plenty of psychological studies on this) that, with enough training, humans can learn very difficult classification tasks (I mean classification in the classical Machine Learning sense), including tasks that are not natural.
Let me try to explain what is new here.
We know that humans are very good at learning certain classification tasks: young children can classify objects from images very easily, achieving much better performance than any computer to date.
We also know that, once this has been learned, the actual classification of any new image can be done in a few milliseconds.
Hence, with enough training, the brain is able to perform this complicated task very easily, without requiring any conscious reasoning to take place.
However, I used to think that the tasks we can learn easily are those for which we have a sufficiently strong prior encoded in our genes. In other words, I thought that the ability to learn visual classification tasks was the result of a long process of natural evolution (which provides us with the appropriate pre-wiring, or the prior in Bayesian terms) combined with a short period of adaptation (similar to computing the posterior, in Bayesian terms again).
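To make the analogy concrete, here is the Bayesian update I have in mind (a minimal sketch in my own notation, not anything from the book): evolution supplies a prior $p(h)$ over possible classifiers $h$, and the short period of adaptation to an individual's experience $D$ amounts to computing the posterior

$$ p(h \mid D) \;=\; \frac{p(D \mid h)\, p(h)}{\sum_{h'} p(D \mid h')\, p(h')}. $$

Learning is fast precisely when the prior already concentrates most of its mass near the right classifier, so that a small amount of data $D$ is enough to single it out.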
What is new to me in this book is the following: we can be trained to perform tasks that have nothing to do with evolutionary constraints, and this training can happen unconsciously (without any explicit or conscious reasoning). An example of this phenomenon is given in the book: a tennis coach once realized that he could predict whether a player would miss his serve just before the player hit the ball. However, he was unable to explain why or how he could do so!
This may show that our brain hosts a powerful learning engine (with a powerful feature extractor to isolate the relevant information) that can be triggered without even requiring our attention, and that can deal with many different learning tasks.
Of course this raises the question of the prior: we know that there is no universally best learning algorithm, only algorithms better adapted to particular learning problems. In other words, we can only learn the problems that have a large enough weight under the prior, which makes it hard to be good simultaneously at many different tasks.
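The intuition I am relying on here is roughly Wolpert's no-free-lunch theorem (my own loose paraphrase, not something stated in the book): when errors are averaged uniformly over all possible target functions $f$, any two learning algorithms $A_1$ and $A_2$ perform identically,

$$ \sum_{f} P(\text{error} \mid f, A_1) \;=\; \sum_{f} P(\text{error} \mid f, A_2), $$

so an algorithm can only do well on the targets its implicit prior favours, at the price of doing worse on the targets it disfavours.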
Why is it that the prior encoded in our brain allows us to learn such useless tasks as telling whether a tennis player will miss his serve? And why is this prior not more "peaked" around the tasks that are really useful for our survival?
I guess this book touches on a lot of interesting cognitive science problems, but it also revived my interest in human learning and its relationship to Machine Learning...