It seems obvious that, despite three or four decades of research on Machine Learning, the ability of computers to learn remains far inferior to that of humans.
It therefore seems natural to set matching human performance as a goal for learning algorithms.
However, is it really fair to compare computer learning with human learning? Or, more precisely, how should they be compared in order to have an objective assessment of their respective abilities?
Here are some remarks that one should keep in mind before making such a comparison:
First of all, it is well known that, theoretically speaking, there is no best algorithm: if you consider all possible problems and compare any two algorithms, the first will do a better job than the second on exactly half of them (I am not formalizing these notions here, but precise statements exist, such as the "no free lunch" theorems). In other terms, there is no algorithm that systematically outperforms all other algorithms on all problems. It is thus pointless to compare learning algorithms in full generality.
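This averaging argument can be checked by brute force on a toy version of the setting. The sketch below is only an illustration under simplifying assumptions (a four-point domain, binary labels, two deliberately opposite prediction rules): averaged over every possible target function, both rules achieve exactly 1/2 accuracy on the held-out points.

```python
from itertools import product

# Toy "no free lunch" check: a domain of four inputs, two used for
# training and two held out. Averaged over all 16 possible labelings,
# any fixed prediction rule gets off-training-set accuracy 1/2.
train_idx, test_idx = [0, 1], [2, 3]

def learner_a(train_labels):
    # predicts 1 everywhere if any training label is 1, else 0
    return 1 if sum(train_labels) >= 1 else 0

def learner_b(train_labels):
    # always predicts the opposite of learner_a
    return 1 - learner_a(train_labels)

def avg_test_accuracy(learner):
    total = 0.0
    targets = list(product([0, 1], repeat=4))  # all 16 labelings
    for f in targets:
        pred = learner([f[i] for i in train_idx])
        correct = sum(1 for i in test_idx if pred == f[i])
        total += correct / len(test_idx)
    return total / len(targets)

print(avg_test_accuracy(learner_a))  # 0.5
print(avg_test_accuracy(learner_b))  # 0.5
```

The two learners disagree on every input, yet averaged over all targets neither has an edge; any preference between them must come from restricting the class of problems.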
So in order to make a meaningful comparison, one should first select a limited number of problems of interest. The natural class of problems in our case would be the learning problems that humans cope with: e.g. speech recognition, object recognition, motor learning, and so on.
Having a list of such problems would make a reasonable benchmark on which learning algorithms could be evaluated and compared to human performance.
Of course, on all these tasks the algorithms we have built so far are unable to match (or, in some cases, even come close to) human performance.
Does it mean our algorithms are bad?
I would tend to think rather the opposite: they are very good, and much better than humans in many respects. In my opinion, the reason they do not achieve good success rates on problems that humans solve trivially is that they do not have access to the same information and are not given the right "priors".
Human brains are genetically designed for solving certain types of learning problems. For example, there is plenty of hard-wiring in the visual system that makes object recognition easy for humans.
Also, for most "human learning" tasks, when we build a database of examples to give to a computer, we provide it with only a very limited amount of information. For example, when humans start learning to recognize handwritten characters, they already have a visual system that has been extensively trained to recognize all sorts of shapes. Imagine someone blind from birth who suddenly recovers sight at the age of 6, is immediately put in front of handwritten characters on a screen, and is asked to classify them. I would suspect that he would not achieve better accuracy than existing learning algorithms.
A similar but more realistic example can be constructed as follows. Imagine you are presented with images of handwritten characters whose pixels have been shuffled in a deterministic way. For vector-based learning algorithms this would not make any difference (they would still achieve good predictive accuracy), while for humans it would be completely impossible to reach any reasonable level of generalization.
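The invariance claimed here is easy to verify in a small sketch. Everything below is synthetic and chosen only for illustration (random "images" standing in for characters, a nearest-centroid classifier standing in for a generic vector-based learner): applying one fixed pixel permutation to all inputs leaves the classifier's accuracy exactly unchanged, because Euclidean distances do not depend on the ordering of the coordinates.

```python
import numpy as np

# Two synthetic classes of 64-"pixel" vectors with different means;
# this data is fabricated purely to demonstrate permutation invariance.
rng = np.random.default_rng(0)
n_per_class, n_pixels = 100, 64
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, n_pixels)),
               rng.normal(0.7, 1.0, (n_per_class, n_pixels))])
y = np.array([0] * n_per_class + [1] * n_per_class)

def nearest_centroid_accuracy(X, y):
    # classify each point by the nearer class centroid
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pred = (np.linalg.norm(X - c1, axis=1)
            < np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

perm = rng.permutation(n_pixels)  # one fixed, deterministic pixel shuffle
acc_original = nearest_centroid_accuracy(X, y)
acc_shuffled = nearest_centroid_accuracy(X[:, perm], y)
assert acc_original == acc_shuffled
```

A human shown the permuted images would see noise; the algorithm, which never used the spatial layout in the first place, loses nothing.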
Yet another example illustrating the kind of task computers are faced with is spam classification. It is clear that humans are better at classifying spam vs. non-spam emails. The reason is that they "understand" what the emails mean, thanks to years of language training. So now, imagine you are given 200 emails written in Chinese, among which 100 are spam and 100 are not. Do you think someone who has never had any training in the Chinese language would reach the kind of generalization accuracy a computer would?
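The point is that a statistical classifier never relied on "meaning" to begin with: it only sees token co-occurrence counts, which look the same in any language. The minimal multinomial Naive Bayes sketch below makes this concrete; the tokens and training documents are arbitrary stand-ins, invented here for illustration, for words in a language the designer has never seen.

```python
import math
from collections import Counter

# Made-up training "emails" in an unknown language: each document is a
# list of opaque tokens. The classifier only uses counts.
spam_train = [["aa", "bb", "aa"], ["aa", "cc"], ["bb", "aa"]]
ham_train  = [["dd", "ee"], ["ee", "ff", "dd"], ["dd", "ff"]]

def token_counts(docs):
    counts = Counter(tok for doc in docs for tok in doc)
    return counts, sum(counts.values())

def log_likelihood(doc, counts, total, vocab_size, alpha=1.0):
    # Laplace-smoothed log P(doc | class) under a multinomial model
    return sum(math.log((counts[tok] + alpha) / (total + alpha * vocab_size))
               for tok in doc)

vocab = {t for doc in spam_train + ham_train for t in doc}
spam_counts, spam_total = token_counts(spam_train)
ham_counts, ham_total = token_counts(ham_train)

def classify(doc):
    ls = log_likelihood(doc, spam_counts, spam_total, len(vocab))
    lh = log_likelihood(doc, ham_counts, ham_total, len(vocab))
    return "spam" if ls > lh else "ham"

print(classify(["aa", "bb"]))  # spam
print(classify(["dd", "ee"]))  # ham
```

The classifier generalizes to unseen documents purely from count statistics, exactly the position a human faces with 200 emails in a language they cannot read, minus the human's lifetime of linguistic priors.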
The examples above illustrate how little information computers have when faced with a supervised learning task. It seems reasonable to assume that humans faced with similar tasks would not do much better.
The above discussion aims at emphasizing the importance of "having the right prior". To some extent, building an algorithm essentially means designing a prior, and a prior can only be designed with respect to a class of problems (there is no "universal" prior). Designing a prior (which also means choosing an appropriate representation of the data) is actually a way of introducing a large amount of information into the learning algorithm. Most human learning tasks require a lot of information, and this is why computers usually fail on them.
Can we consider this from another angle? Suppose we separate a learning algorithm into different stages. If we can show that the result of each stage improves or helps human learning, can we then declare that this learning algorithm is useful for, or similar to, human learning?
Posted by: www.facebook.com/profile.php?id=1277200990 | October 14, 2009 at 04:42 AM
What is the precise difference between intuitive reasoning and symbolic reasoning?
Posted by: Soft Cialis | January 29, 2010 at 04:51 PM
see: the Watson computer on Jeopardy. I think that says a lot about the capability of computers. The best Jeopardy players ever didn't even get more than 8 questions right against it.
Posted by: Seo Company in Joplin Missouri | February 15, 2011 at 11:14 PM