


I first want to say that your blog is full of very interesting thoughts.
About being rational, I think you've framed the discussion well by separating ideas that count as knowledge from those that do not. Taking a deeper view would have led you into inextricable aporia, and the clarity of your text is a good way to open the discussion. Still, there are some points that could be clarified further. The one I'm not clear on is learnability. Here is my point of view (I can't get out of it; help me).

You say that man has to be rational to construct assumptions, and I agree with you when you say that this is less obvious than it sounds. There are many "rules" that are more or less rational, but I think there is an important one that should be seen as a primary principle: learnability.
There is surely a subtle relation between rationality and learnability if you link them through the concept of simplicity, but I keep thinking that rationality has to be separated from learnability. In my opinion, principles like ergodicity or "distinguishability" are adopted not out of rationality but for the sake of learnability. I agree that it is not rational to try to learn something that is not learnable, and as a consequence one could say that it is rational to suppose that what we try to learn is learnable. But the problem is that we do not know in advance whether something is learnable or not, and this leads me to say that learnability has to be a primal belief. We must believe at least once in learnability; someone who never believes in learnability does not believe in the simple fact that he is alive. This primal belief is the price to pay to live. Learnability can indeed be postulated as a rational principle, but some questions remain: why do we sometimes say "this phenomenon can't be learned"? What rational principle do we apply to make that sort of decision? Or is it better to say that every experience lets us learn something?

In fact this problem is quite philosophical and not that clear to me (maybe I have mixed everything up). I think it is more difficult for someone used to "closed" mathematical abstractions to face the real philosophical problem of learnability. Mathematically speaking, learnability is a sort of game we play using hypotheses (sometimes mixed up with rational hypotheses, i.e. conclusions of past theorems?). For example, suppose you face the problem of learning whether there is a signal (something to learn?) in all this noise, or whether what you see is only due to randomness that has nothing to do with the phenomenon. You keep believing that there might be a sufficiently structured phenomenon, or there might not be. Unfortunately, mathematics tells you that if you want to make this decision any better than a random one, you have to make strong hypotheses about the distinguishability of noise carrying a signal from noise without one. In other words, if you want to learn whether there is something to learn, you have to suppose that you can learn the answer to the question "is there something to be learned or not?". When you see nothing, is it then more rational to say that there is nothing, or to say that it is not possible to learn whether there is anything? I think (this is really schematic, because a mathematician, or the logician behind him, would say something like "I keep both answers") that the learner has to believe that there isn't anything (i.e. "I learned that there isn't anything"). In this first example there are no real consequences, but imagine that two learners face the same phenomenon, and the first says "with these hypotheses (which to my mind are the most rational), I can't learn anything", while the second declares "I do not agree with your hypotheses; I have stronger hypotheses (which to my mind are the most rational) that make the problem learnable".
I'm quite sure you want to say something like "it depends how much stronger the hypotheses are", but how will you react if they are stronger by only a little nothing? (I think I have to search for a more precise example here.)
To conclude (I got a bit lost myself), I would say that inference has to be done with the weakest hypotheses (note that the word "weakest" is really problematic here), some sort of primal belief that makes consistency possible, that makes it possible to use rationality to "avoid inconsistencies". Can the weakest hypothesis be "too strong" to be accepted? How do we decide? Isn't it better to think of inference as the cheapest way of learning something?
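The signal-in-noise decision discussed above can be sketched numerically. This is a minimal sketch, assuming Gaussian noise and a "signal" that is just a mean shift (the distributions, sample sizes, and threshold rule are my own illustrative choices, not from the discussion): when the two hypotheses are well separated the decision is learnable, and when they are nearly indistinguishable it degrades toward a coin flip.

```python
import numpy as np

rng = np.random.default_rng(0)

def detect_signal(x, threshold):
    """Declare 'signal present' when the sample mean exceeds the threshold."""
    return x.mean() > threshold

def accuracy(signal_strength, n_samples=100, n_trials=2000):
    """Fraction of correct decisions when half the trials contain a signal."""
    correct = 0
    for trial in range(n_trials):
        has_signal = trial % 2 == 0
        mean = signal_strength if has_signal else 0.0
        x = rng.normal(loc=mean, scale=1.0, size=n_samples)
        # Midpoint threshold between the two hypothesised means.
        if detect_signal(x, signal_strength / 2) == has_signal:
            correct += 1
    return correct / n_trials

# A clearly distinguishable signal can be learned from the data...
strong = accuracy(signal_strength=1.0)
# ...but as the two hypotheses become indistinguishable, the decision
# is barely better than random guessing.
weak = accuracy(signal_strength=0.01)
print(strong, weak)
```

The point of the sketch is the second number: without a distinguishability assumption (here, a large enough mean gap relative to the noise), no threshold rule does appreciably better than chance.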

There are no truths, only tendencies, so
be rational ... but don't be too sceptical


Robin, I've read your comment, but I have to admit that I didn't understand exactly what you meant. Nevertheless, you raise the idea of "learnability", which is quite new to me and which relates to my question:
- under which mathematical framework should this notion of learnability be studied? Is this framework (or are these frameworks) general enough?

In other words: Olivier, you stress that the probabilistic framework is just one setting among others, but what are those others?

Possible answers:
* Valiant's PAC learning?
* Vapnik's risk minimisation?
* Gold's algorithmic learning?
* ...

Is it that kind of framework you are thinking about?
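As an illustration of the first candidate, here is a toy sketch in the spirit of Valiant's PAC setting: empirical risk minimisation over threshold concepts on [0, 1] with i.i.d. uniform examples. The target concept, candidate grid, and sample sizes are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_THRESHOLD = 0.5  # unknown target concept: label = 1 iff x >= 0.5

def label(x):
    return (x >= TRUE_THRESHOLD).astype(int)

def erm_threshold(n):
    """Empirical risk minimisation over a grid of candidate thresholds."""
    x = rng.uniform(size=n)
    y = label(x)
    candidates = np.linspace(0.0, 1.0, 201)
    errors = [np.mean((x >= t).astype(int) != y) for t in candidates]
    return candidates[int(np.argmin(errors))]

def generalisation_error(t, n_test=100_000):
    """Monte Carlo estimate of the true error of threshold t."""
    x = rng.uniform(size=n_test)
    return np.mean((x >= t).astype(int) != label(x))

# PAC-style behaviour: more i.i.d. examples give, with high probability,
# a hypothesis whose true error is smaller.
err_small = generalisation_error(erm_threshold(10))
err_large = generalisation_error(erm_threshold(1000))
print(err_small, err_large)
```

Note the hypotheses doing the work here, which is Robin's point: examples are assumed i.i.d. from a fixed distribution, and the target is assumed to lie in (or near) the hypothesis class.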


Pierre, I don't think I made myself clear, and I am sorry for my English; I will work on it... The problem of learnability has its own framework in statistical inference, and I am sure you know it (we just don't usually call it learnability). Suppose, in a parametric framework, that you want to infer the value of a parameter: you talk about orthogonality when the distributions obtained for two different values of the parameter charge disjoint sets (this corresponds to perfect learnability), and you talk about "equivalent" distributions when the L1 distance between the distributions obtained for two different values of the parameter is null (this is perfect non-learnability). For example, in an infinite-dimensional Gaussian framework, a theorem of Kakutani tells you that two such distributions are either orthogonal or equivalent. If the unknown parameter is the mean of an L^2 process, equivalence occurs when the difference between two values of the parameter lies in the range of the square root of the covariance operator (equivalently, in the domain of its inverse square root).
When you have repeated experiences, you implicitly assume ergodicity (here, the fact that the distribution is the same for each experience), and without it you often get non-learnability... All of this is a mathematical framework in which to think about learnability.
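A small numeric illustration of the two ideas above (orthogonality versus equivalence, and the role of repeated experiences), assuming the simplest parametric case of a Gaussian mean shift, which is my choice, not part of the comment: the total variation (L1-type) distance between the laws of n i.i.d. observations from N(0, 1) and N(theta, 1) has a closed form, and a parameter difference that is barely distinguishable from one observation becomes nearly orthogonal, hence learnable, as n grows.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def tv_distance_gaussian_shift(theta, n):
    """Total variation distance between the laws of n i.i.d. draws
    from N(0, 1) and from N(theta, 1).

    By sufficiency of the sample sum, this reduces to the distance
    between two unit-variance Gaussians whose means differ by
    theta * sqrt(n), which has the closed form 2 * Phi(gap / 2) - 1.
    """
    gap = abs(theta) * sqrt(n)
    return 2.0 * normal_cdf(gap / 2.0) - 1.0

# A fixed parameter difference is barely visible from one observation,
# but repeated (i.i.d.) experiences drive the product measures toward
# orthogonality: the parameter becomes learnable.
for n in (1, 10, 1000):
    print(n, round(tv_distance_gaussian_shift(0.1, n), 4))
```

Distance 0 would be perfect non-learnability (equivalent distributions) and distance 1 perfect learnability (orthogonal distributions); repetition moves a fixed mean gap from near the first extreme toward the second, which is exactly what the implicit ergodicity assumption buys.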

BUT what I wanted to say is that this hides the real question of learnability behind hypotheses that cannot be known in the real world. That is why I say it is hard for a mathematician to face the problem of learnability (because he summarises it with hypotheses that are either true or not)... The reason I talk about learnability is not to put it in a framework, but to point out that it is a principle, and not a rational one, that guides us when we build frameworks, statistical methods, or more generally when we want to learn things from the real world...


