Inductive inference usually proceeds in two steps. The first consists in constructing a set of assumptions that summarizes the knowledge one has about a phenomenon of interest prior to observing instances of it. The second consists in actually observing these instances and deriving new knowledge from the observations.
A possible question is: what principle may guide each of these steps?
A possible answer is: be as rational as possible. In other words, try to avoid inconsistencies.
Regarding the second step, it is sometimes possible to formulate the problem as a purely deductive one. Indeed, the question is: "given these assumptions and these data, what can I deduce?". For example, in a probabilistic framework, one has a prior distribution and observations, and aims at obtaining an updated distribution. The rational way of doing this is to apply Bayes' rule.
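As a concrete sketch of this deductive step (the coin example and the numbers are mine, not the author's): with a Beta prior on a coin's bias and a Binomial likelihood, Bayes' rule has a closed form, so the "update" is literally a deduction.

```python
from fractions import Fraction

def beta_update(alpha, beta, heads, tails):
    """Bayes' rule for a Beta(alpha, beta) prior on a coin's bias,
    combined with a Binomial likelihood after observing `heads` heads
    and `tails` tails: the posterior is Beta(alpha + heads, beta + tails)."""
    return alpha + heads, beta + tails

# Symmetric prior (no reason to favour either face), then 7 heads / 3 tails.
a, b = beta_update(1, 1, heads=7, tails=3)
posterior_mean = Fraction(a, a + b)   # E[bias | data] = a / (a + b)
print(a, b, posterior_mean)           # 8 4 2/3
```

The prior Beta(1, 1) is the uniform distribution, so the example also connects to the symmetry principle discussed below: absent any reason to favour a face, start flat and let the data move you.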
In other settings, when the assumptions are not formulated in a probabilistic language, or when the objective is to optimize some sort of worst-case performance, other rules could be used.
The point is that once the objective is clearly and formally specified, rationality naturally leads to the solution via pure deduction.
Regarding the first one (constructing the assumptions), the situation is less obvious. There are guiding principles though, which again rely on rationality.
One such principle is that of symmetry: if there is no reason to prefer one side of a coin to the other (or to assume that the two faces have different properties), simply consider them equally probable. A more elaborate version of this principle is the principle of maximum entropy: when choosing a prior distribution over a set of possibilities, choose, among those consistent with your prior beliefs, the one with maximum entropy.
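A small numerical illustration (a toy example of my own, not from the post): among distributions over the outcomes {1, 2, 3} whose mean is constrained to 2, the maximum-entropy choice is the uniform distribution, which a simple grid search over the constrained family confirms.

```python
from math import log

def entropy(p):
    """Shannon entropy in nats, with the convention 0 * log(0) = 0."""
    return -sum(q * log(q) for q in p if q > 0)

# Distributions over outcomes {1, 2, 3} with mean fixed at 2 form a
# one-parameter family p = (t, 1 - 2t, t); search it on a fine grid.
candidates = [(t, 1 - 2 * t, t) for t in (i / 1000 for i in range(501))]
best = max(candidates, key=entropy)
print([round(q, 3) for q in best])  # [0.333, 0.334, 0.333]
```

With a tighter constraint (say, mean 2.5) the same search would return a non-uniform maximizer; maximum entropy only reduces to symmetry when the constraints themselves are symmetric.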
Finally, there is also the principle of simplicity (Occam's razor), which suggests giving more prior weight to simple hypotheses than to complex ones.
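One common way to make "more weight to simpler hypotheses" concrete (a sketch under my own assumptions, not a prescription from the post) is to give a hypothesis whose description takes L bits a prior weight proportional to 2^(-L), in the spirit of minimum-description-length priors. The hypothesis names and lengths below are made up for illustration.

```python
# Hypothetical hypotheses with made-up description lengths in bits.
lengths = {"constant": 2, "linear": 5, "cubic": 9}

# Occam-style weighting: prior mass proportional to 2^(-description length).
weights = {h: 2.0 ** -l for h, l in lengths.items()}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}

# Shorter descriptions get exponentially more prior mass.
assert prior["constant"] > prior["linear"] > prior["cubic"]
print({h: round(p, 3) for h, p in prior.items()})
```

The exponential decay means a hypothesis pays a factor of two in prior odds for every extra bit of complexity, which is one way to read the razor quantitatively.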
However, none of these principles can be justified in a fully formal way. One can surely construct settings where applying one specific principle is the "best" thing to do, but such settings are somewhat artificial and do not provide a general justification.
Instead of proving things, I guess the best one can do is to provide recommendations. One such recommendation is "be rational", or in other words: try to take into account every piece of evidence you may have before observing the data, and do this in a way that does not lead to contradictions and does not expose you to more risk than you are willing to accept. So, in a way, inferences should take into account both your knowledge and your uncertainty, and be calibrated according to what you accept to lose if you fail.
I like the idea that performing an inference is like betting on a horse race: you try to get as much information as you can about the horses, but you know there will always be some missing piece of information. Even if gambling is somewhat irrational, when you have no choice but to do it, you'd better do it in the most rational way!
Hello,
First, I want to say that your blog is full of very interesting thoughts.
On being rational, I think you have framed the discussion well, separating the ideas that count as knowledge from those that do not. I know that digging deeper would have led you into inextricable aporia, and the clarity of your text is a good starting point for a discussion. Still, some points could be clarified. The one I am not clear on is learnability. Here is my point of view (I can't find my way out of it, help me).
You say that one has to be rational to construct assumptions, and I agree with you when you say that this step is less obvious. There are many "rules" that are more or less rational, but I think there is one important rule that should be seen as a primary principle: learnability.
There is surely a subtle relation between rationality and learnability if you link them through the concept of simplicity, but I still think rationality has to be separated from learnability. In my opinion, principles like ergodicity or "distinguishability" are not matters of rationality but of learnability. I agree that it is not rational to try to learn something that is not learnable, and as a consequence one could say that it is rational to suppose that what we try to learn is learnable. But the problem is that we do not know in advance whether something is learnable or not, and this leads me to say that learnability has to be a primal belief. We must believe in learnability at least once; someone who never believes in learnability does not even believe in the simple fact that he is alive. This primal belief is the price to pay to live. Learnability can indeed be postulated as a rational principle, but some questions remain: why do we sometimes say "this phenomenon cannot be learned"? What rational principle do we apply to reach that sort of decision? Is it better to say that every experience lets us learn something?
In fact this problem is quite philosophical and not that clear to me (maybe I have mixed everything up). I think it is harder for someone used to "closed" mathematical abstractions to face the real philosophical problem of learnability. Mathematically speaking, learnability is a sort of game we play with hypotheses (sometimes mixed up with rational hypotheses, i.e. conclusions of past theorems?). For example, when you face the problem of learning whether there is a signal (something to learn?) in all this noise, or whether what you see is only due to randomness and has nothing to do with the phenomenon, you keep believing that there might be a sufficiently structured phenomenon, or there might not be. Unfortunately, mathematics tells you that if you want to make this decision better than a random guess, you have to make strong hypotheses about the distinguishability of noise-with-signal from noise alone. In other words, if you want to learn whether there is something to learn, you have to suppose that you can learn the answer to the question "is there something to be learned or not?". When you see nothing, is it then more rational to say that there is nothing, or to say that it is not possible to learn whether there is anything? I think (this is really schematic, because the mathematician, or the logician behind him, would say something like "I keep both answers open") that the learner has to believe that there is nothing (i.e., "I learned that there is nothing"). In this first example this has no real consequences, but imagine that two learners face the same phenomenon, and the first says "with these hypotheses (which to my mind are the most rational ones), I cannot learn anything", while the second declares "I do not agree with your hypotheses; I make stronger hypotheses (which to my mind are the most rational ones) that make the problem learnable".
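The distinguishability point above can be given a minimal quantitative sketch (my own toy setup, not Robin's formulation): deciding "pure noise" versus "signal plus noise" from one Gaussian observation is only better than a coin flip to the extent that the assumed signal strength separates the two distributions, and the error of the optimal test depends entirely on that assumed separation.

```python
from math import erf, sqrt

def bayes_error(mu):
    """Minimal error probability for deciding, from a single observation,
    between noise N(0, 1) and signal-plus-noise N(mu, 1) with equal priors.
    The optimal (likelihood-ratio) test thresholds at mu / 2 and errs with
    probability Phi(-mu / 2), Phi being the standard normal CDF."""
    x = -mu / 2
    return 0.5 * (1 + erf(x / sqrt(2)))  # standard normal CDF at x

# With mu = 0 the hypotheses are indistinguishable: error 0.5, a coin flip.
for mu in (0.0, 0.5, 2.0, 6.0):
    print(mu, round(bayes_error(mu), 4))
```

Nothing in the data tells you which `mu` to assume; that choice is exactly the "strong hypothesis on distinguishability" the comment is pointing at.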
I am quite sure you want to answer something like "it depends how much stronger the hypotheses are", but how would you react if they are stronger by only a little nothing? (I think I have to look for a more precise example here.)
To conclude (I lost myself a little), I would say that inference has to be done with the weakest hypotheses (note that the word "weakest" is really problematic here), some sort of primal belief that makes consistency possible, that makes it possible to use rationality to "avoid inconsistencies". Can the weakest hypothesis be "too strong" to be accepted? How do we decide? Isn't it better to think of inference as the cheapest way of learning something?
There are no truths, only tendencies, so
be rational... but don't be too skeptical.
Posted by: robin | March 07, 2006 at 08:55 AM
Robin, I've read your comment, but I have to admit that I didn't understand exactly what you meant. Nevertheless, you raise the idea of "learnability", which is quite new to me and which is related to my question:
- under which mathematical framework should this notion of learnability be studied? Is this framework (or are these frameworks?) general enough?
In other words: Olivier, you stress that the probabilistic framework is just one setting among others, but what are those others ?
Possible answers:
* Valiant's PAC learning?
* Vapnik's risk minimization?
* Gold's algorithmic learning?
* ...
Is it that kind of framework you are thinking about?
Posted by: Pierre | March 17, 2006 at 03:07 PM
Pierre, I don't think I made myself clear, and I am sorry for my English; I will work on it... The problem of learnability has its own framework in statistical inference, and I am sure you know it (we just don't usually call it learnability). Suppose, in a parametric framework, that you want to infer the value of a parameter. One speaks of orthogonality when the distributions obtained for two different values of the parameter assign their mass to disjoint sets (this corresponds to perfect learnability), and of equivalence when each of the two distributions is absolutely continuous with respect to the other (this is perfect non-learnability: even observing the whole realization does not separate the two values with certainty). For example, in an infinite-dimensional Gaussian framework, a theorem of Kakutani tells you that two such distributions are either orthogonal or equivalent. If the unknown parameter is the mean of an L^2 process, equivalence occurs when the difference between two values of the parameter lies in the range of the square root of the covariance operator.
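A finite-dimensional sketch of this dichotomy (my own illustration with made-up shift sequences, not a statement of Kakutani's theorem itself): for product Gaussians with unit variance, the Hellinger affinity between the zero-mean product and the shifted product factorizes as exp(-sum(d_k^2) / 8). As the number of coordinates grows, it either stays bounded away from 0 (square-summable mean shifts, the equivalent case) or collapses to 0 (the orthogonal case).

```python
from math import exp

def hellinger_affinity(shifts):
    """Hellinger affinity between the product Gaussians prod_k N(0, 1)
    and prod_k N(d_k, 1): per coordinate it is exp(-d_k**2 / 8), and it
    multiplies across independent coordinates."""
    return exp(-sum(d * d for d in shifts) / 8)

n = 10_000
# Square-summable shifts (d_k = 1/k): affinity stays positive -> equivalence.
equiv = hellinger_affinity(1 / k for k in range(1, n + 1))
# Constant shifts (d_k = 0.1): affinity collapses toward 0 -> orthogonality.
orth = hellinger_affinity(0.1 for _ in range(n))
print(round(equiv, 4), orth)
```

The constant-shift case is the "perfect learnability" regime: with enough coordinates the two measures can be told apart almost surely, even though each individual coordinate is nearly uninformative.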
When you have repeated experiments you implicitly assume something like ergodicity (for instance, that the distribution is the same for each experiment), and without such an assumption you often get non-learnability... All of this is a mathematical framework in which to think about learnability.
BUT what I wanted to say is that this hides the real question of learnability behind hypotheses that cannot be known to hold in the real world. That is why I say it is hard for a mathematician to face the problem of learnability (because he summarizes it with hypotheses that are either true or not)... The reason I talk about learnability is not to put it in a framework, but to point out that it is a principle, and not a rational one, that guides us when we build frameworks and statistical methods, or more generally when we want to learn things from the real world...
Posted by: robin | March 19, 2006 at 06:08 PM