The title of this post may seem awkward, as most people working on ML consider themselves scientists.
However, ML is still a young field, and like most scientific fields in their youth, its methods and practices are still being defined and formalized. Indeed, scientific fields usually start out unorganized and, as they mature, develop proper definitions of their goals, a proper vocabulary, and proper methods for verifying their results.
My feeling is that Machine Learning is still at an early stage of its development. Many things are still lacking, and I will try to list some of them:
- Foundations
  - Agreed-upon vocabulary: many different fields contributed to the birth of the domain, and they all have different names for the same objects, for example: function, model, hypothesis, concept... It would be nice to speak the same language.
  - Statement and definition of the goals: what is the object under study?
  - Statement of the main problems: providing a list of problems in a clear and formalized way (this has already started with the open problems sessions at the COLT conference, but there is not yet a consensus on which ones matter most for advancing the field).
- Experimental methods
  - A common language for specifying algorithms and experiments so that they can be reproduced. Too often, papers describe results that no one can reproduce because some specific, ad-hoc tuning of the parameters was used. I recently found this article, which describes an attempt to provide a language for automating experiments (a minimal sketch of what such an experiment specification could look like is given after this list).
  - Define and agree upon a set of measures or criteria that are relevant for assessing the success of an algorithm (there could be many, depending on the field of application; the idea would be to use as many as possible rather than focusing on only a handful).
  - Datasets: go beyond the UCI repository, share the data, and define a common standard for this data or provide tools to convert from one format to another.
  - One idea would be to run evaluations more or less like a challenge (such as those recently proposed, for example by the PASCAL network), but with a way to share even the code of the algorithm: you could send your code to a server, which would run it on many datasets, measure many different criteria, generate a report, and add the entry to a database, thus creating a rich source of data for meta-learning (a rough sketch of such a harness is given after this list).
  - The goal would be to say which algorithm is better for which kind of problem, rather than in general, and especially to avoid the dataset selection issue.
- Knowledge management
  - Write more introductory papers or textbooks.
  - Collect and maintain relevant bibliographic references.
  - Revisit what has been done or discovered so far with a fresh eye: avoid always repeating the same things without trying to understand them or put them in a new light (e.g. "SVMs are great because they are well-founded", "Boosting is maximizing the margin", "Fisher's discriminant is valid only when the classes are Gaussian", "What if the data is not iid?", "Bayes rule is optimal").
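
To make the "common language for specifying algorithms and experiments" item a bit more concrete, here is a minimal sketch of what such a declarative experiment specification could look like. The schema and field names are hypothetical (not an existing standard); the point is simply that everything needed to re-run the experiment, including the tuning grid and the random seed, is recorded explicitly:

```python
# Minimal sketch of a declarative experiment specification (hypothetical schema,
# not an existing standard): everything needed to re-run the experiment is
# recorded explicitly, including the random seed and the parameter tuning grid.
from dataclasses import dataclass, field

@dataclass
class ExperimentSpec:
    dataset: str                                       # a named, versioned dataset
    algorithm: str                                     # e.g. "svm", "boosting"
    hyperparameters: dict = field(default_factory=dict)
    tuning_grid: dict = field(default_factory=dict)    # how parameters were tuned
    evaluation: str = "10-fold cross-validation"
    metrics: tuple = ("accuracy", "f1_macro")
    random_seed: int = 0

spec = ExperimentSpec(
    dataset="uci/ionosphere@v1",
    algorithm="svm",
    hyperparameters={"kernel": "rbf"},
    tuning_grid={"C": [0.1, 1, 10], "gamma": [0.01, 0.1]},
    random_seed=42,
)
print(spec)
```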
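
And here is a rough sketch of the "challenge server" idea: run a submitted algorithm on many datasets, measure several criteria, and accumulate one record per run in a table that could later feed meta-learning. This uses scikit-learn purely for illustration; the function names and record fields are hypothetical, not an existing service:

```python
# Rough sketch of the "challenge server" loop: run a submitted learner on
# several datasets, record several criteria, and accumulate the results in a
# table that could later feed meta-learning. scikit-learn is used only for
# illustration; all names are hypothetical, not an existing service.
from sklearn.datasets import load_iris, load_wine, load_breast_cancer
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

DATASETS = {"iris": load_iris, "wine": load_wine, "breast_cancer": load_breast_cancer}
CRITERIA = ["accuracy", "f1_macro"]   # use as many criteria as possible, not just one

def evaluate_submission(estimator, name="submitted-algorithm"):
    """Run the submitted estimator on every dataset and return one record per dataset."""
    records = []
    for ds_name, loader in DATASETS.items():
        X, y = loader(return_X_y=True)
        scores = cross_validate(estimator, X, y, cv=5, scoring=CRITERIA)
        records.append({
            "algorithm": name,
            "dataset": ds_name,
            "fit_time": scores["fit_time"].mean(),
            **{c: scores[f"test_{c}"].mean() for c in CRITERIA},
        })
    return records

# Each record could be appended to a shared results database for meta-learning.
for row in evaluate_submission(DecisionTreeClassifier(random_state=0)):
    print(row)
```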
Of course, this may take some years and a lot of effort, but hopefully, as more money is poured into Machine Learning research and applications, this will happen...