Question
Is machine learning a science or an art?
Answer
Roar Nybø, I am not entirely sure this is the same question as "Why are spreadsheet champions considered to be expert data analysts while computer science majors take a back seat?"
That question was more about the data mining/big data side of CS, which overlaps but is not a subset or superset of machine learning.
When I think of machine learning, I think in terms of the subject matter of basic AI textbooks like Sutton and Barto's "Reinforcement Learning" and Mitchell's "Machine Learning." I also think in terms of qualitative physics (QP) and my home field of control theory, both of which CS/AI people are usually not very aware of.
The sense you get from those books and fields is a map that looks something like this. Don't hold me to this too much; it's a very quick sketch to give you an idea of how messed up the situation on the ground is. You could probably get 10 versions of this diagram that are 10x better, and none of them would look similar. This is because the science is young and paradigms are still diverging rather than converging.

So machine learning is "science" if your application falls within the range of one of the codified methods (where creativity is reduced to parameter tuning) in either AI or control theory. There is a bigger set of ad hoc methods, where it is mostly art: creativity goes beyond parameter tuning to basically the invention of representations and learning models. Usually this happens when the underlying optimization/decision problem is NP-complete and you need to build a tasteful local-conditions model first, before letting loose one of your favorite methods.
Generally, inductive learning methods (Bayesian methods, SVMs, etc.) are more "sciency," while analytical learning methods (such as explanation-based learning, EBL, or case-based reasoning, CBR) are more "artsy."
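To make "creativity reduced to parameter tuning" concrete, here is a minimal sketch in pure Python (toy data and all names are my own illustration, not from any particular library): fit a ridge-regularized linear model and pick the one hyperparameter, the regularization strength lambda, by grid search over a validation set. This is the mechanical, codified workflow the "science" side of the map refers to.

```python
def fit_ridge(xs, ys, lam, steps=2000, lr=0.01):
    """Gradient descent on the ridge loss: mean (w*x + b - y)^2 + lam*w^2."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n + 2 * lam * w
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

def mse(xs, ys, w, b):
    """Mean squared error of the fitted line on a data set."""
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Toy data, roughly y = 2x + 1 with noise, split into train/validation.
train = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)]
val = [(4, 9.1), (5, 10.9)]
tx, ty = zip(*train)
vx, vy = zip(*val)

# The "creative" step is reduced to scanning a grid of lambda values and
# keeping whichever gives the lowest validation error.
best_err, best_lam = min(
    (mse(vx, vy, *fit_ridge(tx, ty, lam)), lam) for lam in [0.0, 0.01, 0.1, 1.0]
)
print("best lambda:", best_lam)
```

Once a problem fits this template, the remaining work is a routine search over knobs; the "art" begins when no such template fits and you have to invent the representation itself.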
On the application dimension, most of the methods in AI apply to complex, but closed-world and non-dynamic models.
Most of the methods in control theory apply to simple, but open-world and dynamic problems.
QP tries to bridge the gap, but usually fails. There is a famous critique paper called "Prolegomena to Any Future Qualitative Physics" by Doyle and Sacks that you should read to develop a proper sense of "taste" in this field.
Most of the interesting applications are BOTH complex and open-world, and even if you adopt a multi-disciplinary approach, you have a low chance of success.
I've tried to keep this answer at a pop-science level, but really, once you've gotten a few problems under your belt, you'll worry much less about this question.
You should be aware of a couple more perceptions that are out there.
- Optimization people tend to think AI machine learning people are basically BS artists who dress up bread-and-butter optimization in fancy clothes and call it "machine learning." I am not among these people, but you should know they exist.
- The codification of open-world learning (look up what that means in any basic AI textbook like Russell and Norvig) is at a very primitive stage. This is "I don't know that I don't know" or "unknown unknown" type learning. We haven't gotten very far beyond where von Neumann left it with his open-world, evolving "universal constructor" automata. There is a lot of potential there.