Course Preview

Machine Learning
Instructor: Tom Mitchell
Department: Machine Learning
Institution: Carnegie Mellon University
Platform: Independent
Year: 2011
Price: Free
Prerequisites: probability, linear algebra, statistics, algorithms

Students entering the class are expected to have a working knowledge of probability, linear algebra, statistics, and algorithms, though the class has been designed to allow students with a strong quantitative background to catch up and fully participate. In addition, recitation sessions will be held to review basic concepts.

Textbooks:
Tom Mitchell. Machine Learning. McGraw Hill, 1997.
Chris Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
Trevor Hastie, Robert Tibshirani, Jerome Friedman. The Elements of Statistical Learning. Springer, 2009.
Machine Learning is concerned with computer programs that automatically improve their performance through experience (e.g., programs that learn to recognize human faces, recommend music and movies, and drive autonomous robots). This course covers the theory and practical algorithms for machine learning from a variety of perspectives. We cover topics such as Bayesian networks, decision tree learning, support vector machines, statistical learning methods, unsupervised learning, and reinforcement learning.

The course covers theoretical concepts such as inductive bias, the PAC learning framework, Bayesian learning methods, margin-based learning, and Occam's Razor.

Short programming assignments include hands-on experiments with various learning algorithms, and a larger course project gives students a chance to dig into an area of their choice. This course is designed to give a graduate-level student a thorough grounding in the methodologies, technologies, mathematics, and algorithms currently needed by people who do research in machine learning.
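As a concrete illustration of "improving performance through experience," here is a minimal sketch (not taken from the course materials) of one of the simplest learning algorithms: a perceptron whose weights are updated each time it misclassifies a training example, shown learning the AND function. All names and parameters here are illustrative choices, not part of the course.

```python
# Minimal perceptron sketch: the model "improves with experience" by
# nudging its weights toward each example it gets wrong.

def train_perceptron(examples, epochs=10, lr=0.1):
    """Learn weights [bias, w1, w2] from (x1, x2, label) examples."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x1, x2, label in examples:
            pred = 1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else 0
            err = label - pred          # 0 if correct; +/-1 if wrong
            w[0] += lr * err            # update only on mistakes
            w[1] += lr * err * x1
            w[2] += lr * err * x2
    return w

def predict(w, x1, x2):
    return 1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else 0

# The AND function: output 1 only when both inputs are 1.
data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w = train_perceptron(data)
print([predict(w, x1, x2) for x1, x2, _ in data])  # → [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this mistake-driven update rule eventually classifies every training example correctly; the same idea fails on non-separable targets such as XOR, a limitation that motivates the richer hypothesis spaces (decision trees, SVMs, neural networks) covered in the course.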