Course Preview

Machine Learning and Pattern Recognition
Instructor: Yann LeCun
Department: Computer Science
Institution: New York University
Platform: Independent
Year: 2010
Price: Free
Prerequisites: Linear algebra, vector calculus, elementary statistics, probability theory

Good programming ability is a must: most assignments will consist of implementing algorithms studied in class.
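As a taste of what such an implementation might look like, here is a minimal sketch of the perceptron learning rule (one of the linear classifiers listed under the topics below), assuming NumPy. The toy data and function name are illustrative; this is not an actual assignment from the course.

    import numpy as np

    def train_perceptron(X, y, epochs=10):
        """Rosenblatt's perceptron. X: (n, d) inputs; y: labels in {-1, +1}."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                # Update only on misclassified points, i.e. when
                # yi * (w . xi + b) is not strictly positive.
                if yi * (np.dot(w, xi) + b) <= 0:
                    w += yi * xi
                    b += yi
        return w, b

    # Toy usage: a linearly separable AND-style problem.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([-1, -1, -1, 1])
    w, b = train_perceptron(X, y)
    print(np.sign(X @ w + b))  # all four points classified correctly: [-1. -1. -1.  1.]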

Textbooks:
Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
Richard O. Duda, Peter E. Hart, and David G. Stork. Pattern Classification. Wiley-Interscience, 2nd edition, 2000.
Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer-Verlag, 2001.
Ethem Alpaydin. Introduction to Machine Learning. MIT Press, 2004.
Christopher M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1996.
Simon Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall, 2nd edition, 1999.
Tom Mitchell. Machine Learning. McGraw Hill, 1997.
Description:
The course covers a wide variety of topics in machine learning, pattern recognition, statistical modeling, and neural computation. It covers the mathematical methods and theoretical aspects, but will primarily focus on algorithmic and practical issues. Machine Learning and Pattern Recognition methods are at the core of many recent advances in "intelligent computing". Current applications include machine perception (vision, audition, speech recognition), control (process control, robotics), data mining, time-series prediction (e.g. in finance), natural language processing, text mining and text classification, bio-informatics, neural modeling, computational models of biological processes, and many other areas.

Who Can Take This Course?
This course can be useful to all students who want to use or develop statistical modeling methods. This includes students in CS (AI, Vision, Graphics), Math (System Modeling), Neuroscience (Computational Neuroscience, Brain Imaging), Finance (Financial Modeling and Prediction), Psychology (Vision), Linguistics, Biology (Computational Biology, Genomics, Bio-informatics), and Medicine (Bio-Statistics, Epidemiology). The only formal prerequisites are familiarity with computer programming and linear algebra, but the course relies heavily on mathematical tools such as probability and statistics, multivariate calculus, and function optimization. The basic mathematical concepts will be introduced when needed, but students will be expected to assimilate a non-trivial amount of mathematical material in a fairly short time. Although this is a graduate-level course, highly motivated undergraduates at the senior level with a good math background can take this class. A few juniors have even taken it successfully in the past.

Topics Treated:
1. The basics of inductive inference, learning, and generalization.
2. Linear classifiers: perceptron, LMS, logistic regression.
3. Non-linear classifiers with linear parameterizations: basis-function methods, boosting, support vector machines.
4. Multilayer neural networks, backpropagation.
5. Heterogeneous learning systems.
6. Graph-based models for sequences: hidden Markov models, finite-state transducers, recurrent networks.
7. Unsupervised learning: density estimation, clustering, and dimensionality reduction methods (a brief sketch of one clustering method follows this list).
8. Introduction to graphical models and factor graphs.
9. Approximate inference, sampling.
10. Optimization methods in learning: gradient-based methods, second-order methods, Expectation-Maximization.
11. Objective functions: maximum likelihood, maximum a posteriori, discriminative criteria, maximum margin.
12. The bias-variance dilemma, regularization, model selection.
13. Applications in vision, speech, language, forecasting, and biological modeling.

By the end of the course, students will be able not only to understand and use the major machine learning methods, but also to implement, apply, and analyze them.
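As an illustration of topic 7, here is a minimal sketch of k-means clustering (Lloyd's algorithm), again assuming NumPy. The function name, parameters, and toy data are hypothetical, not material from the course.

    import numpy as np

    def kmeans(X, k, iters=20, seed=0):
        """Lloyd's algorithm: alternate assignment and mean-update steps."""
        rng = np.random.default_rng(seed)
        # Initialize centers at k distinct data points chosen at random.
        centers = X[rng.choice(len(X), size=k, replace=False)].copy()
        for _ in range(iters):
            # Assignment step: each point joins its nearest center
            # (squared Euclidean distance).
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            # Update step: each center moves to the mean of its points;
            # an empty cluster keeps its previous center.
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return centers, labels

    # Toy usage: two well-separated blobs should get two distinct labels.
    X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                  [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
    centers, labels = kmeans(X, k=2)
    print(labels)  # e.g. [0 0 0 1 1 1] (cluster numbering depends on the seed)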