Course Preview

Deep Learning
Instructor: Russ Salakhutdinov
Department: Machine Learning
Institution: Carnegie Mellon University
Platform: Independent
Year: 2017
Price: Free
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
Chris Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
Kevin P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer, 2009.
David MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
This course covers some of the theory and methodology of deep learning. The preliminary set of topics to be covered includes:

1. Introduction
   (1) Background: linear algebra, distributions, rules of probability.
   (2) Regression, classification.
   (3) Feedforward neural nets and the backpropagation algorithm; introduction to popular optimization and regularization techniques.
   (4) Convolutional models with applications to computer vision.

2. Deep Learning Essentials
   (1) Graphical models: directed and undirected.
   (2) Linear factor models: PPCA, FA, ICA, sparse coding and its extensions.
   (3) Autoencoders and their extensions; energy-based models, RBMs.
   (4) Monte Carlo methods.
   (5) Learning and inference: Contrastive Divergence (CD), stochastic maximum likelihood estimation, score matching, ratio matching, pseudo-likelihood estimation, noise-contrastive estimation.
   (6) Annealed importance sampling, partition function estimation.
   (7) Deep generative models: deep belief networks, deep Boltzmann machines, Helmholtz machines, variational autoencoders, importance-weighted autoencoders, the wake-sleep algorithm.
   (8) Generative adversarial networks (GANs), generative moment matching nets, the Neural Autoregressive Density Estimator (NADE).

3. Additional Topics
   (1) More on regularization and optimization in deep nets.
   (2) Sequence modeling: recurrent neural networks, sequence-to-sequence architectures, attention models.
   (3) Deep reinforcement learning.
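As a taste of the introductory material, the feedforward-net-plus-backpropagation topic can be sketched in a few lines of NumPy. This is a minimal illustration only, not course code: the architecture (one tanh hidden layer), the toy regression data, and all hyperparameters are arbitrary choices for the example.

```python
import numpy as np

# Toy regression data: 100 examples, 3 features (arbitrary for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = np.sin(X.sum(axis=1, keepdims=True))

# Two-layer net: affine -> tanh -> affine.
W1 = rng.normal(scale=0.5, size=(3, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(500):
    # Forward pass with squared-error loss.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    loss = np.mean((pred - y) ** 2)

    # Backward pass: apply the chain rule from the loss to each parameter.
    dpred = 2 * (pred - y) / len(X)
    dW2 = h.T @ dpred; db2 = dpred.sum(axis=0)
    dh = dpred @ W2.T
    dz = dh * (1 - h ** 2)          # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dz; db1 = dz.sum(axis=0)

    # Plain gradient-descent update (the course covers fancier optimizers).
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final training loss: {loss:.4f}")
```

The same forward/backward pattern generalizes to the convolutional and recurrent architectures listed later in the syllabus; only the layer computations and their local derivatives change.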