Course Preview

Natural Language Processing with Deep Learning
Instructor: Richard Socher
Department: Computer Science
Institution: Stanford University
Platform: Independent
Year: 2018
Price: Free
Prerequisites: Calculus, Linear Algebra, Probability and Statistics, Machine Learning

Proficiency in Python: All class assignments will be in Python (using numpy and tensorflow). A tutorial is available for those who are less familiar with Python. If you have a lot of programming experience, but in a different language (e.g. C/C++/Matlab/Javascript), you will probably be fine.
College Calculus, Linear Algebra (e.g. MATH 51, CME 100): You should be comfortable taking derivatives and understanding matrix-vector operations and notation.
Basic Probability and Statistics (e.g. CS 109 or another statistics course): You should know the basics of probability, Gaussian distributions, means, and standard deviations.
Foundations of Machine Learning: We will be formulating cost functions, taking derivatives, and performing optimization with gradient descent (see the short sketch below). Either CS 221 or CS 229 covers this background. Some optimization tricks will be more intuitive with some knowledge of convex optimization.
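
To illustrate the machine-learning background assumed (cost functions, gradients, gradient descent), here is a minimal sketch in Python with numpy. It is not course material; the toy data, learning rate, and variable names are illustrative assumptions only.

    import numpy as np

    # Toy data: y is roughly 3*x plus a little noise.
    rng = np.random.default_rng(0)
    x = rng.normal(size=100)
    y = 3.0 * x + 0.1 * rng.normal(size=100)

    w, b = 0.0, 0.0  # parameters of the linear model y_hat = w*x + b
    lr = 0.1         # learning rate

    for step in range(200):
        pred = w * x + b
        err = pred - y
        # Gradients of the mean squared error cost with respect to w and b.
        grad_w = 2.0 * np.mean(err * x)
        grad_b = 2.0 * np.mean(err)
        w -= lr * grad_w
        b -= lr * grad_b

    print(f"learned w = {w:.2f}, b = {b:.2f}")  # w should end up close to 3.0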

Textbook:
Dan Jurafsky and James H. Martin. Speech and Language Processing (3rd ed. draft).
Yoav Goldberg. A Primer on Neural Network Models for Natural Language Processing.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. 2016.
Description:
This is course CS224n from Stanford, taught in 2018. Natural language processing (NLP) is one of the most important technologies of the information age. Understanding complex language utterances is also a crucial part of artificial intelligence. Applications of NLP are everywhere because people communicate almost everything in language: web search, advertisement, emails, customer service, language translation, radiology reports, etc. There is a large variety of underlying tasks and machine learning models behind NLP applications. Recently, deep learning approaches have obtained very high performance across many different NLP tasks. These models can often be trained with a single end-to-end model and do not require traditional, task-specific feature engineering.

In this winter quarter course, students will learn to implement, train, debug, visualize, and invent their own neural network models. The course provides a thorough introduction to cutting-edge research in deep learning applied to NLP. On the model side, we will cover word vector representations, window-based neural networks, recurrent neural networks, long short-term memory (LSTM) models, recursive neural networks, and convolutional neural networks, as well as some recent models involving a memory component. Through lectures and programming assignments, students will learn the necessary engineering tricks for making neural networks work on practical problems. This course is a merger of Stanford's previous CS224n course (Natural Language Processing) and CS224d (Deep Learning for Natural Language Processing).
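
To give a concrete flavor of the first topic listed above, word vector representations, here is a small sketch in Python with numpy that compares words by cosine similarity. The toy vectors are made-up values for illustration only; the course itself works with embeddings learned from data (e.g. word2vec or GloVe).

    import numpy as np

    def cosine_similarity(a, b):
        # Cosine similarity: dot product divided by the product of the vector norms.
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Toy 4-dimensional word vectors (hand-made values, not learned embeddings).
    vectors = {
        "king":  np.array([0.8, 0.6, 0.1, 0.0]),
        "queen": np.array([0.7, 0.7, 0.1, 0.1]),
        "apple": np.array([0.0, 0.1, 0.9, 0.5]),
    }

    print(cosine_similarity(vectors["king"], vectors["queen"]))  # high: related words
    print(cosine_similarity(vectors["king"], vectors["apple"]))  # lower: unrelated words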