
9—Regularization, Learning Rates and NLP

Today we continue building our logistic regression from scratch, and add its most important missing feature: regularization. We’ll learn the difference between L1 and L2 regularization, and how each can be implemented.
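As a rough illustration, here is a minimal NumPy sketch (not the lesson’s actual notebook code) of where the two penalties enter a from-scratch logistic regression; the `reg` strength and the `penalty` argument are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, X, y, reg=1e-3, penalty="l2"):
    """Binary cross-entropy plus an L1 or L2 weight penalty (illustrative sketch)."""
    p = sigmoid(X @ w)
    eps = 1e-12                       # avoid log(0)
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = X.T @ (p - y) / len(y)     # gradient of the cross-entropy term
    if penalty == "l2":
        # L2 adds the sum of squared weights; its gradient is linear in w,
        # so every weight is shrunk smoothly toward zero ("weight decay").
        loss += reg * np.sum(w ** 2)
        grad += 2 * reg * w
    elif penalty == "l1":
        # L1 adds the sum of absolute weights; its (sub)gradient is sign(w),
        # which pushes small weights exactly to zero, giving sparse models.
        loss += reg * np.sum(np.abs(w))
        grad += reg * np.sign(w)
    return loss, grad

# Tiny usage check on random data (shapes and steps are arbitrary choices)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (rng.random(100) > 0.5).astype(float)
w = np.zeros(5)
for _ in range(200):
    loss, grad = loss_and_grad(w, X, y, penalty="l1")
    w -= 0.1 * grad
```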

We also talk more about how learning rates work, and how to pick one for your problem.
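A quick way to see why the choice matters is to run a few gradient-descent steps at several candidate learning rates and watch the loss. The toy one-dimensional quadratic below is a stand-in for illustration, not anything from the lesson:

```python
def train(lr, steps=20):
    w = 5.0                  # start far from the minimum at w = 0
    for _ in range(steps):
        grad = 2 * w         # gradient of loss(w) = w**2
        w -= lr * grad       # the learning rate scales every update
    return w ** 2            # final loss

for lr in (0.001, 0.01, 0.1, 1.1):
    print(f"lr={lr:<6} final loss={train(lr):.6f}")
# lr=0.001 barely moves; lr=0.1 converges quickly; lr=1.1 diverges, since
# each step multiplies w by (1 - 2 * 1.1) = -1.2, the classic sign that
# the learning rate is too high for the problem.
```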

In the second half of the lesson, we start our discussion of natural language processing (NLP). We’ll build a “bag of words” representation of the popular IMDb movie review dataset, using sparse matrices to ensure good performance and reasonable memory use.
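Here is a hedged sketch of that representation using scikit-learn’s `CountVectorizer`, one common way to build a bag of words; the two toy reviews stand in for the actual IMDb files. With tens of thousands of real documents, almost every entry in the term-document matrix is zero, which is why storing only the non-zeros in a sparse matrix keeps memory use reasonable.

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["This movie was absolutely wonderful",
        "This movie was absolutely terrible"]

vec = CountVectorizer()
term_doc = vec.fit_transform(docs)   # one row per document, one column per word

print(type(term_doc))                # a scipy.sparse CSR matrix, not a dense array
print(vec.get_feature_names_out())   # the vocabulary learned from the documents
print(term_doc.toarray())            # densify only for tiny examples like this one
```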

We’ll build a number of models from this, including naive Bayes and logistic regression, and will improve these models by adding n-gram features.
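A minimal end-to-end sketch of that idea, again with scikit-learn stand-ins rather than the lesson’s own code: `MultinomialNB` for naive Bayes, `LogisticRegression`, and `ngram_range=(1, 3)` adding bigram and trigram features on top of single words. The toy documents and labels (1 = positive, 0 = negative) are illustrative only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

docs = ["a great and moving film", "a dull and lifeless film",
        "moving performances throughout", "dull script, lifeless acting"]
labels = [1, 0, 1, 0]

# N-grams give phrases like "not good" their own features, capturing context
# that single-word counts miss; this is why they often help on sentiment tasks.
vec = CountVectorizer(ngram_range=(1, 3))
X = vec.fit_transform(docs)

for model in (MultinomialNB(), LogisticRegression(max_iter=1000)):
    model.fit(X, labels)
    print(type(model).__name__, model.predict(X))
```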