More information about this lesson is available at the lesson wiki, and Notes are also available.
2—Random Forest Deep Dive
Today we start by learning about metrics, loss functions, and (perhaps the most important machine learning concept) overfitting. We discuss using validation and test sets to help us measure overfitting.
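As a rough illustration of the idea (not the lesson's actual notebook code; the data here is synthetic and the sizes are placeholders), the sketch below holds out a validation set and compares training and validation error, which is the basic way we spot overfitting:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

def rmse(y_true, y_pred):
    # Root mean squared error, one example of the kind of metric discussed here
    return np.sqrt(mean_squared_error(y_true, y_pred))

# Synthetic stand-in data; in practice this would be a real tabular dataset
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 10))
y = X[:, 0] * 3 + X[:, 1] ** 2 + rng.normal(scale=0.5, size=5000)

# Hold out the last 1,000 rows as a validation set
# (for time-ordered data, the validation set should be the most recent rows)
n_train = len(y) - 1000
X_train, X_valid = X[:n_train], X[n_train:]
y_train, y_valid = y[:n_train], y[n_train:]

m = RandomForestRegressor(n_jobs=-1, random_state=42)
m.fit(X_train, y_train)

# A large gap between training and validation error is the signature of overfitting
print("train RMSE:", rmse(y_train, m.predict(X_train)))
print("valid RMSE:", rmse(y_valid, m.predict(X_valid)))
```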
Then we’ll learn how random forests work: first by looking at the individual trees that make them up, and then by learning about “bagging”, the simple trick that lets a random forest be much more accurate than any individual tree.
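Here is a minimal sketch of bagging (again with synthetic data, and not the lesson's exact code): we train several decision trees, each on a different bootstrap sample of the training set, and average their predictions. The averaged prediction is typically much better than any single tree's.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy data; the same idea applies to any tabular regression problem
X = rng.normal(size=(2000, 10))
y = X[:, 0] * 3 + X[:, 1] ** 2 + rng.normal(scale=0.5, size=2000)
X_train, X_valid, y_train, y_valid = X[:1500], X[1500:], y[:1500], y[1500:]

n_trees = 20
preds = []
for _ in range(n_trees):
    # Bagging: each tree sees a different bootstrap sample (drawn with replacement)
    idx = rng.integers(0, len(X_train), len(X_train))
    tree = DecisionTreeRegressor().fit(X_train[idx], y_train[idx])
    preds.append(tree.predict(X_valid))
preds = np.stack(preds)

def rmse(a, b):
    return np.sqrt(((a - b) ** 2).mean())

# Averaging the trees' predictions beats any individual tree
print("single tree RMSE:", rmse(preds[0], y_valid))
print("bagged RMSE:     ", rmse(preds.mean(axis=0), y_valid))
```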
Next up, we look at some helpful tricks that random forests support to make them faster and more accurate.
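The settings below show one way to apply those kinds of tricks with scikit-learn's RandomForestRegressor; the specific values are illustrative, not recommendations from the lesson. Each tree trains on a subsample of the rows for speed, leaf size and per-split feature sampling regularize the trees, and the out-of-bag score gives a validation-style estimate without a separate holdout set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Same kind of toy data as the earlier sketches; any tabular dataset works the same way
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y = X[:, 0] * 3 + X[:, 1] ** 2 + rng.normal(scale=0.5, size=5000)

m = RandomForestRegressor(
    n_estimators=40,       # more trees generally means better accuracy, at the cost of speed
    max_samples=0.5,       # each tree trains on only half the rows, so training is faster
    min_samples_leaf=3,    # stop splitting small leaves, which regularizes each tree
    max_features=0.5,      # each split considers a random half of the columns
    oob_score=True,        # score each row using only the trees that never saw it
    n_jobs=-1,             # train trees in parallel on all CPU cores
    random_state=42,
)
m.fit(X, y)
print("out-of-bag R^2:", m.oob_score_)
```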