2—Random Forest Deep Dive
Today we start by learning about metrics, loss functions, and (perhaps the most important machine learning concept) overfitting. We discuss using validation and test sets to help us measure overfitting.
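To make the idea concrete, here is a minimal sketch (not from the lecture, just an illustration) of why a validation set matters. It uses a 1-nearest-neighbour model, which memorises its training data perfectly, so the training metric alone looks flawless while the held-out validation metric reveals the overfitting:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy dataset: a noisy sine curve
x = rng.uniform(0, 3, 200)
y = np.sin(x) + rng.normal(0, 0.3, 200)

# Hold back a validation set the model never sees during training
n_valid = 50
x_train, y_train = x[:-n_valid], y[:-n_valid]
x_valid, y_valid = x[-n_valid:], y[-n_valid:]

def rmse(pred, actual):
    # Root mean squared error: a common regression metric
    return np.sqrt(np.mean((pred - actual) ** 2))

def predict_1nn(x_query, x_ref, y_ref):
    # 1-nearest-neighbour: copy the label of the closest training point
    idx = np.abs(x_query[:, None] - x_ref[None, :]).argmin(axis=1)
    return y_ref[idx]

train_rmse = rmse(predict_1nn(x_train, x_train, y_train), y_train)
valid_rmse = rmse(predict_1nn(x_valid, x_train, y_train), y_valid)

# The training error is exactly zero (each point is its own nearest
# neighbour), but the validation error exposes the overfitting
print(train_rmse, valid_rmse)
```

The same logic applies to any model: a metric computed on the training data tells you how well the model memorised, while the validation metric tells you how well it generalises.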
Then we’ll learn how random forests work: first by looking at the individual decision trees that make them up, then by learning about “bagging”, the simple trick that lets a random forest be much more accurate than any individual tree.
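The core of bagging can be sketched in a few lines. This is a toy illustration, not the lecture's code: each "tree" here is just a depth-1 regression stump, trained on a bootstrap sample (rows drawn with replacement), and the ensemble prediction is the average over all stumps. Averaging many high-variance models trained on different resamples gives a lower error than the typical individual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy sine curve, plus a held-out test set
x = rng.uniform(0, 3, 300)
y = np.sin(x) + rng.normal(0, 0.3, 300)
x_test = rng.uniform(0, 3, 200)
y_test = np.sin(x_test) + rng.normal(0, 0.3, 200)

def fit_stump(x, y):
    # Depth-1 regression tree: pick the split threshold that minimises
    # the sum of squared errors, predict the mean on each side
    best = None
    for t in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left, right = y[x <= t], y[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lo, hi = best
    return lambda q: np.where(q <= t, lo, hi)

def rmse(pred, actual):
    return np.sqrt(np.mean((pred - actual) ** 2))

# Bagging: train each stump on a different bootstrap sample,
# then average the predictions of the whole ensemble
stumps = []
for _ in range(50):
    idx = rng.integers(0, len(x), len(x))  # sample rows with replacement
    stumps.append(fit_stump(x[idx], y[idx]))

individual = np.array([rmse(s(x_test), y_test) for s in stumps])
ensemble = rmse(np.mean([s(x_test) for s in stumps], axis=0), y_test)
print(ensemble, individual.mean())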
Next up, we look at some helpful tricks that random forest implementations provide to make them faster and more accurate.
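As a hedged example of the kind of tricks meant here, scikit-learn's `RandomForestRegressor` exposes several of them directly (the dataset below is synthetic, chosen only for illustration): `n_jobs=-1` fits trees in parallel across CPU cores for speed, `min_samples_leaf` and `max_features` regularise each tree for accuracy, and `oob_score=True` reuses the rows each tree did not see in its bootstrap sample as a free validation set:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic regression data, just for illustration
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

rf = RandomForestRegressor(
    n_estimators=100,      # number of trees in the ensemble
    min_samples_leaf=3,    # stop splitting before leaves get tiny
    max_features=0.5,      # consider half the columns at each split
    oob_score=True,        # score on out-of-bag rows
    n_jobs=-1,             # fit trees in parallel
    random_state=0,
)
rf.fit(X, y)

# R^2 estimated from out-of-bag predictions, no separate validation set needed
print(rf.oob_score_)
```

The out-of-bag score is particularly handy on small datasets, where carving out a separate validation set would waste scarce training rows.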