Learn

You now have the ability to make a random forest using your own decision trees. However, scikit-learn has a RandomForestClassifier class that will do all of this work for you! RandomForestClassifier is in the sklearn.ensemble module.

RandomForestClassifier works almost identically to DecisionTreeClassifier — the .fit(), .predict(), and .score() methods work in the exact same way.

When creating a RandomForestClassifier, you can choose how many trees to include in the random forest by using the n_estimators parameter like this:

from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators=100)

We now have a very powerful machine learning model that is fairly resistant to overfitting!
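To make the parallel with DecisionTreeClassifier concrete, here is a minimal sketch that trains both models on the same data. The dataset (scikit-learn's built-in iris data), the train/test split, and the variable names are our own illustrative choices, not part of the exercise:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Any labeled dataset works here; iris is just a convenient built-in example.
points, labels = load_iris(return_X_y=True)
training_points, testing_points, training_labels, testing_labels = train_test_split(
    points, labels, random_state=0)

# Both classifiers expose the same .fit(), .predict(), and .score() interface.
tree = DecisionTreeClassifier(random_state=0)
tree.fit(training_points, training_labels)
print(tree.score(testing_points, testing_labels))

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(training_points, training_labels)
print(forest.score(testing_points, testing_labels))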

Instructions

1. Create a RandomForestClassifier named classifier. When you create it, pass two parameters to the constructor:

  • n_estimators should be 2000. Our forest will be pretty big!
  • random_state should be 0. There’s an element of randomness when creating random forests thanks to bagging. Setting the random_state to 0 will help us test your code.
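For reference, a sketch of what this checkpoint could look like (the variable name classifier comes from the instructions; the rest is the same constructor call shown earlier):

from sklearn.ensemble import RandomForestClassifier

# 2000 trees, with a fixed random seed so the bagging is reproducible.
classifier = RandomForestClassifier(n_estimators=2000, random_state=0)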
2. Train the forest using the training data by calling the .fit() method. .fit() takes two parameters — training_points and training_labels.

3. Test the random forest on the testing set and print the results. How accurate was the model?
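Putting the three checkpoints together, a possible solution sketch looks like this. It assumes training_points, training_labels, testing_points, and testing_labels already exist (the exercise supplies them; a split like the one in the earlier sketch would also work):

from sklearn.ensemble import RandomForestClassifier

# Checkpoint 1: a forest of 2000 trees, seeded for reproducibility.
classifier = RandomForestClassifier(n_estimators=2000, random_state=0)

# Checkpoint 2: train the forest on the training data.
classifier.fit(training_points, training_labels)

# Checkpoint 3: measure accuracy on the held-out testing data and print it.
print(classifier.score(testing_points, testing_labels))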
