Machine Learning: Random Forest in Python


One of the most commonly used machine learning algorithms is decision tree learning. Compared to other machine learning algorithms, it provides a simple and versatile tool for building both classification and regression models. For a given set of features, the algorithm constructs a “tree”: each internal node splits the data on one feature, and each leaf holds a predicted value of the target variable. The tree could look something like this:

This tree uses two features (Alcohol and OD280/OD315) to predict the class of each observation (0, 1, or 2). As you can see, the algorithm is simple and easy to visualize, yet can build a powerful predictive model. It scales well to large datasets, requires little data cleaning before training, and is transparent to the data scientist (you can peek inside and interpret the inner workings). Despite these advantages, decision trees struggle to produce models with low variance: they tend to overfit the data (low bias, high variance). Fortunately, data scientists have developed ensemble techniques to get around this. In this article, I demonstrate how to create a decision tree using Python, and also discuss an extension to decision tree learning known as random decision forests.

The general technique of random forest predictors was first introduced in the mid-1990s by Tin Kam Ho in a 1995 paper. The algorithm involves growing an ensemble of decision trees (hence the name, forest) on a dataset, and using the mode (for classification) or mean (for regression) of all of the trees to predict the target variable. In the paper, Ho implements a random selection of a subset of the features for each tree. In other words, the algorithm picks only some of the features (similar to how I picked Alcohol and OD280/OD315 out of 13 total features), creates a tree using only these, and obtains a predicted class for each observation. This is repeated with a different, randomly picked set of features n times. After n repetitions, each observation has n values of our target variable (Class). These values are then either averaged (if numerical) or the mode is taken (if categorical).
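The mechanics described above can be sketched in a few lines of NumPy. The subset size, the number of trees, and the toy predictions below are all illustrative assumptions, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 13   # as in the wine dataset
n_trees = 5       # "n" in the text; an arbitrary choice here

# For each tree, pick a random subset of the features (Ho's random
# subspace method). Each tree is then grown using only those columns.
subsets = [rng.choice(n_features, size=4, replace=False)
           for _ in range(n_trees)]

# Suppose each tree predicted a class for one observation; the ensemble
# prediction for classification is the mode of those n values.
predictions = np.array([0, 2, 0, 1, 0])
classes, counts = np.unique(predictions, return_counts=True)
ensemble_prediction = classes[np.argmax(counts)]
print(ensemble_prediction)  # the majority class, here 0
```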

Several years later, Leo Breiman improved on this algorithm. He published a paper that combines Ho’s approach with the concept of bootstrap aggregation, or “bagging,” producing an algorithm with much better accuracy than a single decision tree. Bootstrap aggregation resembles Ho’s random feature selection, except that it randomizes the observations rather than the features. For a training set of size m (number of observations), bagging creates k additional training sets, each of size m, by sampling uniformly and with replacement from the original training set. Sampling with replacement means that some of the observations within each additional set will be duplicates. The predictions of the k models are then averaged (if numerical) or the mode is taken (if categorical). The combination of Ho’s and Breiman’s approaches is what is known as the Random Forest learning algorithm.
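The bootstrap step can be sketched directly with NumPy. The toy sizes (m = 10 observations, 4 features, k = 3 sets) are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: m observations, 4 features.
m = 10
X = rng.normal(size=(m, 4))

# Bagging: draw k additional training sets, each of size m,
# by sampling row indices uniformly *with replacement*.
k = 3
bootstrap_sets = []
for _ in range(k):
    idx = rng.integers(0, m, size=m)  # indices may repeat
    bootstrap_sets.append(X[idx])

# Each bootstrap sample has the same shape as the original set,
# but typically contains duplicate rows (and omits others).
for sample in bootstrap_sets:
    assert sample.shape == X.shape
```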

Creating a decision tree

Before we create a decision tree, we need data. The data I’ll use to demonstrate the algorithm is from the UCI Machine Learning Repository. We will use the wine dataset. It contains 178 observations of wine grown in the same region in Italy. Each observation is from one of three cultivars (the ‘Class’ feature), with 13 constituent features that are the result of a chemical analysis.
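A minimal sketch of building this tree with scikit-learn follows. The 30% test split and the `random_state` values are my assumptions (the article does not state them), so the exact accuracy may differ from the score quoted later:

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load the UCI wine dataset: 178 observations, 13 features, 3 classes.
X, y = load_wine(return_X_y=True)

# Hold out a test set (the split fraction and seed are assumptions).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Limit the tree to two levels, as in the article.
tree = DecisionTreeClassifier(max_depth=2, random_state=42)
tree.fit(X_train, y_train)

print(tree.score(X_test, y_test))  # accuracy on the held-out set
```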

The tree created with the training set looks like this:


The bottom row represents the “leaves” of the decision tree. The Gini index is a measure of class impurity: the closer to 0, the purer the leaf. For example, the first leaf correctly predicts the class of 44 out of 45 samples that satisfy the inequalities of the first and second branches (Color intensity and Proline). We then run our test set through the decision tree. In the example, I limit the tree depth to two levels for simplicity, but even with this limitation, the model predicts the correct class ~87% of the time on the test set.

0.8703703703703703
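As a side note, the Gini impurity reported in each leaf is easy to check by hand: it is one minus the sum of squared class proportions. Using the first leaf mentioned above (45 samples, 44 of one class):

```python
# Gini impurity of a leaf: 1 - sum of squared class proportions.
# Class counts from the article's first leaf: 44 of one class, 1 of another.
counts = [44, 1]
total = sum(counts)
gini = 1 - sum((c / total) ** 2 for c in counts)
print(round(gini, 3))  # prints 0.043 — close to 0, i.e. a nearly pure leaf
```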

This result is not entirely terrible, but we can improve it with the techniques proposed by Ho and Breiman.

Feature & tree bagging

One of the advantages of using the scikit-learn package is the ease with which we can implement feature and tree bagging. The package provides a built-in estimator that lets us specify the number of additional training sets to create (i.e., the number of trees), as well as the number of random features to sample. To do this, run:
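A sketch using scikit-learn's `RandomForestClassifier` follows. As before, the 30% split and the `random_state` are my assumptions, so the exact score may differ from the one quoted later:

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# 100 bootstrapped trees (tree bagging), each limited to sqrt(13)
# candidate features per split (feature bagging), depth capped at 2.
forest = RandomForestClassifier(
    n_estimators=100,
    max_features="sqrt",
    max_depth=2,
    random_state=42,
)
forest.fit(X_train, y_train)

print(forest.score(X_test, y_test))  # accuracy on the held-out set
```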

I specify 100 additional training sets (which equates to 100 decision trees; this is also the default value), and cap the number of random features considered at each split at the square root of the total number of features. You can refer to Breiman’s paper for the reasoning behind the square root; it is a common heuristic that often works well for reducing variance, though it is not guaranteed to be the best choice in every case. I also keep the max depth at 2 for consistency. We can now run the test set through the trained model.

0.9629629629629629

It is clear that these techniques drastically improve the test score, increasing it by nearly 10 percentage points. If you would like to improve the model even further, try increasing the max depth to a value higher than 2. Alternatively, you can reduce the number of estimators and observe how the test score changes.


Dante is a former physicist who left the laboratory to join the ranks of scientists and engineers turning to techniques within data science to solve today’s problems. He is currently pursuing a Master's degree in Data Science for Complex Economic Systems. He lives in Turin, Italy.

