Jackknife variance estimates for random forest

In statistics, jackknife variance estimates for random forests are a way to estimate the variance of a random forest's predictions while eliminating the Monte Carlo effects introduced by the bootstrap resampling.

Jackknife variance estimates

The sampling variance of a bagged learner at a test point [math]\displaystyle{ x }[/math] is:

[math]\displaystyle{ V(x) = Var[\hat{\theta}^{\infty}(x)], }[/math]

where [math]\displaystyle{ \hat{\theta}^{\infty}(x) }[/math] denotes the ideal bagged estimate at [math]\displaystyle{ x }[/math], i.e., the prediction obtained in the limit of infinitely many bootstrap replicates.
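
For concreteness, such a bagged prediction can be obtained by averaging the per-tree predictions of a fitted random forest. The following is a minimal sketch using scikit-learn; the dataset and parameter choices are illustrative only:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Illustrative data and forest; any bagged learner would do.
X, y = make_regression(n_samples=200, n_features=10, noise=1.0, random_state=0)
rf = RandomForestRegressor(n_estimators=500, bootstrap=True, random_state=0).fit(X, y)

x_test = X[:1]                                   # a single test point x
tree_preds = np.array([t.predict(x_test)[0] for t in rf.estimators_])
theta_hat_B = tree_preds.mean()                  # finite-B approximation of theta_hat_infinity(x)
</syntaxhighlight>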

Jackknife estimates offer a way to eliminate the bootstrap effects. The classical jackknife variance estimator is defined as:[1]

[math]\displaystyle{ \hat{V}_j = \frac{n-1}{n}\sum_{i=1}^{n}(\hat\theta_{(-i)} - \overline\theta)^2, }[/math]

where [math]\displaystyle{ \hat\theta_{(-i)} }[/math] is the estimate computed with the [math]\displaystyle{ i }[/math]-th observation removed and [math]\displaystyle{ \overline\theta }[/math] is the mean of the [math]\displaystyle{ n }[/math] leave-one-out estimates.
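
A minimal sketch of this computation, assuming the [math]\displaystyle{ n }[/math] leave-one-out estimates [math]\displaystyle{ \hat\theta_{(-i)} }[/math] have already been obtained (function and argument names are hypothetical):

<syntaxhighlight lang="python">
import numpy as np

def jackknife_variance(theta_loo):
    """Classical jackknife variance: (n-1)/n * sum_i (theta_(-i) - theta_bar)^2."""
    theta_loo = np.asarray(theta_loo, dtype=float)
    n = theta_loo.shape[0]
    theta_bar = theta_loo.mean()
    return (n - 1) / n * np.sum((theta_loo - theta_bar) ** 2)
</syntaxhighlight>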

When a random forest is used to fit the model, the corresponding jackknife estimate of the variance of its prediction at a point [math]\displaystyle{ x }[/math] is defined as:

[math]\displaystyle{ \hat{V}_j = \frac{n-1}{n}\sum_{i=1}^{n}(\overline t^{\star}_{(-i)}(x) - \overline t^{\star}(x))^2 }[/math]

Here, [math]\displaystyle{ \overline t^{\star}(x) }[/math] denotes the average prediction at [math]\displaystyle{ x }[/math] over all trained trees in the forest, and [math]\displaystyle{ \overline t^{\star}_{(-i)}(x) }[/math] denotes the average prediction over the trees whose bootstrap samples do not contain the [math]\displaystyle{ i }[/math]-th observation.
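
A sketch of how this estimate could be computed at a test point, assuming access to each tree's prediction [math]\displaystyle{ t_b^{\star}(x) }[/math] and a record of which training observations entered each tree's bootstrap sample (argument names are hypothetical):

<syntaxhighlight lang="python">
import numpy as np

def jackknife_after_bootstrap(tree_preds, in_bag):
    """V_J(x) = (n-1)/n * sum_i (t_bar_(-i)(x) - t_bar(x))^2.

    tree_preds : shape (B,), prediction of each tree at the test point x.
    in_bag     : boolean array of shape (B, n); in_bag[b, i] is True if
                 training observation i was drawn into tree b's bootstrap sample.
    """
    tree_preds = np.asarray(tree_preds, dtype=float)
    B, n = in_bag.shape
    t_bar = tree_preds.mean()
    total = 0.0
    for i in range(n):
        out_of_bag = ~in_bag[:, i]          # trees grown without observation i
        if out_of_bag.any():
            t_bar_minus_i = tree_preds[out_of_bag].mean()
            total += (t_bar_minus_i - t_bar) ** 2
    return (n - 1) / n * total
</syntaxhighlight>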

Examples

The e-mail spam problem is a common classification problem, in which 57 features are used to classify spam and non-spam e-mails. The IJ-U variance formula was applied to evaluate the accuracy of random forests with m = 5, 19, and 57 features considered at each split. The results reported in the paper (Confidence Intervals for Random Forests: The Jackknife and the Infinitesimal Jackknife) show that the m = 57 random forest appears to be quite unstable, while the predictions made by the m = 5 random forest appear to be quite stable. This agrees with the evaluation based on error percentage, in which the model with m = 5 has high accuracy and the model with m = 57 has low accuracy.

Here, accuracy is measured by error rate, which is defined as:

[math]\displaystyle{ \text{Error Rate} = \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M}y_{ij}, }[/math]

Here, [math]\displaystyle{ N }[/math] is the number of samples, [math]\displaystyle{ M }[/math] is the number of classes, and [math]\displaystyle{ y_{ij} }[/math] is the indicator function that equals 1 when the [math]\displaystyle{ i }[/math]-th observation is in class [math]\displaystyle{ j }[/math] and 0 otherwise. No predicted probability is involved here. A similar way to measure accuracy is the logarithmic loss:

[math]\displaystyle{ \text{logloss} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M}y_{ij}\log(p_{ij}), }[/math]

where [math]\displaystyle{ N }[/math], [math]\displaystyle{ M }[/math] and [math]\displaystyle{ y_{ij} }[/math] are as above and [math]\displaystyle{ p_{ij} }[/math] is the predicted probability that the [math]\displaystyle{ i }[/math]-th observation belongs to class [math]\displaystyle{ j }[/math]. This metric is used on Kaggle.[2] The two measures are closely related.
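
A minimal sketch of the log-loss computation, assuming integer class labels and a matrix of predicted class probabilities (names are hypothetical; probabilities are clipped away from zero so the logarithm stays finite):

<syntaxhighlight lang="python">
import numpy as np

def one_hot(y, M):
    """Indicator matrix y_ij: 1 if observation i is in class j, 0 otherwise."""
    y = np.asarray(y, dtype=int)
    Y = np.zeros((y.shape[0], M))
    Y[np.arange(y.shape[0]), y] = 1.0
    return Y

def multiclass_logloss(y, p, eps=1e-15):
    """logloss = -(1/N) * sum_i sum_j y_ij * log(p_ij)."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0)
    Y = one_hot(y, p.shape[1])
    return -np.sum(Y * np.log(p)) / p.shape[0]
</syntaxhighlight>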

Modification for bias

When [math]\displaystyle{ \hat{V}_{IJ}^{\infty} }[/math] and [math]\displaystyle{ \hat{V}_{J}^{\infty} }[/math] are approximated by Monte Carlo using a finite number of bootstrap replicates [math]\displaystyle{ B }[/math], the resulting estimates suffer from Monte Carlo bias. The bias grows with [math]\displaystyle{ n }[/math] and can be substantial when [math]\displaystyle{ B }[/math] is small relative to [math]\displaystyle{ n }[/math]:

[math]\displaystyle{ E[\hat{V}_{IJ}^B]-\hat{V}_{IJ}^{\infty}\approx\frac{n}{B^2}\sum_{b=1}^{B}(t_b^{\star}-\bar{t}^{\star})^2 }[/math]

To eliminate this effect, the following bias-corrected modifications are suggested:

[math]\displaystyle{ \hat{V}_{IJ-U}^B= \hat{V}_{IJ}^B - \frac{n}{B^2}\sum_{b=1}^{B}(t_b^{\star}-\bar{t}^{\star})^2 }[/math]
[math]\displaystyle{ \hat{V}_{J-U}^B= \hat{V}_{J}^B - (e-1)\frac{n}{B^2}\sum_{b=1}^{B}(t_b^{\star}-\bar{t}^{\star})^2 }[/math]
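
A sketch of this correction step, assuming the raw estimates [math]\displaystyle{ \hat{V}_{IJ}^B }[/math] and [math]\displaystyle{ \hat{V}_{J}^B }[/math] have already been computed (for instance, [math]\displaystyle{ \hat{V}_{J}^B }[/math] with the jackknife-after-bootstrap sketch above) and that the per-tree predictions at the test point are available:

<syntaxhighlight lang="python">
import numpy as np

def monte_carlo_bias_correction(v_ij_raw, v_j_raw, tree_preds, n):
    """Bias-corrected estimates V_IJ-U and V_J-U from raw estimates.

    v_ij_raw, v_j_raw : raw infinitesimal-jackknife and jackknife variance estimates.
    tree_preds        : shape (B,), per-tree predictions t_b^*(x) at the test point.
    n                 : number of training observations.
    """
    tree_preds = np.asarray(tree_preds, dtype=float)
    B = tree_preds.shape[0]
    # Monte Carlo noise term: n / B^2 * sum_b (t_b^* - t_bar^*)^2
    mc_term = n * np.sum((tree_preds - tree_preds.mean()) ** 2) / B ** 2
    v_ij_u = v_ij_raw - mc_term
    v_j_u = v_j_raw - (np.e - 1) * mc_term
    return v_ij_u, v_j_u
</syntaxhighlight>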

References