Jan-13-2020, 09:03 PM
But you've done that and it did not work.
Random forests are supposed to be relatively resistant to overfitting, true. But improving on the training data while getting worse on the validation/test data means the model is fitting the training data ever more closely, while the validation data is different enough that the tighter fit no longer generalizes. That gap between training and validation performance is the classic signature of overfitting.
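A quick way to see this for yourself: train a fully grown random forest on noisy data and compare its training accuracy to its held-out accuracy. This is just an illustrative sketch with scikit-learn on synthetic data (the dataset, forest settings, and noise level are all made up for the demo, not taken from your problem):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic dataset with 20% label noise (flip_y) so perfect
# generalization is impossible.
X, y = make_classification(
    n_samples=1000, n_features=20, flip_y=0.2, random_state=0
)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Fully grown trees (max_depth=None) let the forest memorize
# the training set, including its noise.
clf = RandomForestClassifier(n_estimators=100, max_depth=None, random_state=0)
clf.fit(X_train, y_train)

train_acc = clf.score(X_train, y_train)
val_acc = clf.score(X_val, y_val)
print(f"train accuracy: {train_acc:.3f}")
print(f"validation accuracy: {val_acc:.3f}")
```

You should see training accuracy near 1.0 with a noticeably lower validation score. Limiting tree depth (e.g. `max_depth=5`) or raising `min_samples_leaf` usually narrows that gap, which is one way to push back against overfitting.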