Gradient boosting is a powerful machine learning technique that builds on the idea of boosting: weak learners (typically shallow decision trees) are combined to create a strong predictive model.
The idea behind gradient boosting is to add new models to the ensemble sequentially. At each step, the new model is fit to the negative gradient of the loss function (a measure of the discrepancy between predicted and actual values) with respect to the current ensemble's predictions, which makes the procedure a form of gradient descent in function space. For squared-error loss the negative gradient is simply the residuals, so each new model is trained to correct the remaining errors of the ensemble built so far, consecutively providing a more accurate estimate of the response variable.
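The residual-fitting loop described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: it assumes squared-error loss (so the negative gradient equals the residuals), uses sklearn's `DecisionTreeRegressor` as the weak learner, and invents the helper names `fit_gbm`/`predict_gbm` for clarity.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbm(X, y, n_estimators=50, learning_rate=0.1, max_depth=2):
    """Fit a toy gradient-boosting regressor with squared-error loss."""
    f0 = y.mean()                          # initial constant prediction
    pred = np.full_like(y, f0, dtype=float)
    trees = []
    for _ in range(n_estimators):
        residuals = y - pred               # negative gradient of 1/2*(y - F)^2
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)             # weak learner fits the residuals
        pred += learning_rate * tree.predict(X)  # small step along the gradient
        trees.append(tree)
    return f0, trees, learning_rate

def predict_gbm(model, X):
    f0, trees, learning_rate = model
    pred = np.full(X.shape[0], f0)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred

# Toy data: y = x^2 plus noise; training error shrinks as trees are added.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=200)
model = fit_gbm(X, y)
print(np.mean((predict_gbm(model, X) - y) ** 2))
```

The `learning_rate` shrinks each tree's contribution; smaller values need more trees but usually generalize better.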
Popular solutions
- XGBoost
- LightGBM
- CatBoost
- scikit-learn
  - sklearn.ensemble.GradientBoostingClassifier
  - sklearn.ensemble.GradientBoostingRegressor
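A minimal usage sketch for sklearn's `GradientBoostingClassifier` listed above; the dataset is a synthetic one generated with `make_classification`, chosen only to keep the example self-contained.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem for illustration.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# n_estimators = number of boosting stages; max_depth controls the weak learners.
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # held-out accuracy
```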
Useful links
- https://en.wikipedia.org/wiki/Gradient_boosting
- https://github.com/lancifollia/tinygbt