Gradient boosting



Gradient Boosting is a powerful machine learning technique that builds on the concept of boosting, where weak learners (typically decision trees) are combined to create a strong predictive model.

The idea behind gradient boosting is to add new models to the ensemble sequentially. At each step, a new weak model is fit to the negative gradient of the loss function with respect to the current ensemble's predictions; for squared-error loss, this negative gradient is simply the residuals (the differences between the actual and predicted values). Adding the new model, scaled by a learning rate, nudges the ensemble's output in the direction that reduces the loss, analogous to a gradient descent step taken in function space. Repeating this procedure consecutively yields an increasingly accurate estimate of the response variable.
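The procedure above can be sketched in a few dozen lines. The following is a minimal, illustrative implementation for regression with squared-error loss, using depth-1 regression stumps as the weak learners (all function names and the toy data are made up for this example):

```python
# Minimal gradient boosting for 1-D regression with squared-error loss.
# Weak learner: a depth-1 regression stump (a single threshold split).

def fit_stump(x, residuals):
    """Find the threshold split on x that best fits the residuals."""
    best = None
    for threshold in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= threshold]
        right = [r for xi, r in zip(x, residuals) if xi > threshold]
        if not left or not right:
            continue
        lm = sum(left) / len(left)    # mean residual on the left side
        rm = sum(right) / len(right)  # mean residual on the right side
        sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, threshold, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def gradient_boost(x, y, n_rounds=50, learning_rate=0.1):
    """Sequentially fit stumps to the residuals (the negative gradient
    of squared-error loss) and add them to the ensemble."""
    base = sum(y) / len(y)  # initial constant prediction: the mean of y
    stumps = []
    pred = [base] * len(y)
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        # Each stump corrects the remaining error, damped by the learning rate.
        pred = [pi + learning_rate * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: base + learning_rate * sum(s(xi) for s in stumps)

# Toy dataset: y is roughly equal to x.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 1.9, 3.1, 3.9, 5.2]
model = gradient_boost(x, y)
```

After 50 rounds the ensemble's training predictions track the data closely; the learning rate trades off convergence speed against overfitting, which is why real libraries expose it as a key hyperparameter.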

Popular solutions

  • XGBoost
  • LightGBM
  • CatBoost
  • sklearn
    • sklearn.ensemble.GradientBoostingClassifier
    • sklearn.ensemble.GradientBoostingRegressor
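For the sklearn option, a minimal usage sketch of `GradientBoostingRegressor` looks like this (the toy data is illustrative, and the hyperparameters shown are the library's usual defaults written out explicitly):

```python
from sklearn.ensemble import GradientBoostingRegressor

# Toy dataset: one feature, y roughly equal to x.
X = [[1.0], [2.0], [3.0], [4.0], [5.0]]
y = [1.2, 1.9, 3.1, 3.9, 5.2]

# n_estimators is the number of boosting rounds; learning_rate damps
# each tree's contribution; max_depth limits each weak learner.
model = GradientBoostingRegressor(
    n_estimators=100, learning_rate=0.1, max_depth=3
)
model.fit(X, y)
prediction = model.predict([[3.0]])
```

`GradientBoostingClassifier` follows the same fit/predict interface for classification tasks.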

Useful links

  • https://en.wikipedia.org/wiki/Gradient_boosting
  • https://github.com/lancifollia/tinygbt

Citation

If you find this work useful, please cite it as:
@article{yaltirakli,
  title   = "Gradient boosting",
  author  = "Yaltirakli, Gokberk",
  journal = "gkbrk.com",
  year    = "2024",
  url     = "https://www.gkbrk.com/gradient-boosting"
}

© 2024 Gokberk Yaltirakli