A dimensionality reduction technique similar to regular principal component analysis (PCA), but one that can handle non-linear relationships.
There are multiple methods for doing this. Here is one I prefer.
1. Start with your features.
2. Set `error` to your features.
3. Train two models: one that encodes `error` into 1 value, and another that turns that 1 value back into `error`. This pair is essentially an auto-encoder.
4. Take the difference between the predictions and `error`; this becomes the new `error`.
5. Repeat steps 3 and 4 for N iterations to end up with N principal components.
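The steps above can be sketched in numpy. This is a minimal, hedged illustration, not the author's exact setup: it assumes a tiny auto-encoder with a single `tanh` unit as the 1-value bottleneck and a linear decoder, trained by plain gradient descent; the function names (`fit_autoencoder`, `greedy_nonlinear_pca`) and all hyperparameters are made up for the example.

```python
import numpy as np

def fit_autoencoder(X, epochs=2000, lr=0.1, seed=0):
    """Tiny auto-encoder with a 1-value bottleneck (an assumption for
    this sketch): X -> tanh(X W1 + b1) -> z -> z W2 + b2 -> X_hat.
    Returns (reconstruction, code) for the training data."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, 1)); b1 = np.zeros(1)
    W2 = rng.normal(scale=0.1, size=(1, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        z = np.tanh(X @ W1 + b1)          # encode each row to 1 value
        X_hat = z @ W2 + b2               # decode back to feature space
        g = 2.0 * (X_hat - X) / n         # gradient of mean squared error
        dW2 = z.T @ g; db2 = g.sum(axis=0)
        dz = (g @ W2.T) * (1.0 - z ** 2)  # backprop through tanh
        dW1 = X.T @ dz; db1 = dz.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    z = np.tanh(X @ W1 + b1)
    return z @ W2 + b2, z

def greedy_nonlinear_pca(X, n_components):
    """Greedy loop from the post: each auto-encoder fits the current
    `error`, and the residual it cannot explain becomes the new `error`."""
    error = X.copy()
    codes = []
    for i in range(n_components):
        recon, z = fit_autoencoder(error, seed=i)
        codes.append(z)
        error = error - recon             # new error = predictions' residual
    return np.hstack(codes), error

# Toy data with a curved (non-linear) relationship between features.
rng = np.random.default_rng(42)
t = rng.uniform(-1, 1, size=(200, 1))
X = np.hstack([t, t ** 2])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize for stable training
codes, residual = greedy_nonlinear_pca(X, 2)
```

Each column of `codes` is one extracted component; `residual` is whatever variation the N components failed to capture, so its norm shrinking relative to `X` is a quick sanity check.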
The result is something like a PCA, implemented as a greedy auto-encoder.
Citation
If you find this work useful, please cite it as:
@article{yaltirakli,
title = "Non linear principal component analysis",
author = "Yaltirakli, Gokberk",
journal = "gkbrk.com",
year = "2025",
url = "https://www.gkbrk.com/non-linear-principal-component-analysis"
}