An autoencoder is a neural network that tries to reconstruct its own input after squeezing it through a bottleneck, a hidden layer smaller than the input. Because the bottleneck forces a compressed representation, it works as a dimensionality reduction technique.
Since it doesn’t require labeled data (the target is the input itself), it’s an unsupervised machine learning method.
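Below is a minimal sketch of this idea, assuming PyTorch; the layer sizes, the 784-dimensional input, and the random stand-in batch are illustrative choices, not anything prescribed by a particular dataset.

```python
# Minimal autoencoder sketch (assumes PyTorch; sizes are illustrative).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, bottleneck_dim=32):
        super().__init__()
        # Encoder compresses the input down to the bottleneck.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, bottleneck_dim),
        )
        # Decoder reconstructs the input from the bottleneck.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z)

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)  # stand-in batch; note that no labels are needed

# Reconstruction loss: the target is the input itself.
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
optimizer.step()
```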
A linear autoencoder (one with no nonlinear activation functions) is roughly equivalent to PCA: with a squared-error loss, it learns to project the data onto the same subspace as the top principal components, up to an invertible transform of the latent code.
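A small NumPy sketch of that equivalence, under the assumption that the data is centered: the rank-k projection below is what an optimal linear autoencoder with squared-error loss converges to. The data, the bottleneck size k, and the use of SVD to get the principal directions are illustrative.

```python
# Illustrative link between a linear autoencoder and PCA (assumes NumPy).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
X = X - X.mean(axis=0)           # center the data, as PCA does

k = 5                            # bottleneck size / number of components
U, S, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:k]                       # top-k principal directions

# "Encode" to k dimensions, then "decode" back. This rank-k projection is the
# reconstruction an optimal linear autoencoder ends up producing.
Z = X @ W.T
X_hat = Z @ W

print("reconstruction error:", np.mean((X - X_hat) ** 2))
```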
Sparse autoencoder
A sparse autoencoder, or SAE, is an autoencoder with a sparsity term added to the loss. Usually this is an L1 penalty on the latent activations, which encourages a representation where most values are 0.
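A sketch of that loss, assuming PyTorch and reusing the Autoencoder class from the earlier example; the 1e-3 sparsity coefficient is an illustrative value that would be tuned in practice.

```python
# Sparse autoencoder loss sketch (assumes PyTorch and the Autoencoder class above).
import torch
import torch.nn as nn

model = Autoencoder()
x = torch.rand(64, 784)

z = model.encoder(x)             # latent code at the bottleneck
x_hat = model.decoder(z)

reconstruction = nn.functional.mse_loss(x_hat, x)
sparsity = z.abs().mean()        # L1 penalty pushes most latent values toward 0

loss = reconstruction + 1e-3 * sparsity
```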
Citation
If you find this work useful, please cite it as:
@article{yaltirakli,
  title   = "Autoencoder",
  author  = "Yaltirakli, Gokberk",
  journal = "gkbrk.com",
  year    = "2024",
  url     = "https://www.gkbrk.com/autoencoder"
}