Commonly called a KAN. A KAN is a neural network where you learn the activation functions on each edge instead of fixed scalar weights.
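A minimal sketch of the idea, assuming a single KAN edge: instead of a weight, the edge carries a 1-D function represented as a learnable combination of fixed basis functions (pykan uses B-splines; ReLU ramps on a grid are used here to keep the sketch short). The grid size and the sin-fitting target are illustrative choices, not anything from the paper.

```python
import numpy as np

# A KAN edge replaces a scalar weight with a learnable 1-D function phi.
# Hypothetical representation: phi(x) = c0 + sum_i c_i * relu(x - g_i),
# where the coefficients c are the learned parameters.
grid = np.linspace(-3, 3, 8)

def basis(x):
    x = np.atleast_1d(x)
    ramps = np.maximum(x[:, None] - grid[None, :], 0.0)  # ReLU ramps at grid knots
    return np.hstack([np.ones((len(x), 1)), ramps])      # plus a bias column

# "Train" the activation: least-squares fit of phi(x) to sin(x).
xs = np.linspace(-3, 3, 200)
coef, *_ = np.linalg.lstsq(basis(xs), np.sin(xs), rcond=None)

phi = lambda x: basis(x) @ coef
max_err = float(np.max(np.abs(phi(xs) - np.sin(xs))))
print(max_err)  # small piecewise-linear fit error
```

In a real KAN, these coefficients are trained by gradient descent along with every other edge function, rather than fit in closed form.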
KAN vs MLP
It turns out that you can rewrite a Kolmogorov-Arnold Network as an MLP, with some repeats and shifts before the ReLU.
- https://colab.research.google.com/drive/1v3AHz5J3gk-vu4biESubJdOsUheycJNz
- https://www.reddit.com/r/MachineLearning/comments/1clcu5i/d_kolmogorovarnold_network_is_just_an_mlp/
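The equivalence argued in the links above can be sketched numerically. Assuming piecewise-linear edge functions built from shifted ReLUs (the setting the Reddit post uses), a KAN layer computed edge-by-edge matches an MLP that repeats each input, subtracts the shifts, applies ReLU, and then does one linear layer. All sizes below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 inputs, 2 outputs, k = 4 shifts per edge function.
n_in, n_out, k = 3, 2, 4
shifts = rng.normal(size=(n_in, k))         # knot positions per input dim
coefs  = rng.normal(size=(n_in, k, n_out))  # learned coefficients

def kan_layer(x):
    """KAN view: output is a sum of learned 1-D functions of each input,
    phi_j(x_j) = sum_m coefs[j, m] * relu(x_j - shifts[j, m])."""
    out = np.zeros(n_out)
    for j in range(n_in):
        ramps = np.maximum(x[j] - shifts[j], 0.0)  # (k,)
        out += ramps @ coefs[j]                    # (n_out,)
    return out

def mlp_view(x):
    """MLP view: repeat each input k times, shift, ReLU, then one
    linear layer whose weights are the flattened KAN coefficients."""
    h = np.maximum(np.repeat(x, k) - shifts.ravel(), 0.0)  # (n_in*k,)
    W = coefs.reshape(n_in * k, n_out)
    return h @ W

x = rng.normal(size=n_in)
print(np.allclose(kan_layer(x), mlp_view(x)))
```

The repeat-and-shift step is exactly the "expansion" described in the notebook: it turns each scalar input into the ReLU basis that the KAN's per-edge functions are built from, after which the rest is an ordinary linear layer.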
Link dump
- Paper: https://arxiv.org/abs/2404.19756
- https://cprimozic.net/blog/trying-out-kans/
- https://github.com/Ameobea/kan/blob/main/tiny_kan.py
- https://github.com/KindXiaoming/pykan
- https://github.com/Blealtan/efficient-kan
Citation
If you find this work useful, please cite it as:
@article{yaltirakli,
  title   = "Kolmogorov Arnold Network",
  author  = "Yaltirakli, Gokberk",
  journal = "gkbrk.com",
  year    = "2024",
  url     = "https://www.gkbrk.com/kolmogorov-arnold-network"
}
- IEEE: Gokberk Yaltirakli, "Kolmogorov Arnold Network", December, 2024. [Online]. Available: https://www.gkbrk.com/kolmogorov-arnold-network. [Accessed Dec. 17, 2024].
- APA: Yaltirakli, G. (2024, December 17). Kolmogorov Arnold Network. https://www.gkbrk.com/kolmogorov-arnold-network
- Bluebook: Gokberk Yaltirakli, Kolmogorov Arnold Network, GKBRK.COM (Dec. 17, 2024), https://www.gkbrk.com/kolmogorov-arnold-network