Kolmogorov–Arnold Networks

Kolmogorov–Arnold Networks (KANs) are a type of artificial neural network architecture inspired by the Kolmogorov–Arnold representation theorem, also known as the superposition theorem. Unlike traditional multilayer perceptrons (MLPs), which rely on fixed activation functions and linear weights, KANs replace each weight with a learnable univariate function, often represented using splines.[1][2][3]

History

KANs (Kolmogorov–Arnold Networks) were proposed by Liu et al. (2024)[4] as a generalization of the Kolmogorov–Arnold representation theorem (KART), aiming to outperform MLPs in small-scale AI and scientific tasks. Before KANs, numerous studies explored KART's connections to neural networks or used it as a basis for designing new network architectures.

In the 1980s and 1990s, early research applied KART to neural network design. Kůrková et al. (1992),[5] Hecht-Nielsen (1987),[6] and Nees (1994)[7] established theoretical foundations for multilayer networks based on KART. Igelnik et al. (2003)[8] introduced the Kolmogorov Spline Network using cubic splines to model complex functions. Sprecher (1996, 1997)[9][10] introduced numerical methods for building network layers, while Nakamura et al. (1993)[11] created activation functions with guaranteed approximation accuracy. These works linked KART's theoretical potential with practical neural network implementation.

KART has also been used in other computational and theoretical fields. Coppejans (2004)[12] developed nonparametric regression estimators using B-splines, Bryant (2008)[13] applied it to high-dimensional image tasks, Liu (2015)[14] investigated theoretical applications in optimal transport and image encryption, and more recently, Polar and Poluektov (2021)[15] used Urysohn operators for efficient KART construction, while Fakhoury et al. (2022)[16] introduced ExSpliNet, integrating KART with probabilistic trees and multivariate B-splines for improved function approximation.

Architecture

KANs are based on the Kolmogorov–Arnold representation theorem, which is closely related to Hilbert's thirteenth problem.[17][18][19]

Given x = (x_1, x_2, \ldots, x_n) consisting of n variables, a multivariate continuous function f(x_1, \ldots, x_n) can be represented as:

  f(x_1, \ldots, x_n) = \sum_{q=0}^{2n} \Phi_q \left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)   (1)

This formulation contains two nested summations: an outer and an inner sum. The outer sum aggregates 2n + 1 terms, each involving a function \Phi_q. The inner sum computes n terms for each q, where each term is a continuous function \phi_{q,p} of the single variable x_p. The inner continuous functions \phi_{q,p} are universal and independent of f, while the outer functions \Phi_q depend on the specific function f being represented. The representation (1) holds for all multivariate functions f. If f is continuous, then the outer functions \Phi_q are continuous; if f is discontinuous, then the corresponding \Phi_q are generally discontinuous, while the inner functions \phi_{q,p} remain the same universal functions.[19]
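
As a simple illustration of this nested structure (an expository example, not one taken from the cited sources), the product of two variables can be written in the same form using two outer terms:

  x_1 x_2 = \tfrac{1}{4}(x_1 + x_2)^2 - \tfrac{1}{4}(x_1 - x_2)^2 ,

where each outer function is a scaled square and each inner argument is a sum of univariate functions of x_1 and x_2.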

Liu et al.[1] proposed the name KAN. A general KAN network consisting of L layers takes an input vector x and generates the output as:

  \mathrm{KAN}(x) = (\Phi_{L-1} \circ \Phi_{L-2} \circ \cdots \circ \Phi_1 \circ \Phi_0)(x)   (3)

Here, \Phi_l is the function matrix of the l-th KAN layer, i.e. the collection of its univariate activation functions.

Let i denote a neuron of the l-th layer and j a neuron of the (l + 1)-th layer. The activation function \phi_{l,j,i} connects node (l, i) to node (l + 1, j):

  x_{l+1,j} = \sum_{i=1}^{n_l} \phi_{l,j,i}(x_{l,i})   (4)

where n_l is the number of nodes of the l-th layer.

Thus, the function matrix \Phi_l can be represented as an n_{l+1} \times n_l matrix of activations:

  \Phi_l = \begin{pmatrix}
  \phi_{l,1,1}(\cdot) & \phi_{l,1,2}(\cdot) & \cdots & \phi_{l,1,n_l}(\cdot) \\
  \phi_{l,2,1}(\cdot) & \phi_{l,2,2}(\cdot) & \cdots & \phi_{l,2,n_l}(\cdot) \\
  \vdots & \vdots & \ddots & \vdots \\
  \phi_{l,n_{l+1},1}(\cdot) & \phi_{l,n_{l+1},2}(\cdot) & \cdots & \phi_{l,n_{l+1},n_l}(\cdot)
  \end{pmatrix}
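
The layer composition in equations (3) and (4) can be illustrated with a short NumPy sketch. The example below is purely illustrative and not taken from the cited implementations: each edge function \phi_{l,j,i} is represented by a few polynomial coefficients as a stand-in for a trained spline.

    # Illustrative KAN forward pass: each edge carries its own univariate
    # function; a node sums the outputs of its incoming edge functions.
    import numpy as np

    def edge_function(coeffs, x):
        # A univariate edge function; here a polynomial stands in for a learned spline.
        return np.polyval(coeffs, x)

    def kan_layer(Phi, x):
        # Phi has shape (n_out, n_in, n_coeff): one univariate function per edge.
        # Implements x_{l+1,j} = sum_i phi_{l,j,i}(x_{l,i})  (equation (4)).
        n_out, n_in, _ = Phi.shape
        return np.array([sum(edge_function(Phi[j, i], x[i]) for i in range(n_in))
                         for j in range(n_out)])

    def kan_forward(layers, x):
        # Composition of layers, KAN(x) = (Phi_{L-1} o ... o Phi_0)(x)  (equation (3)).
        for Phi in layers:
            x = kan_layer(Phi, x)
        return x

    rng = np.random.default_rng(0)
    # A [2, 5, 1] KAN: 2 inputs, 5 hidden nodes, 1 output, cubic edge functions.
    layers = [rng.normal(size=(5, 2, 4)), rng.normal(size=(1, 5, 4))]
    print(kan_forward(layers, np.array([0.3, -0.7])))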

Implementations

To make the KAN layers optimizable, each learnable function \phi(x) is formed as a combination of a basis function and a spline function:[1]

  \phi(x) = w_b\, b(x) + w_s\, \mathrm{spline}(x)

where b(x) is the basis function, usually defined as the sigmoid linear unit (SiLU), b(x) = x / (1 + e^{-x}), and w_b is the base weight matrix. Also, w_s is the spline weight matrix and \mathrm{spline}(x) is the spline function, which can be written as a sum of B-splines, \mathrm{spline}(x) = \sum_i c_i B_i(x), with trainable coefficients c_i.
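
A minimal sketch of a single such edge function is shown below. It follows the formula above, with the B-spline basis evaluated by the Cox–de Boor recursion; the parameter names and the particular knot layout are illustrative choices rather than those of a specific published implementation.

    # phi(x) = w_b * b(x) + w_s * spline(x), with b = SiLU and
    # spline(x) = sum_i c_i * B_i(x) over cubic B-spline basis functions.
    import numpy as np

    def silu(x):
        # Basis (residual) function b(x) = x / (1 + exp(-x)).
        return x / (1.0 + np.exp(-x))

    def bspline_basis(x, knots, i, k):
        # Cox-de Boor recursion for the i-th B-spline basis function of degree k.
        if k == 0:
            return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
        left = right = 0.0
        if knots[i + k] > knots[i]:
            left = (x - knots[i]) / (knots[i + k] - knots[i]) \
                   * bspline_basis(x, knots, i, k - 1)
        if knots[i + k + 1] > knots[i + 1]:
            right = (knots[i + k + 1] - x) / (knots[i + k + 1] - knots[i + 1]) \
                    * bspline_basis(x, knots, i + 1, k - 1)
        return left + right

    def kan_edge(x, w_b, w_s, coeffs, knots, degree=3):
        spline = sum(c * bspline_basis(x, knots, i, degree)
                     for i, c in enumerate(coeffs))
        return w_b * silu(x) + w_s * spline

    # 12 uniform knots on [-2, 2] give 12 - 3 - 1 = 8 cubic basis functions.
    knots = np.linspace(-2.0, 2.0, 12)
    coeffs = np.random.default_rng(1).normal(size=8)
    print(kan_edge(0.4, w_b=1.0, w_s=1.0, coeffs=coeffs, knots=knots))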

Many studies have suggested using other polynomial and basis functions in place of B-splines to create new KAN variants.[20][21][3]

Functions used in KAN

The choice of functional basis strongly influences the performance of KANs. Common function families include the following (a short illustrative sketch using one alternative basis follows the list):

  • B-splines: Provide locality, smoothness, and interpretability; they are the most widely used in current implementations.[3]
  • Radial basis functions (RBFs), including Gaussian RBFs: Capture localized features in data and are effective in approximating functions with non-linear or clustered structure.[3][22]
  • Chebyshev polynomials: Offer efficient approximation with minimized error in the maximum norm, making them useful for stable function representation.[3][23]
  • Rational functions: Useful for approximating functions with singularities or sharp variations, as they can model asymptotic behavior better than polynomials.[3][24]
  • Fourier series: Capture periodic patterns effectively and are particularly useful in domains such as physics-informed machine learning.[3][25][26]
  • Wavelet functions (DoG, Mexican hat, Morlet, and Shannon): Used for feature extraction as they can capture both high-frequency and low-frequency data components.[3][27][28]
  • Piecewise linear functions: Provide efficient approximation for multivariate functions in KANs.[29][30]
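
As an illustration of how the spline part can be exchanged for another basis, the following short sketch evaluates a single edge function built from Chebyshev polynomials, in the spirit of Chebyshev-based KAN variants;[21][23] in this sketch the input is squashed with tanh so that it lies in the polynomials' natural domain [-1, 1]. The coefficient values and function names are illustrative.

    # A Chebyshev-basis edge function: phi(x) = sum_k c_k * T_k(tanh(x)).
    import numpy as np

    def chebyshev_edge(x, coeffs):
        # T_k are Chebyshev polynomials of the first kind; tanh maps the
        # input into their domain [-1, 1].
        return np.polynomial.chebyshev.chebval(np.tanh(x), coeffs)

    coeffs = np.array([0.1, -0.4, 0.25, 0.05])  # degree-3 expansion
    print(chebyshev_edge(0.8, coeffs))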

Usage

In some modern neural architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and Transformers, KANs are typically used as drop-in substitutes for MLP layers. Despite KANs' general-purpose design, researchers have applied them to a range of tasks, including physics-informed solvers for partial differential equations,[31][32] continual-learning classifiers,[33] molecular property prediction with graph neural networks,[34] and industrial soft sensors.[35]

Drawbacks of KAN

KANs can be computationally intensive and require a large number of parameters, because each weight of an MLP is replaced by a learnable univariate function described by several trainable coefficients.[36][37]
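
As a rough illustration of this overhead (exact counts depend on the implementation), a KAN layer with n_in inputs, n_out outputs, spline order k, and G grid intervals stores on the order of n_{in} \cdot n_{out} \cdot (G + k) trainable spline coefficients, whereas a fully connected layer of the same shape in an MLP stores only n_{in} \cdot n_{out} weights; with G = 5 and k = 3, for example, this is roughly an eightfold increase for the same layer shape.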

References

  1. ^ a b c d e Liu, Ziming; Tegmark, Max (2024). "KAN: Kolmogorov–Arnold Networks". arXiv:2404.19756 [cs.LG].
  2. ^ a b c Liu, Ziming; Ma, Pingchuan; Wang, Yilun; Matusik, Wojciech; Tegmark, Max (2024). "KAN 2.0: Kolmogorov–Arnold Networks Meet Science". arXiv:2408.10205 [cs.LG].
  3. ^ a b c d e f g h i Somvanshi, S.; Javed, S. A.; Islam, M. M.; Pandit, D.; Das, S. (2026). "A Survey on Kolmogorov-Arnold Network". ACM Computing Surveys. 58 (2): 1–35. arXiv:2411.06078. doi:10.1145/3743128.
  4. ^ Liu, Z.; Wang, Y.; Vaidya, S.; Ruehle, F.; Halverson, J.; Soljačić, M.; Tegmark, M. (2024). "KAN: Kolmogorov–Arnold Networks". arXiv:2404.19756 [cs.LG].
  5. ^ Kůrková, V. (1992). "Kolmogorov's theorem and multilayer neural networks". Neural Networks. 5 (3): 501–506. doi:10.1016/0893-6080(92)90012-8.
  6. ^ Hecht-Nielsen, R. (1987). "Kolmogorov's mapping neural network existence theorem". Proceedings of the International Conference on Neural Networks. Vol. 3. New York, NY, USA: IEEE Press. pp. 11–14.
  7. ^ Nees, M. (1994). "Approximative versions of Kolmogorov's superposition theorem, proved constructively". Journal of Computational and Applied Mathematics. 54 (2): 239–250. doi:10.1016/0377-0427(94)90007-8.
  8. ^ Igelnik, B.; Parikh, N. (2003). "Kolmogorov's spline network". IEEE Transactions on Neural Networks. 14 (4): 725–733. Bibcode:2003ITNN...14..725I. doi:10.1109/TNN.2003.813830. PMID 18238055.
  9. ^ Sprecher, D. A. (1996). "A numerical implementation of Kolmogorov's superpositions". Neural Networks. 9 (5): 765–772. doi:10.1016/0893-6080(95)00081-x. PMID 12662561.
  10. ^ Sprecher, D. A. (1997). "A numerical implementation of Kolmogorov's superpositions II". Neural Networks. 10 (3): 447–457. doi:10.1016/S0893-6080(96)00106-7.
  11. ^ Nakamura, M.; Mines, R.; Kreinovich, V. (1993). "Guaranteed intervals for Kolmogorov's theorem (and their possible relation to neural networks)". Interval Computations. 3: 183–199.
  12. ^ Coppejans, M. (2004). "On Kolmogorov's representation of functions of several variables by functions of one variable". Journal of Econometrics. 123 (1): 1–31. doi:10.1016/j.jeconom.2003.09.020.
  13. ^ Bryant, D. W. (2008). Analysis of Kolmogorov's superposition theorem and its implementation in applications with low and high dimensional data (PhD thesis). Orlando, FL: University of Central Florida.
  14. ^ Liu, X. (2015). Kolmogorov superposition theorem and its applications (Doctoral dissertation). Imperial College London. hdl:10044/1/25456.
  15. ^ Polar, A.; Poluektov, M. (2021). "A deep machine learning algorithm for construction of the Kolmogorov–Arnold representation". Engineering Applications of Artificial Intelligence. 99: 104137. arXiv:2001.04652. doi:10.1016/j.engappai.2020.104137.
  16. ^ Fakhoury, D.; Fakhoury, E.; Speleers, H. (2022). "ExSpliNet: An interpretable and expressive spline-based neural network". Neural Networks. 152: 332–346. doi:10.1016/j.neunet.2022.05.024. PMID 35750007.
  17. ^ Kolmogorov, A. N. (1963). "On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition". Translations of the American Mathematical Society. 2 (28): 55–59.
  18. ^ Schmidt-Hieber, Johannes (2021). "The Kolmogorov–Arnold representation theorem revisited". Neural Networks. 137: 119–126. doi:10.1016/j.neunet.2021.01.020. PMID 33592434.
  19. ^ a b Ismayilova, Aysu; Ismailov, Vugar (August 2024). "On the Kolmogorov Neural Networks". Neural Networks. 176: 106333. arXiv:2311.00049. doi:10.1016/j.neunet.2024.106333. PMID 38688072.
  20. ^ Li, Ziming (2024). "Kolmogorov-Arnold Networks are Radial Basis Function Networks". arXiv:2405.06721 [cs.LG].
  21. ^ SS, S.; AR, K.; KP, A. (2024). "Chebyshev Polynomial-based Kolmogorov-Arnold Networks: An Efficient Architecture for Nonlinear Function Approximation". arXiv:2405.07200 [cs.LG].
  22. ^ Ta, H. T. (2024). "BSRBF-KAN: a combination of B-splines and radial basis functions in Kolmogorov-Arnold networks". Proceedings of the International Symposium on Information and Communication Technology. Singapore: Springer Nature Singapore. pp. 3–15.
  23. ^ Guo, Chunyu; Sun, Lucheng; Li, Shilong; Yuan, Zelong; Wang, Chao (2025). "Physics-informed Kolmogorov–Arnold network with Chebyshev polynomials for fluid mechanics". Physics of Fluids. 37 (9): 095120. arXiv:2411.04516. Bibcode:2025PhFl...37i5120G. doi:10.1063/5.0284999.
  24. ^ Aghaei, Amirmojtaba A. (2024). "RKAN: Rational Kolmogorov-Arnold Networks". arXiv:2406.14495 [cs.LG].
  25. ^ Liang, J.; Mu, L.; Fang, C. (2025). "Topology Identification of Distribution Network Based on Fourier Kolmogorov–Arnold Networks". IEEJ Transactions on Electrical and Electronic Engineering. 20 (10): 1579–1588. doi:10.1002/tee.70031.
  26. ^ Yu, S.; Chen, Z.; Yang, Z.; Gu, J.; Feng, B.; Sun, Q. (2025). Exploring Kolmogorov-Arnold Networks for Realistic Image Sharpness Assessment. IEEE. pp. 1–5.
  27. ^ Song, Y.; Zhang, H.; Man, J.; Jin, X.; Li, Q. (2025). "AWKNet: A Lightweight Neural Network for Motor Imagery Electroencephalogram Classification Based on Adaptive Wavelet Transform Kolmogorov–Arnold Networks". IEEE Transactions on Consumer Electronics. 71 (1): 1. doi:10.1109/TCE.2025.3540970.
  28. ^ Bozorgasl, Z.; Chen, H. (2024). "Wav-KAN: Wavelet Kolmogorov-Arnold Networks". arXiv:2405.12832 [cs.LG].
  29. ^ Polar, A.; Poluektov, M. (2021-03-01). "A deep machine learning algorithm for construction of the Kolmogorov–Arnold representation". Engineering Applications of Artificial Intelligence. 99: 104137. arXiv:2001.04652. doi:10.1016/j.engappai.2020.104137. ISSN 0952-1976.
  30. ^ Poluektov, Michael; Polar, Andrew (2025-07-11). "Construction of the Kolmogorov-Arnold networks using the Newton-Kaczmarz method". Machine Learning. 114 (8): 185. doi:10.1007/s10994-025-06800-6. ISSN 1573-0565.
  31. ^ Zhang, Z.; Wang, Q.; Zhang, Y.; Shen, T.; Zhang, W. (2025). "Physics-informed neural networks with hybrid Kolmogorov–Arnold network and augmented Lagrangian function for solving partial differential equations". Scientific Reports. 15 (1): 10523. Bibcode:2025NatSR..1510523Z. doi:10.1038/s41598-025-92900-1. PMC 11950322. PMID 40148388.
  32. ^ Yeo, S.; Nguyen, P. A.; Le, A. N.; Mishra, S. (2024). "KAN-PDEs: A Novel Approach to Solving Partial Differential Equations Using Kolmogorov-Arnold Networks—Enhanced Accuracy and Efficiency". Proceedings of the International Conference on Electrical and Electronics Engineering. Singapore: Springer Nature Singapore. pp. 43–62.
  33. ^ Hu, Yusong; Liang, Zichen; Yang, Fei; Hou, Qibin; Liu, Xialei; Cheng, Ming-Ming (2025). "KAC: Kolmogorov-Arnold Classifier for Continual Learning". Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 15297–15307.
  34. ^ Li, Longlong; Zhang, Yipeng; Wang, Guanghui; Xia, Kelin (2025). "Kolmogorov–Arnold graph neural networks for molecular property prediction". Nature Machine Intelligence. 7 (8): 1346–1354. doi:10.1038/s42256-025-01087-7.
  35. ^ Yang, Zhen; Mao, Ling; Ye, Liang; Ma, Yuan; Song, Zihan; Chen, Zhe (2025). "AKGNN: When Adaptive Graph Neural Network Meets Kolmogorov-Arnold Network for Industrial Soft Sensors". IEEE Transactions on Instrumentation and Measurement. doi:10.1109/TIM.2025.3512345.
  36. ^ Le, T. X. H.; Tran, T. D.; Pham, H. L.; Le, V. T. D.; Vu, T. H.; Nguyen, V. T.; Nakashima, Y. (2024). "Exploring the limitations of Kolmogorov-Arnold networks in classification: Insights to software training and hardware implementation". Proceedings of the 2024 Twelfth International Symposium on Computing and Networking Workshops (CANDARW). Japan: IEEE. pp. 110–116. doi:10.1109/CANDARW60749.2024.00026.
  37. ^ Ta, H. T.; Thai, D. Q.; Tran, A.; Sidorov, G.; Gelbukh, A. (2025). "PRKAN: Parameter-reduced Kolmogorov-Arnold Networks". arXiv:2501.07032 [cs.LG].