
Kolmogorov-Arnold Networks (KANs)

This is the GitHub repo for the papers "KAN: Kolmogorov-Arnold Networks" and "KAN 2.0: Kolmogorov-Arnold Networks Meet Science". You may want to quickstart with hellokan, try more examples in tutorials, or read the documentation here.

Kolmogorov-Arnold Networks (KANs) are promising alternatives to Multi-Layer Perceptrons (MLPs). KANs have strong mathematical foundations just like MLPs: MLPs are based on the universal approximation theorem, while KANs are based on the Kolmogorov-Arnold representation theorem. KANs and MLPs are dual: KANs have activation functions on edges, while MLPs have activation functions on nodes. This simple change makes KANs better (sometimes much better!) than MLPs in terms of both model accuracy and interpretability. A quick intro to KANs is here.

Installation

Pykan can be installed via PyPI or directly from GitHub.

Pre-requisites:

Python 3.9.7 or higher
pip

For developers

git clone https://github.com/KindXiaoming/pykan.git
cd pykan
pip install -e .

Installation via GitHub

pip install git+https://github.com/KindXiaoming/pykan.git

Installation via PyPI:

pip install pykan

Requirements

# python==3.9.7
matplotlib==3.6.2
numpy==1.24.4
scikit_learn==1.1.3
setuptools==65.5.0
sympy==1.11.1
torch==2.2.2
tqdm==4.66.2
pandas==2.0.1
seaborn
pyyaml

After activating the virtual environment, you can install specific package requirements as follows:

pip install -r requirements.txt

Optional: Conda Environment Setup

For those who prefer using Conda:

conda create --name pykan-env python=3.9.7
conda activate pykan-env
pip install git+https://github.com/KindXiaoming/pykan.git  # For GitHub installation
# or
pip install pykan  # For PyPI installation

Efficiency mode

For many machine-learning users who (1) need to write the training loop themselves (instead of using model.fit()) and (2) never use the symbolic branch, it is important to call model.speed() before training! Otherwise the symbolic branch stays on, which is very slow because the symbolic computations are not parallelized.
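
A minimal sketch of what this looks like in practice, assuming a hand-written PyTorch training loop; the toy data and the Adam optimizer are illustrative choices of mine, not something pykan prescribes:

# Custom training loop with the symbolic branch disabled via model.speed().
# The toy regression data and optimizer settings below are illustrative assumptions.
import torch
from kan import KAN

model = KAN(width=[2, 5, 1], grid=3, k=3)
model.speed()  # turn off the (slow, unparallelized) symbolic branch

# toy data: y = sin(x0) + x1^2, inputs in [-1, 1]
x = torch.rand(1000, 2) * 2 - 1
y = torch.sin(x[:, [0]]) + x[:, [1]] ** 2

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    optimizer.zero_grad()
    loss = torch.mean((model(x) - y) ** 2)
    loss.backward()
    optimizer.step()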

Computation requirements

Examples in tutorials are runnable on a single CPU, typically in less than 10 minutes. All examples in the paper are runnable on a single CPU in less than one day. Training KANs for PDEs is the most expensive and may take hours to days on a single CPU. We used CPUs to train our models because we carried out parameter sweeps (for both MLPs and KANs) to obtain Pareto frontiers; there are thousands of small models, which is why we used CPUs rather than GPUs. Admittedly, our problem scales are smaller than typical machine-learning tasks, but they are typical for science-related tasks. If the scale of your task is large, it is advisable to use GPUs.

Documentation

The documentation can be found here.

Tutorials

Quickstart

Get started with the hellokan.ipynb notebook.

More demos

More Notebook tutorials can be found in tutorials.

Advice on hyperparameter tuning

Much of the intuition about MLPs and other networks may not transfer directly to KANs. So how can you tune the hyperparameters effectively? Here is my general advice, based on my experience with the problems reported in the paper. Since these problems are relatively small-scale and science-oriented, my advice may not suit your case. But I want to at least share my experience so that users have better clues about where to start and what to expect from hyperparameter tuning.

  • Start from a simple setup (small KAN shape, small grid size, small data, no regularization lamb=0). This is very different from the MLP literature, where people by default use widths of order O(10^2) or higher. For example, if you have a task with 5 inputs and 1 output, I would try something as simple as KAN(width=[5,1,1], grid=3, k=3). If it doesn't work, I would first gradually increase the width. If that still doesn't work, I would consider increasing the depth. You don't need to be this conservative if you have a better understanding of your task's complexity.

  • Once acceptable performance is achieved, you can then try refining your KAN (making it more accurate or more interpretable).

  • If you care about accuracy, try the grid extension technique. An example is here. But watch out for overfitting; see below.

  • If you care about interpretability, try sparsifying the network with, e.g., model.train(lamb=0.01). It is also advisable to increase lamb gradually. After training with sparsification, plot the model; if you see some neurons that are obviously useless, you may call pruned_model = model.prune() to get the pruned model. You can then train further (to encourage either accuracy or sparsity), or do symbolic regression. A sketch of this start-small/sparsify/prune workflow is given right after this list.

  • I also want to emphasize that accuracy and interpretability (and also parameter efficiency) are not necessarily contradictory; see, e.g., Figure 2.3 in our paper. They can be positively correlated in some cases, but in other cases may display a tradeoff. So it is good not to be greedy and to aim for one goal at a time. However, if you have a strong reason to believe that pruning (interpretability) can also help accuracy, you may want to plan ahead, so that even if your end goal is accuracy, you push interpretability first.

  • Once you get a reasonably good result, try increasing the data size and doing a final run, which should give you even better results!
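
To make this concrete, here is a rough sketch of the workflow above. The toy task, data split, and step counts are my own assumptions, the dataset dict layout follows the hellokan tutorial, and depending on your pykan version the training call is model.fit(...) or model.train(...):

# Start simple, then sparsify, inspect, and prune.
# Toy task, split sizes, and step counts are illustrative assumptions;
# older pykan versions use model.train(...) instead of model.fit(...).
import torch
from kan import KAN

# toy task with 2 inputs and 1 output
x = torch.rand(1000, 2) * 2 - 1
y = torch.exp(torch.sin(torch.pi * x[:, [0]]) + x[:, [1]] ** 2)
dataset = {'train_input': x[:800], 'train_label': y[:800],
           'test_input': x[800:], 'test_label': y[800:]}

# 1) start from a simple setup, no regularization
model = KAN(width=[2, 1, 1], grid=3, k=3)
model.fit(dataset, steps=50, lamb=0.0)

# 2) refine for interpretability: sparsify, plot, prune
model.fit(dataset, steps=50, lamb=0.01)
model.plot()                          # look for obviously useless neurons
pruned_model = model.prune()
pruned_model.fit(dataset, steps=50)   # train further, or move on to symbolic regression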

Disclaimer: "try the simplest thing first" is the mindset of physicists, which may be personal/biased, but I find this mindset quite effective and it keeps things well-controlled for me. Also, the reason I tend to choose a small dataset at first is to get faster feedback in the debugging stage (my initial implementation is slow, after all!). The hidden assumption is that a small dataset behaves qualitatively similarly to a large dataset, which is not necessarily true in general, but usually true for the small-scale problems I have tried. To know whether your data is sufficient, see the next paragraph.

Another thing to keep in mind: constantly check whether your model is in the underfitting or overfitting regime. If there is a large gap between train and test losses, you probably want to increase the data or shrink the model (grid is more important than width, so first try decreasing grid, then width). This is also why I like to start from simple models: make sure the model is first in the underfitting regime, then gradually expand it into the "Goldilocks zone".
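
A quick way to check this gap, reusing the model and dataset dict from the sketch above (both of which are my own assumptions, not part of the repo):

# Compare train and test RMSE to gauge under-/overfitting.
import torch

with torch.no_grad():
    train_err = torch.sqrt(torch.mean((model(dataset['train_input']) - dataset['train_label']) ** 2))
    test_err = torch.sqrt(torch.mean((model(dataset['test_input']) - dataset['test_label']) ** 2))
print(f"train RMSE: {train_err.item():.4f}  test RMSE: {test_err.item():.4f}")
# A large test/train gap suggests overfitting: add data or shrink the model
# (reduce grid first, then width).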

Citation

@article{liu2024kan,
  title={KAN: Kolmogorov-Arnold Networks},
  author={Liu, Ziming and Wang, Yixuan and Vaidya, Sachin and Ruehle, Fabian and Halverson, James and Solja{\v{c}}i{\'c}, Marin and Hou, Thomas Y and Tegmark, Max},
  journal={arXiv preprint arXiv:2404.19756},
  year={2024}
}

Contact

If you have any questions, please contact [email protected]

Author's note

I would like to thank everyone who is interested in KANs. When I designed KANs and wrote the code, I had math & physics examples (which are quite small-scale!) in mind, so I did not give much consideration to efficiency or reusability. I am honored to receive this attention, which is way beyond my expectation. So I accept any criticism from people complaining about the efficiency and reusability of the code; my apologies. My only hope is that you find model.plot() fun to play with :).

For users who are interested in scientific discovery and scientific computing (the users this repo was originally intended for), I'm happy to hear about your applications and to collaborate. This repo will continue to serve mostly this purpose, probably without significant updates for efficiency. In fact, there are already implementations like efficientkan or FourierKAN that look promising for improving efficiency.

For users with a machine learning focus, I have to be honest that KANs are likely not a simple plug-in that can be used out of the box (yet). Hyperparameters need tuning, and tricks specific to your application may need to be introduced. For example, GraphKAN suggests that KANs are better used in latent space (with embedding and unembedding linear layers after the inputs and before the outputs). KANRL suggests that some trainable parameters are better fixed in reinforcement learning to increase training stability.

The most common question I've been asked lately is whether KANs will be next-gen LLMs. I don't have good intuition about this. KANs are designed for applications where one cares about high accuracy and/or interpretability. We do care about LLM interpretability for sure, but interpretability can mean wildly different things for LLMs and for science. Do we care about high accuracy for LLMs? I don't know; scaling laws seem to imply so, but probably not to very high precision. Also, accuracy can mean different things for LLMs and for science. This subtlety makes it hard to directly transfer conclusions from our paper to LLMs, or to machine learning tasks in general. However, I would be very happy if you have enjoyed the high-level ideas (learnable activation functions on edges, or interacting with AI for scientific discovery), which are not necessarily the future, but can hopefully inspire and impact many possible futures. As a physicist, the message I want to convey is less "KANs are great" and more "try thinking critically about current architectures and seek fundamentally different alternatives that can do fun and/or useful stuff".

I would like to welcome people to be critical of KANs, but also to be critical of the critiques themselves. Practice is the only criterion for testing understanding (实践是检验真理的唯一标准). We don't know many things beforehand until they are really tried and shown to succeed or fail. As much as I'm eager to see the success modes of KANs, I'm equally curious about the failure modes of KANs, to better understand their boundaries. KANs and MLPs cannot replace each other (as far as I can tell); they each have advantages in some settings and limitations in others. I would be intrigued by a theoretical framework that encompasses both and could even suggest new alternatives (physicists love unified theories, sorry :).
