IndepthDiver
TL;DR: I think TENNs is heavily based on ABR's LMUs. ABR is a competitor, but a smaller one, as they don't do IP licensing and don't currently use SNNs.
For those more technically minded:
Regarding TENNs, the presentation slides by Tony Lewis from the other week were pretty interesting.
The slides indicate TENNs is based on Chebyshev polynomials, and they're compared against Legendre polynomials on the same page. For reference, both belong to the family of Jacobi polynomials. Chebyshev expansions are generally thought to converge faster (which is better), though there are applications where Legendre or other Jacobi polynomials do better. I find this really interesting because I think TENNs was heavily inspired by the work from Applied Brain Research (ABR) on their Legendre Memory Unit (LMU).
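As a rough illustration of the Chebyshev vs Legendre comparison, here's a small sketch of my own (not from the slides): it builds truncated series for a non-smooth test function in both bases, using Gauss quadrature from numpy to estimate the expansion coefficients, and prints the max error at a few truncation degrees. The test function, degrees, and quadrature resolution are arbitrary choices for illustration.

```python
# Sketch comparing truncated Chebyshev vs Legendre series for |x| on [-1, 1].
# Expansion coefficients are estimated with Gauss quadrature from numpy.
import numpy as np
from numpy.polynomial import chebyshev as C, legendre as L

f = lambda x: np.abs(x)          # non-smooth test function (arbitrary choice)
xs = np.linspace(-1, 1, 2001)    # grid for measuring the max (uniform) error
N = 200                          # quadrature resolution

cx, cw = C.chebgauss(N)          # Gauss-Chebyshev nodes/weights (weight 1/sqrt(1-x^2))
lx, lw = L.leggauss(N)           # Gauss-Legendre nodes/weights (weight 1)

def cheb_coeffs(deg):
    # a_n = (2/pi) * integral of f(x) T_n(x) / sqrt(1-x^2) dx, with a_0 halved
    a = np.array([(2 / np.pi) * np.sum(cw * f(cx) * C.chebval(cx, np.eye(deg + 1)[n]))
                  for n in range(deg + 1)])
    a[0] /= 2
    return a

def leg_coeffs(deg):
    # a_n = (2n + 1)/2 * integral of f(x) P_n(x) dx
    return np.array([(2 * n + 1) / 2 * np.sum(lw * f(lx) * L.legval(lx, np.eye(deg + 1)[n]))
                     for n in range(deg + 1)])

for deg in (8, 16, 32, 64):
    cheb_err = np.max(np.abs(C.chebval(xs, cheb_coeffs(deg)) - f(xs)))
    leg_err = np.max(np.abs(L.legval(xs, leg_coeffs(deg)) - f(xs)))
    print(f"degree {deg:3d}: Chebyshev max err {cheb_err:.4f}, Legendre max err {leg_err:.4f}")
```

It just prints the maximum error for each truncation degree so you can see how the two bases behave on this particular function; the general "Chebyshev converges faster" claim is about this kind of uniform-error behaviour.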
LMUs were first proposed in a 2019 paper co-authored by Chris Eliasmith of ABR. Many on here will recall that PVDM won an award in 2021, with Eliasmith placing second after presenting on LMUs. PVDM won largely because a lot of shareholders from here voted for him, which I find ironic given that Brainchip's last big breakthrough was TENNs (which I think is heavily based on the LMU algorithm).
That said, I think this was one of the better directions Brainchip could have taken. The LMU is an RNN that overcomes many of the usual RNN limitations, and the ABR paper indicated it would map well onto SNNs.
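For anyone who wants to see what the LMU's memory cell actually computes, here's a minimal sketch of its linear state update as I understand it from the 2019 paper. The A and B matrices below follow my reading of the paper's equations, and the simple Euler step is my own simplification (the paper discretises more carefully), so treat this as an assumption-laden sketch rather than a reference implementation.

```python
# Minimal sketch of the LMU memory update (my reading of the 2019 paper).
import numpy as np

def lmu_matrices(order):
    """Continuous-time (A, B) of the Legendre delay system; 'order' is the memory size."""
    A = np.zeros((order, order))
    for i in range(order):
        for j in range(order):
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    q = np.arange(order)
    B = ((2 * q + 1) * (-1.0) ** q).reshape(-1, 1)
    return A, B

def run_memory(u, order=6, theta=1.0, dt=0.01):
    """Crude Euler step of theta * dm/dt = A m + B u (the paper uses a more careful discretisation)."""
    A, B = lmu_matrices(order)
    m = np.zeros((order, 1))
    states = []
    for u_t in u:
        m = m + (dt / theta) * (A @ m + B * u_t)
        states.append(m.ravel().copy())
    return np.array(states)

# Each state m_t is a fixed-size, Legendre-basis summary of the most recent
# input window of length theta -- the "memory" part of the recurrent cell.
signal = np.sin(np.linspace(0, 8 * np.pi, 800))
states = run_memory(signal, order=6, theta=1.0, dt=0.01)
print(states.shape)  # (800, 6)
```

The point is that the state m is a fixed-size summary of the recent input window, which is what lets the LMU behave like an RNN with a long, well-conditioned memory instead of one that forgets or blows up over long sequences.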
It's worth noting that ABR have produced a chip based on the LMU, but they don't do IP licensing, so their addressable market is smaller. I also don't think their chip is an SNN. They also license the LMU itself, so anyone who wants to use it has to pay them. By using a different polynomial basis and possibly making other changes, Brainchip have bypassed this and potentially made their own work patentable (you can't patent something that's already been published in a paper). And by integrating TENNs with the Akida platform, the two complement each other.
On a side note, one of the more promising architectures for replacing transformers on certain tasks right now is Mamba, as it needs less training and handles long sequences well.
I initially thought Mamba might be the next direction Brainchip would take for LLMs (after TENNs), but now I'm not so sure, as one of the slides also includes a comparison with Mamba, against which TENNs holds up relatively well. Hopefully we find out more at the AGM.
Comparison research paper:
Original LMU paper:
Tony's slides (from Berlinlover):
ABR chip:
appliedbrainresearch.com