BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!
How good would it be if Arm decides to incorporate our technology into its own chips to set it apart from other chip companies' offerings?

Recent reports indicate that Arm intends to launch the chip by mid-2025, with expectations to unveil it as early as this summer.

Since Arm already has access to our technology, they could already be testing it as part of their prototype, so presumably they wouldn't have to obtain a licence from us until just prior to the launch.

It wouldn't be a stretch to think that our technology could be a competitive differentiator for Arm against Qualcomm, given Qualcomm's focus on edge AI applications and low-power AI. And given the legal disputes over Nuvia, Arm would probably want to push further into AI acceleration to gain an edge over Qualcomm and reduce its reliance on licensing revenue from them.

For what it's worth, here's what ChatGPT had to say about why integrating BrainChip’s neuromorphic technology could be a very smart move for Arm.





[Screenshots of the ChatGPT response attached]
Reactions: 29 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
This Forbes article (see below) was only published online 13 hours ago, and it should be read bearing in mind what our very own Tony Lewis had to say on LinkedIn just 2 weeks ago.

Yet another reason for Arm to seriously consider incorporating our technology into their new chips...













Qualcomm Could Benefit Most From DeepSeek’s New, Smaller AI

Karl Freund, Contributor
Founder and Principal Analyst, Cambrian-AI Research LLC
Feb 14, 2025, 10:10am EST
Qualcomm CEO Cristiano Amon

While the DeepSeek moment crashed most semiconductor stocks as investors feared lower demand for data center AI chips, these new, smaller AI models are just the ticket for on-device AI. “DeepSeek R1 and other similar models recently demonstrated that AI models are developing faster, becoming smaller, more capable and efficient, and now able to run directly on device,” said Qualcomm CEO Cristiano Amon at the company’s recent earnings call. And within less than a week, DeepSeek R1-distilled models were running on Qualcomm Snapdragon-powered PCs and smartphones. (Qualcomm is a client of Cambrian-AI Research.)


While both Apple and Qualcomm will benefit from these new models, Qualcomm can quickly apply them beyond smartphones; the company has strong positions in other markets such as automotive, robotics, and VR headsets, as well as its emerging PC business. All these markets will benefit from the new smaller models and the applications built on them.

Apple is famous for its beautiful fully integrated designs, but Qualcomm partners with others to design and build the final product, speeding time to market and enabling broader adoption. For example, Qualcomm Snapdragon chips power both Meta Quest and Ray-Ban headsets, which enjoy over 70% market share.

Major Trends Accelerating On-device AI

Qualcomm and Apple have both been working hard to reduce model size through lower-precision math and model optimization techniques such as pruning and sparsity. Now, with distillation, we are seeing step-function improvements in the quality, performance, and efficiency of AI models that can now run on device. And these smaller models do not require users to compromise.


These new state-of-the-art smaller AI models have superior performance thanks to techniques like model distillation and novel AI network architectures, which simplify the development process without sacrificing quality. These smaller models can outperform larger ones that can really only operate in the cloud.
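
Since the article leans on "model distillation" without unpacking it, here is a minimal toy sketch of the standard idea (Hinton-style knowledge distillation; this is an illustration, not anything from Qualcomm or the article): the small student model is trained to match the large teacher's temperature-softened output distribution, typically via a KL-divergence loss.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    z = [x / temperature for x in logits]
    m = max(z)  # subtract max for numerical stability
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between the softened teacher and student distributions,
    the core objective of Hinton-style knowledge distillation."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = sum(t * (math.log(t) - math.log(s)) for t, s in zip(p_t, p_s))
    return kl * temperature ** 2  # T^2 scaling, as in the original formulation

teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))  # 0.0 when the student matches the teacher
```

Minimising this loss over training data pushes the student's predictions towards the teacher's, which is how a model a fraction of the size can retain most of the quality.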

In addition, the size of models continues to decrease rapidly. State-of-the-art quantization and pruning techniques allow developers to reduce the size of models with no material drop in accuracy.



The table below shows that the distilled versions of both the DeepSeek Qwen and Meta Llama models perform as well as or better than the larger and more expensive state-of-the-art models from OpenAI and Mistral. The GPQA Diamond benchmark is particularly interesting, as it involves deep, multi-step reasoning to solve complex queries, which many models find challenging.
The new DeepSeek-R1 shows significantly better results (accuracy) across all math and coding benchmarks vs OpenAI and Claude. (Source: Qualcomm)

So, Do You Really Need On-device AI?

The market skepticism around on-device AI is fading fast. Here is an example use case that Qualcomm has provided. Imagine you are driving along and one of your passengers mentions coffee. An LLM agent hears this and suggests a place along the route where you can stop and grab a cup. Because the driving LLM and ADAS systems run locally, a cloud-based AI cannot perform this task. This is but one example of how agents will transform AI and are especially useful on-device.
Here is a great use case for LLM agents in a car. Coffee anyone? (Source: Qualcomm)

So, the AI World Isn’t Crashing?

Not in the least. In fact, we would say that these new models are a tipping point for ubiquitous AI. Smaller, more efficient, and accurate AI models are key to helping make AI pervasive and affordable. Consequently, techniques demonstrated by DeepSeek are already being applied by mainstream AI companies to stay competitive and avoid the censorship and security pitfalls that DeepSeek presents.
And Qualcomm is perhaps the biggest winner in this evolution of models towards affordable AI that fits and runs well on the devices that already number in the billions.
 

Reactions: 28 users
Hi Dingo,

Assuming our technology is involved in Nintendo Switch 2, I've been trying to get a feel for the type of revenue we could expect once sales start to ramp up.

I asked ChatGPT how many Switch 2s would be projected to sell each year on the basis of past sales, and it said:

"Based on historical sales data and industry projections, the Nintendo Switch 2 is expected to achieve significant sales figures in its initial years. Analysts project that the console will sell between 15 and 17 million units in its first year, with total sales surpassing 80 million units by 2028." (Source: famiboards.com)

For context, the original Nintendo Switch, released in 2017, has sold over 150 million units as of December 2024, making it one of Nintendo's most successful consoles.

As far as I understand it, royalty calculations would depend on several factors, including licensing agreements and the pricing model used.

OPTION 1) The most common calculation appears to be "Per-Unit Royalty".
  • Example: If the agreed royalty is $1 per unit and Nintendo sells 20 million units, BrainChip would earn $20 million in royalties.
OPTION 2) Another calculation could be "Percentage of Product Revenue" similar to how ARM sometimes structures deals.
  • Example: If Nintendo sells the console for $400 and BrainChip gets 0.5% of sales, that’s $2 per unit.
Under the two options above, and so long as the analysts are actually correct about the projected 80 million units expected to be sold by 2028, that would equate to the following potential revenue over the next 2.5 years.

  • UNDER OPTION 1) $80 million
  • UNDER OPTION 2) $160 million
Obviously, these figures are examples only, because I have no idea what price or percentage per unit BrainChip has negotiated.
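
The arithmetic behind the two options can be sketched as follows (the $1 per unit, $400 price, 0.5% rate and 80 million units are the post's illustrative assumptions, not known contract terms):

```python
def per_unit_royalty(units, rate_per_unit):
    """Option 1: a flat dollar amount per console sold."""
    return units * rate_per_unit

def percent_of_revenue_royalty(units, unit_price, pct):
    """Option 2: a percentage of retail revenue, ARM-style."""
    return units * unit_price * pct

UNITS = 80_000_000  # analysts' projected cumulative sales by 2028 (per the post)

option1 = per_unit_royalty(UNITS, 1.00)                  # assumed $1 per unit
option2 = percent_of_revenue_royalty(UNITS, 400, 0.005)  # assumed 0.5% of a $400 console

print(f"Option 1: ${option1:,.0f}")  # Option 1: $80,000,000
print(f"Option 2: ${option2:,.0f}")  # Option 2: $160,000,000
```

Swapping in different per-unit rates or percentages (say, the 30 to 50 cents suggested further down the thread) is just a matter of changing the arguments.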

I guess we're all going to know at a certain point, once sales start, whether we begin to see those sales reflected in our quarterly reports. If revenue comes in around this time-frame, it'll be pretty darn obvious it's from MegaChips, because it's not like we have any other potential opportunities that would sell that sort of quantity of units.

And, if we don't receive any revenue at all by October, then I'd say it would be fairly obvious we are not involved and we'll all have to toddle off to the pub to drown our sorrows.
I have a feeling it may be more like option 1, Bravo, and $1 might be overly optimistic..

MegaChips licensed our technology to make themselves money, not us, so the royalty we get will be a fraction of what they're making, and we would only be a very small part of what makes the Switch 2 jiggle, overall.

So I think more like 30 to 50 cents would be closer to the mark..

Unit sales is the big question..

With ARM, RISC-V and Intel, they are among our "Technology Partners", so it may work differently as far as their need for an IP licence.

Nintendo will be a good appetiser, to stop the stomach grumbles, but we still need a main course..
 
Reactions: 17 users
Just watched the CES video from a couple of weeks ago, and I noticed the "streaming" on the banner. I've never seen that in their marketing and am wondering if there is a potential change in direction?

[Screenshot of the CES banner attached]
 
Reactions: 15 users

Diogenese

Top 20
I have a feeling it may be more like option 1, Bravo, and $1 might be overly optimistic..

MegaChips licensed our technology to make themselves money, not us, so the royalty we get will be a fraction of what they're making, and we would only be a very small part of what makes the Switch 2 jiggle, overall.

So I think more like 30 to 50 cents would be closer to the mark..

Unit sales is the big question..

With ARM, RISC-V and Intel, they are among our "Technology Partners", so it may work differently as far as their need for an IP licence.

Nintendo will be a good appetiser, to stop the stomach grumbles, but we still need a main course..
Hi DB,

ARM may have used a per-unit royalty in the past, but they are now seeking a larger slice of the pie, trying to get a higher return more in line with what they consider to be the capability/benefit enabled by their processors, for example in the form of a percentage of sales.

I guess ARM's IP licensing market is multi-faceted, covering companies that make ARM processors as COTS products, as compared with end-product manufacturers who incorporate ARM processor IP as part of a SoC in their commercial products, e.g. Qualcomm.

ARM's proposed chip manufacturing business model would be of more concern to the first group, the COTS ARM processor manufacturers.

A percent of sales model would be more complex to implement in the SoC case as far as attributing revenue between the elements of the SoC, but I suppose they could work out some formula based on the straight ARM processor sales royalty.
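
Purely to illustrate the kind of formula being imagined here (the attribution scheme and every number below are hypothetical, not known ARM terms), such a formula might prorate the royalty percentage by the processor's estimated share of the end product's value:

```python
def soc_royalty_per_unit(device_price, soc_share, cpu_share_of_soc, royalty_pct):
    """Hypothetical attribution formula: apply the royalty percentage only to
    the slice of the device price attributable to the ARM processor block
    inside the SoC (all shares are rough bill-of-materials estimates)."""
    attributed_value = device_price * soc_share * cpu_share_of_soc
    return attributed_value * royalty_pct

# Illustrative only: a $1,000 device whose SoC is 15% of its value,
# with the CPU cluster taken as 30% of the SoC, at a 5% royalty rate.
print(round(soc_royalty_per_unit(1000, 0.15, 0.30, 0.05), 2))  # 2.25
```

The hard part in practice would be agreeing on the share estimates, which is exactly the attribution complexity the paragraph above points at.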

It does not seem unreasonable to base royalties on what the processor can do rather than on what it costs to make. Of course that has to be assessed in comparison with competitive products.

It's all too complicated for me.
 
Reactions: 9 users

TECH

Regular
Gidday... I think I'm right in saying that Dr. Tony Lewis is the first senior executive to actually verbally name a competitor to BrainChip, when he recently stated that Dr. Chris Eliasmith, CTO at Applied Brain Research, had been working on state space models (SSMs) as well, while we have developed our TENNs model.

Interesting that he mentioned that; not a company we would necessarily think of first. I don't ever remember Peter or Anil publicly mentioning a competitor, but maybe someone will correct me on that.

Tech.
 
Reactions: 6 users

Diogenese

Top 20
Gidday... I think I'm right in saying that Dr. Tony Lewis is the first senior executive to actually verbally name a competitor to BrainChip, when he recently stated that Dr. Chris Eliasmith, CTO at Applied Brain Research, had been working on state space models (SSMs) as well, while we have developed our TENNs model.

Interesting that he mentioned that; not a company we would necessarily think of first. I don't ever remember Peter or Anil publicly mentioning a competitor, but maybe someone will correct me on that.

Tech.
Curiouser and curiouser ...

https://brainchip.com/brainchip-sig...keting-agreement-with-applied-brain-research/

BrainChip Signs Joint Development and Marketing Agreement with Applied Brain Research (01.02.2016)

ALISO VIEJO, CA — (Marketwired) — 02/01/16 — BrainChip, Inc., a wholly owned subsidiary of BrainChip Holdings Limited (ASX: BRN), a developer of a revolutionary new Spiking Neuron Adaptive Processor (SNAP) technology that has the ability to learn autonomously, evolve and associate information just like the human brain, has signed a strategic, joint development and marketing agreement with Applied Brain Research.
...
“The partnership with BrainChip represents a significant opportunity for our team to work with a breakthrough company,” said Dr. Chris Eliasmith, Co-founder of Applied Brain Research. “We believe neuromorphic semiconductor chips will run the next generation of AI systems, and BrainChip’s hardware technology is a complete development solution for companies entering the neuromorphic chip market.”
 
Reactions: 39 users

MDhere

Top 20
Howdy All,

I just stumbled upon this research paper titled "A Diagonal State Space Model on Loihi 2 for Efficient Streaming Sequence Processing." It is currently under double-blind review for the International Conference on Learning Representations (ICLR) 2025, which will take place from 24–28 April, 2025.

As the title suggests, the paper focuses on Intel's Loihi 2. While it doesn’t mention BrainChip’s Akida technology, it is still highly relevant to us, as Loihi 2 and Akida are frequently compared as cutting-edge neuromorphic computing platforms.

The study highlights Loihi 2’s exceptional efficiency in online token-by-token inference, stating that it "consumes approximately 1000x less energy with a 75x lower latency and a 75x higher throughput compared to the recurrent implementation of n-S4D on the Jetson GPU."

It also states "our results provide the first benchmarks of an SSM on a neuromorphic hardware platform versus an edge GPU, comparing both the recurrent and convolution modes and revealing the differences in energy, latency, throughput, and task accuracy. To the best of our knowledge, this is the most holistic picture to date of the merits of neuromorphic hardware for SSM efficiency."

The authors emphasize the broader impact of their findings, stating:

"Our work and potential optimizations and extensions can be applied and tested in real-world streaming use cases, such as keyword-spotting, audio denoising, vision for drone control, autonomous driving, and other latency- or energy-constrained domains."

While I’m not as technically proficient as many in this forum, it stands to reason that if Loihi 2 demonstrates extreme efficiency in online token-by-token inference, the same should apply to BrainChip’s Akida neural processor.

If so, this would be yet another strong validation that Akida is ideally suited for applications requiring low latency and ultra-low power consumption such as robotics, autonomous vehicles, and speech enhancement.




[Extracts 1 and 2 from the paper attached]


You had me at the words "stumbled upon". I started singing and moving to the song "Stumblin' In". Anyway, it's one of those nights where I clearly need to sleep now lol. Happy weekend, fellow BRNers. I know I've nothing to contribute tonight apart from some sing-song in my mind, but well done to all of you :)
 
Reactions: 13 users


Bravo

If ARM was an arm, BRN would be its biceps💪!
LinkedIn post dated 4 February (one week ago) by Subramaniyam Pooni of Broadcom.

Yes, he mentions BrainChip!

https://www.linkedin.com/in/manip70/overlay/about-this-profile/




[Screenshots of the LinkedIn post attached]

 
Reactions: 74 users
Gidday... I think I'm right in saying that Dr. Tony Lewis is the first senior executive to actually verbally name a competitor to BrainChip, when he recently stated that Dr. Chris Eliasmith, CTO at Applied Brain Research, had been working on state space models (SSMs) as well, while we have developed our TENNs model.

Interesting that he mentioned that; not a company we would necessarily think of first. I don't ever remember Peter or Anil publicly mentioning a competitor, but maybe someone will correct me on that.

Tech.
I think listing Intel by association previously included ABR. Intel previously had a partnership in which ABR supplied a software emulation tool for Loihi (this was not for their latest technology). Back then, IBM and Intel were the biggest competitors because they had large funds, full distribution platforms and a voice with other companies. I think they still are the strongest commercial competitors.
Ever since ABR announced their LMU technology a few years ago, I thought they were one of the strongest technology competitors, so I have been checking on them occasionally. This was further emphasised when BRN indicated their cutting-edge TENNs technology was based on Legendre / Chebyshev polynomials (the same as what ABR's LMU is based on). The LMU was proposed in a research article by ABR personnel about 3 years before TENNs.

At the moment I would guess they are still behind by at least two years, for the following reasons:
-Their TSP1 chip, which integrates their LMU technology, was only released September last year. They are newer to the chip development process and haven't had as much time to iron out the practical issues. Their chip includes a CPU, which makes it less flexible or convenient for manufacturers wanting custom or cheaper alternative solutions. In contrast, BRN released AKD 2.0 IP 18 months earlier. If ABR were to release an IP solution next month, that would put it 2 years behind BRN, though probably more, as AKD 2.0 was designed with customer feedback and likely for specific customer applications.
-Brainchip have broader support for other model architectures
-The software is likely to be more developed under BRN and more tailored for customer applications due to the partnerships they've had for so long.
-ABR only sell chips, whereas Brainchip sell IP. Contrary to some other opinions on here, I still think this is the right decision. With AKD1000 BRN have been dealing with a partially crowded market for applications where analog chips can be good enough and cheaper (several analog chip competitor videos explain this well). ABR will have this same issue. The exception here is for markets like rad-hard required applications, where BRN are the clear winner right now.
-IP is critical to high-volume uptake. In general, all the big tech companies are creating their own chips so they can build in the right balance for AI applications. I think this will extend to the edge as well, based on the different product ranges companies may consider building. Particularly for markets like wearables with tiny form factors, the ability to scale next-gen devices to lower process nodes (e.g. 3nm) when the cost is right will be an easy way to obtain performance increases.
-ABR are partly VC-backed, which risks prioritising short-term profits over long-term strategic moves.
-I don't think their partnerships are as extensively developed, which will slow down uptake.
-Their tool-chain allows customers to deploy solutions in weeks. BRN can do that much quicker (hours, from memory) due to partnerships with Edge Impulse and the like.

Note that Mercedes-Benz are doing some collaboration with the University of Waterloo based on research done by Chris Eliasmith (CTO of ABR). However, this seems to be in the broader scheme of university research partners on neuromorphic computing for ADAS purposes. While this may give them a foot in the door, I doubt it's enough to push out a well established neuromorphic partner like BRN. This UoW MoU focuses on algorithm development, and AKD is capable of running many algorithms. It will still benefit ABR though given they will get practical learnings out of it too.

I think the bigger risk would be ABR getting bought out by a giant like Intel or IBM (a VC short-term win), which could allow the technology to be scaled up at a faster rate and into their existing distribution channels.

2019 article on ABR's old technology partnership:
The company has entered into a partnership with Intel to put its software on the new Intel neuromorphic processor called Loihi. Several artificial intelligence applications, including a keyword speech recognition app and a robotic controller, were demonstrated at the Ontario Centres of Excellence Discovery conference last year. “We hope that every chip that goes out there with our partner Intel will have a little bit of ABR on it,” Suma says.

[LMU proposal December 2019]

Sep 2024 [talking about the ABR LMU integrated chip]
TSP1 is a single-chip solution for time series inference tasks such as real-time speech recognition (including keyword spotting), text-to-speech synthesis, natural language control interfaces, and sensor fusion applications. The TSP1 combines a neural processing fabric, CPU, sensor interfaces, and on-chip NVM, providing an integrated solution.

[Jan 2025]
This is further turbocharged by ABR’s AI toolchain, which enables customers to deploy solutions in weeks instead of months.

Mercedes-Benz and the University of Waterloo have signed a Memorandum of Understanding to collaborate on research led by Prof. Chris Eliasmith in the field of neuromorphic computing. The focus is on the development of algorithms for advanced driving assistance systems. By mimicking the functionality of the human brain, neuromorphic computing could significantly improve AI computation, making it faster and more energy-efficient. While preserving vehicle range, safety systems could, for example, detect traffic signs, lanes and objects much better, even in poor visibility, and react faster. Neuromorphic computing has the potential to reduce the energy required to process data for autonomous driving by 90 percent compared to current systems.

The work with the University of Waterloo complements a series of existing Mercedes-Benz research collaborations on neuromorphic computing. One focus is on neuromorphic end-to-end learning for autonomous driving. To realize the full potential of neuromorphic computing, Mercedes-Benz is building up a network of universities and research partnerships. The company is, for example, consortium leader in the NAOMI4Radar project funded by the German Federal Ministry for Economic Affairs and Climate Action. Here, the company is working with partners to assess how neuromorphic computing can be used to optimise the processing of radar data in automated driving systems. In addition, Mercedes-Benz has been cooperating with Karlsruhe University of Applied Sciences. This work centres on neuromorphic cameras, also known as event-based cameras.


March 6, 2023
The second-generation of Akida now includes Temporal Event Based Neural Nets (TENN) spatial-temporal convolutions
 
Reactions: 70 users

TECH

Regular

Fantastic having your knowledge shared, @ IndepthDiver. I personally appreciate your professional post; nice to see you back as such.

I do remember Chris Eliasmith telling me via LinkedIn that our relationship had since moved on.

Off the record, Peter did mention many years ago who he considered Brainchip's potential competitors to be at the time I posed the question, and it wasn't ABR, that's for sure. But nothing stands still in this space; stand still and you're yesterday's news!

Thanks for commenting.

Regards Chris (Tech)
 

Diogenese

Top 20
I think listing Intel by association previously included ABR. Intel previously had a partnership under which ABR supplied a software emulation tool for Loihi (this was not for their latest technology). Back then, IBM and Intel were the biggest competitors because they had large funds, full distribution platforms and a voice with other companies. I think they are still the strongest commercial competitors.
Ever since ABR announced their LMU technology a few years ago, I have considered them one of the strongest technology competitors, so I have been checking on them occasionally. This was further emphasised when BRN indicated their cutting-edge TENNs technology was based on Legendre/Chebyshev polynomials (the same foundation as ABR's LMU). The LMU was proposed in a research article by ABR personnel about three years before TENNs.
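For anyone wondering what "based on Legendre polynomials" means in practice: the core trick in LMU-style models is compressing a sliding window of signal history into a handful of Legendre coefficients. A minimal Python sketch of that idea (purely my own illustration, not ABR's or BrainChip's actual code; the function names are made up):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_coeffs(window, q):
    # Map the window onto [-1, 1] and least-squares fit q Legendre coefficients.
    x = np.linspace(-1, 1, len(window))
    return legendre.legfit(x, window, q - 1)

def reconstruct(coeffs, n):
    # Evaluate the Legendre expansion back onto n sample points.
    x = np.linspace(-1, 1, n)
    return legendre.legval(x, coeffs)

signal = np.sin(np.linspace(0, 3, 64))
c = legendre_coeffs(signal, q=6)   # 64 samples compressed to 6 numbers
approx = reconstruct(c, 64)
print(np.max(np.abs(signal - approx)))  # small reconstruction error
```

Six coefficients recover the 64-sample window almost exactly; that compact "memory" is what the recurrent LMU cell maintains over time.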

At the moment I would guess they are still behind by at least two years, for the following reasons:
-Their TSP1 chip, which integrates their LMU technology, was only released in September last year. They are newer to the chip development process and haven't had as much time to iron out the practical issues. Their chip also includes a CPU, which makes it less flexible and less convenient for manufacturers wanting custom or cheaper alternative solutions. In contrast, BRN released AKD 2.0 IP 18 months earlier. If ABR were to release an IP solution next month, that would put them two years behind BRN, and probably more, as AKD 2.0 was designed with customer feedback and likely for specific customer applications.
-Brainchip have broader support for other model architectures
-The software is likely to be more developed under BRN and more tailored for customer applications due to the partnerships they've had for so long.
-ABR only sell chips, whereas Brainchip sell IP. Contrary to some other opinions on here, I still think this is the right decision. With AKD1000 BRN have been dealing with a partially crowded market for applications where analog chips can be good enough and cheaper (several analog chip competitor videos explain this well). ABR will have this same issue. The exception here is for markets like rad-hard required applications, where BRN are the clear winner right now.
-IP is critical to high volume uptake. In general, all the big tech companies are creating their own chips, so they can build in the right balance for AI applications. I think this will extend to the edge as well based on different product ranges companies may consider building. Particularly for markets like wearables with tiny form factors, the ability to scale next-gen devices to lower process nodes (eg 3nm) when the cost is right will be an easy way to obtain performance increases.
-ABR are part VC backed, which risks prioritising short term profits over long term strategic moves.
-I don't think their partnerships are as extensively developed, which will slow down uptake.
-Their tool-chain allows customers to deploy solutions in weeks. BRN can do that much quicker (hours from memory) due to partnerships with Edge Impulse and the like.

Note that Mercedes-Benz are doing some collaboration with the University of Waterloo based on research done by Chris Eliasmith (CTO of ABR). However, this seems to be in the broader scheme of university research partners on neuromorphic computing for ADAS purposes. While this may give them a foot in the door, I doubt it's enough to push out a well established neuromorphic partner like BRN. This UoW MoU focuses on algorithm development, and AKD is capable of running many algorithms. It will still benefit ABR though given they will get practical learnings out of it too.

I think the bigger risk would be ABR getting bought out by a giant like Intel or IBM (VC short term win) which could allow the technology to be scaled up at a faster rate and into their existing distribution channels.

2019 article on ABR's old technology partnership:
The company has entered into a partnership with Intel to put its software on the new Intel neuromorphic processor called Loihi. Several artificial intelligence applications, including a keyword speech recognition app and a robotic controller, were demonstrated at the Ontario Centres of Excellence Discovery conference last year. “We hope that every chip that goes out there with our partner Intel will have a little bit of ABR on it,” Suma says.

[LMU proposal December 2019]

Sep 2024 [talking about the ABR LMU integrated chip]
TSP1 is a single-chip solution for time series inference tasks such as real-time speech recognition (including keyword spotting), text-to-speech synthesis, natural language control interfaces, and sensor fusion applications. The TSP1 combines a neural processing fabric, CPU, sensor interfaces, and on-chip NVM, providing an integrated solution.

[Jan 2025]
This is further turbocharged by ABR’s AI toolchain, which enables customers to deploy solutions in weeks instead of months.

Mercedes-Benz and the University of Waterloo have signed a Memorandum of Understanding to collaborate on research led by Prof. Chris Eliasmith in the field of neuromorphic computing. The focus is on the development of algorithms for advanced driving assistance systems. By mimicking the functionality of the human brain, neuromorphic computing could significantly improve AI computation, making it faster and more energy-efficient. While preserving vehicle range, safety systems could, for example, detect traffic signs, lanes and objects much better, even in poor visibility, and react faster. Neuromorphic computing has the potential to reduce the energy required to process data for autonomous driving by 90 percent compared to current systems.

The work with the University of Waterloo complements a series of existing Mercedes-Benz research collaborations on neuromorphic computing. One focus is on neuromorphic end-to-end learning for autonomous driving. To realize the full potential of neuromorphic computing, Mercedes-Benz is building up a network of universities and research partnerships. The company is, for example, consortium leader in the NAOMI4Radar project funded by the German Federal Ministry for Economic Affairs and Climate Action. Here, the company is working with partners to assess how neuromorphic computing can be used to optimise the processing of radar data in automated driving systems. In addition, Mercedes-Benz has been cooperating with Karlsruhe University of Applied Sciences. This work centres on neuromorphic cameras, also known as event-based cameras.


March 6, 2023
The second-generation of Akida now includes Temporal Event Based Neural Nets (TENN) spatial-temporal convolutions
Hi IDD,

Great research!

This is the ABR LMU patent:

US11238345B2 Legendre memory units in recurrent neural networks 20190306

[0009] The LSTM, GRU, NRU, and other related alternatives, are all specific RNN architectures that aim to mitigate the difficulty in training RNNs, by providing methods of configuring the connections between nodes in the network. These architectures typically train to better levels of accuracy than randomly initialized RNNs of the same size. Nevertheless, these architectures are presently incapable of learning temporal dependencies that span more than about 100-5,000 time-steps, which severely limits the scalability of these architectures to applications involving longer input sequences. There thus remains a need for improved RNN architectures that can be trained to accurately maintain longer (i.e., longer than 100-5,000 steps in a sequential time-series) representations of temporal information, which motivates the proposed Legendre Memory Unit (LMU).

[0010] In one embodiment of the invention, there is disclosed a method for generating recurrent neural networks having Legendre Memory Unit (LMU) cells including defining a node response function for each node in the recurrent neural network, the node response function representing state over time, wherein the state is encoded into one of binary events or real values, each node having a node input and a node output; defining a set of connection weights with each node input; defining a set of connection weights with each node output; defining one or more LMU cells having a set of recurrent connections defined as a matrix that determines node connection weights based on the formula:
A = [a]_{ij} ∈ ℝ^{q×q}, where a_{ij} = (2i+1) · { −1 if i < j; (−1)^{i−j+1} if i ≥ j },

where q is an integer determined by the user, and i and j are greater than or equal to zero.
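To make the claim's construction concrete, here is a minimal numerical sketch (my own illustration, not code from the patent or from ABR) that builds the q×q matrix A with a_ij = (2i+1)·(−1 if i < j, else (−1)^(i−j+1)):

```python
import numpy as np

def lmu_state_matrix(q):
    """Build the q x q LMU recurrent matrix from the patent's formula:
    a_ij = (2i+1) * (-1 if i < j else (-1)**(i - j + 1))."""
    A = np.empty((q, q))
    for i in range(q):
        for j in range(q):
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    return A

print(lmu_state_matrix(3))
```

For q = 2 this gives [[−1, −1], [3, −3]]: constant −1 above the diagonal, alternating signs on and below it, with row i scaled by (2i+1).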

PS: Any mathematics that went beyond removing shoes and socks has always been beyond my capabilities.
 



IloveLamp

Top 20
It appears a bit more cyber research work is being done within the DoD sphere.



Toby Davis​


DoD Cyber Service Academy Scholar​

United States Department of Defense Mississippi State University​

Starkville, Mississippi, United States​


About​

Graduate Student | Researcher in Cybersecurity, Artificial Intelligence, and Quantum Computing

I am a Master's student in Cybersecurity and Operations at Mississippi State University, holding a Bachelor's degree in Computer Science from The University of Southern Mississippi. I specialize in leveraging advanced computational techniques to address real-world challenges in cybersecurity, artificial intelligence, and computational biology.

Current Research:
Master's Thesis: Developing an intrusion detection system (IDS) using the Akida neuromorphic processor by BrainChip, focusing on real-time pattern recognition and energy-efficient processing.
 