BRN Discussion Ongoing

FuzM

Member


Didn't realise that state space models were developed by Prof Chris Eliasmith from the University of Waterloo.

The very same person that Mercedes has signed an MoU for research collaboration with.

Mercedes MoU
 
  • Like
  • Fire
  • Love
Reactions: 17 users

TheDrooben

Pretty Pretty Pretty Pretty Good

Interesting article by Dipti Vachani, SVP GM Automotive, Arm



Happy as Larry
 
  • Like
  • Love
  • Fire
Reactions: 24 users

Diogenese

Top 20
Didn't realise that state space models were developed by Prof Chris Eliasmith from the University of Waterloo.

The very same person that Mercedes has signed an MoU for research collaboration with.

Mercedes MoU
Hi Fuz,

Now there's a bit of morphic resonance.

It was only 3 days ago that I mentioned Eliasmith in relation to this:

TENNs-PLEIADES: Building Temporal Kernels with Orthogonal Polynomials

Yan Ru Pei, Olivier Coenen

https://arxiv.org/html/2405.12179v3

...

The seminal work proposing a memory encoding using orthogonal Legendre polynomials in a recurrent state-space model is the Legendre Memory Unit (LMU) [33], where Legendre polynomials (a special case of Jacobi polynomials) are used. The HiPPO formalism [11] then generalized this to other orthogonal functions including Chebyshev polynomials, Laguerre polynomials, and Fourier modes. Later, this sparked a cornucopia of works interfacing with deep state space models including S4 [12], H3 [2], and Mamba [10], achieving impressive results on a wide range of tasks from audio generation to language modeling. There are several common themes among these networks that PLEIADES differ from. First, these models typically only interface with 1D temporal data, and usually try to flatten high dimensional data into 1D data before processing [12, 37], with some exceptions [21]. Second, instead of explicitly performing finite-window temporal convolutions, a running approximation of the effects of such convolutions are performed, essentially yielding a system with infinite impulse responses where the effective polynomial structures are distorted [31, 11]. And in the more recent works, the polynomial structures are tenuously used only for initialization, but then made fully trainable. Finally, these networks mostly use an underlying depthwise structure [14] for long convolutions, which may limit the network capacity, albeit reducing the compute requirement of the network.
[33] Aaron Voelker, Ivana Kajić, and Chris Eliasmith. Legendre Memory Units: Continuous-time representation in recurrent neural networks. Advances in Neural Information Processing Systems, 32, 2019. [Uni of Waterloo]


Our Pleiades paper differentiates our SSM from the Legendre Memory Unit approach that Eliasmith proposed.
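For anyone wanting to see what Eliasmith's LMU actually computes, here's a minimal NumPy sketch of the continuous-time state-space pair (A, B) from the Voelker, Kajić and Eliasmith paper [33]; the Euler discretization and toy input are my own illustration, not code from the paper or from BrainChip.

```python
# Minimal LMU memory sketch (illustration only, based on Voelker et al. 2019).
# The memory state m obeys dm/dt = A m + B u, with A and B chosen so that m
# holds the coefficients of a Legendre-polynomial approximation of the input
# u over a sliding window of length theta.
import numpy as np

def lmu_matrices(order: int, theta: float):
    """Continuous-time (A, B) for an LMU memory of the given polynomial order."""
    q = np.arange(order)
    r = (2.0 * q + 1.0) / theta                       # per-row scaling
    i, j = np.meshgrid(q, q, indexing="ij")
    A = r[:, None] * np.where(i < j, -1.0, (-1.0) ** (i - j + 1))
    B = r * (-1.0) ** q
    return A, B

order, theta, dt = 6, 1.0, 1e-3
A, B = lmu_matrices(order, theta)
m = np.zeros(order)
for t in np.arange(0.0, 2.0, dt):
    u = np.sin(2.0 * np.pi * t)        # toy 1-D input stream
    m += dt * (A @ m + B * u)          # forward-Euler step of dm/dt = Am + Bu
# m now approximates the Legendre coefficients of the last theta seconds of u.
```

Deep SSMs like S4 start from this same kind of (A, B) initialization (via HiPPO) but then let it train freely, which is one of the distinctions the PLEIADES paper draws.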
 
  • Like
  • Love
  • Fire
Reactions: 15 users

Bravo

If ARM were an arm, BRN would be its biceps💪!

Arm has started working on solutions for SoftBank relating to Stargate but is unable to provide specifics in terms of products or timelines at this time.


Arm to develop its own chips, says Stargate offers “huge potential” for “different design opportunities”


Comments made on company’s Q1 2026 earnings call
July 31, 2025 By Charlotte Trueman

Arm is looking to develop its own chips, CEO Rene Haas said during the company’s Q1 2026 earnings call.
Seemingly confirming reports from earlier this year, Haas told analysts on the call that Arm was “continuing to explore the possibility of moving beyond our current platform into additional compute to subsystems, chiplets, and potentially full end solutions.”

Arm-based chips in development – TSMC
He said that the company had accelerated its investment in R&D to “ensure that these opportunities are executed successfully,” adding that owner SoftBank had expanded its IP licensing and design services agreements with Arm, and Arm was working with the Japanese conglomerate to help it build towards its “greater, broader AI vision.”
When asked if the expanded licensing and design service agreements with SoftBank related to Stargate, Haas was somewhat coy.
“At a very high level, Stargate, which is a joint investment venture between SoftBank and OpenAI, is looking to scale up to 10GW over the next number of years in terms of overall investment. That is a lot of compute, and there's a huge potential for lots of different design opportunities,” he said. “SoftBank has a very broad AI vision. We're looking to help them with that.
“Again, without mentioning specific products and application spaces, you can imagine in a data center that size, running different workloads around inference, training and such, and today, all of the Stargate opportunities use Arm as the core CPU.

“We have a unique opportunity to provide solutions there. So a lot of that work has now started, but we're not able to give you any specifics in terms of products or timelines,” Haas said.
Despite the news, shares in Arm fell by around eight percent after the company posted its Q1 2026 results.
Revenue for the quarter was up 12 percent year-on-year (YoY) to $1.05 billion, but missed analysts' expectations of $1.06bn. Of that total, royalty revenue was $585 million, up 25 percent YoY, but below the $595m projected, while licensing revenue for the quarter was $468m, down one percent YoY.
“Royalty revenue is growing across all target end markets, including smartphones, data center, automotive, and IoT,” Haas said, adding that during the quarter, Arm had signed three additional compute subsystems (CSS) licenses with its five existing customers, which included two data center licenses.
Introduced in 2023, Arm’s Neoverse CSS simplifies and accelerates the adoption of Arm Neoverse-based technology into new compute solutions by enabling its partners to build specialized silicon more affordably and quickly than previous discrete IP solutions.
Haas also reiterated comments made by the company in July, saying the number of customers using Arm-based chips in data centers has increased 14x since 2021, while its data center customers have reached 70,000. The company said it had also seen a 12x increase in the number of startups using Arm chips during the same period.
For Q2 2026, Arm is projecting revenue of between $1.01bn and $1.11bn, with analyst estimates again at $1.06bn.

 
  • Like
  • Fire
Reactions: 15 users

TheDrooben

Pretty Pretty Pretty Pretty Good
  • Like
  • Fire
  • Love
Reactions: 26 users

7für7

Top 20
Newly published patent from Digimarc in which we get a mention..........

US20250245465 LASER MARKING OF MACHINE-READABLE CODES https://patentscope.wipo.int/search/en/detail.jsf?docId=US460152523&_cid=P21-MDTM9J-81491-1



Happy as Larry


Yes… but Akida is only mentioned here as part of a list of hardware examples that could be used to implement the described algorithms. It has nothing to do with the patent itself.

It’s basically like a coach explaining to a soccer player that the game is called soccer and is played with balls… and then listing that you could use balls from Adidas, Umbro, or other brands…

“Still another type of processor hardware is a neural network chip, e.g., the Intel Nervana NNP-T, NNP-I and Loihi chips, the Google Edge TPU chip, and the BrainChip Akida neuromorphic SoC.”

The mention of Akida here is purely descriptive – the patent would work the exact same way without it.
 
  • Like
Reactions: 1 user

TheDrooben

Pretty Pretty Pretty Pretty Good
Yes… but Akida is only mentioned here as part of a list of hardware examples that could be used to implement the described algorithms. It has nothing to do with the patent itself.

It’s basically like a coach explaining to a soccer player that the game is called soccer and is played with balls… and then listing that you could use balls from Adidas, Umbro, or other brands…

“Still another type of processor hardware is a neural network chip, e.g., the Intel Nervana NNP-T, NNP-I and Loihi chips, the Google Edge TPU chip, and the BrainChip Akida neuromorphic SoC.”

The mention of Akida here is purely descriptive – the patent would work the exact same way without it.
The reason I posted this is that it shows an increasing awareness of the capabilities of Akida... I never said the patent definitely involved using Akida. The mere mention in the patent makes this worth posting IMO, especially alongside the other processors mentioned.

Happy as Larry
 
  • Like
  • Love
  • Fire
Reactions: 32 users
The reason I posted this is that it shows an increasing awareness of the capabilities of Akida... I never said the patent definitely involved using Akida. The mere mention in the patent makes this worth posting IMO, especially alongside the other processors mentioned.

Happy as Larry
Thanks for sharing; it's a possibility, which is good to see.
 
  • Like
  • Fire
Reactions: 9 users

7für7

Top 20
The reason I posted this is that it shows an increasing awareness of the capabilities of Akida... I never said the patent definitely involved using Akida. The mere mention in the patent makes this worth posting IMO, especially alongside the other processors mentioned.

Happy as Larry

Don’t worry… I just wanted to trigger haters … just kidding…

But jokes aside… I wrote that because some people just don’t read through what’s written and immediately assume that this patent has something to do with Akida. It wasn’t directed at you… more as a clarification.
 
  • Like
Reactions: 8 users

manny100

Top 20
Thanks for sharing; it's a possibility, which is good to see.
Agree, it's good to see, as it demonstrates that growth is underway in the Neuromorphic Edge AI industry.
Without a growing industry, BRN will not do well.
If the industry grows as expected, then as the tech leader we should do very well.
We will either be swallowed up by a bigger fish for our tech or we will grow into a huge business.
It all comes down to the industry growing and thriving.
 
  • Like
  • Fire
  • Love
Reactions: 13 users

Innatera claims world's first mass-market neuromorphic microcontroller for the sensor edge

Interviews | May 21, 2025


"The Pulsar chip has a heterogenous architecture that combines analog and digital neuromorphic blocks with a traditional convolutional neural network accelerator and a RISC-V core. "
....

I really would like to know what this "digital neuromorphic block" IP looks like… :unsure:

The last 4C announcement gives me some hope...
Innatera is 100% pure competition.
There is no "hope" of them using any BrainChip IP.

They just found another way to "skin the cat" is all.
 
  • Like
  • Thinking
  • Wow
Reactions: 10 users
Innatera is 100% pure competition.
There is no "hope" of them using any BrainChip IP.

They just found another way to "skin the cat" is all.
How this compares to Akida is the big question.
 
  • Like
  • Wow
Reactions: 3 users
How this compares to Akida is the big question.
My knowledge of this is virtually non-existent, but my biased (though probably correct) layman's opinion is that AKIDA is on a completely different (higher) level to Innatera's technology. In a direct comparison for the specific use cases Innatera targets, though, it's simply going to come down to which solution is "chosen".
If the OEMs want an OTS (off the shelf) solution, then Innatera are supplying the chips for that.

Our comparable tech would possibly be AKIDA E or AKIDA Pico?
But that's offered as IP only at this stage.
 
  • Like
  • Wow
Reactions: 8 users

Rach2512

Regular

Sorry if already posted, I noticed these guys were No. 69 on FF's list. Any chance we could be involved?
 
  • Fire
  • Like
Reactions: 4 users

CHIPS

Regular

Sorry if already posted, I noticed these guys were No. 69 on FF's list. Any chance we could be involved?

Weren't they the ones saying that they are actively working with Akida for about a year already?

Yes, Cecilia Pisano said that to somebody from Tata I think.

Here it is:


She even said that more than once:






 
  • Like
  • Fire
  • Love
Reactions: 23 users
Thinking out loud, at least one of these engineering fees recently announced would IMO be related to the Pico design. Sean did mention they were asked for this specific design, so BRN made it for them; my guess is that this would be for an IoT device.
I see them purchasing millions of these designs in 2026.
 
  • Like
  • Thinking
  • Fire
Reactions: 18 users

Sorry if already posted, I noticed these guys were No. 69 on FF's list. Any chance we could be involved?

@Rach2512 there's a chance, as they've been liking and commenting on BrainChip for 1-2 yrs. An employee recently commented that she'd been trialling Akida for 12 months, to which a TATA employee black-catted her to say TATA has used and trialled BC since 2019 (that was the software version; from faded memory, a robot on a screen copying actions from a camera).

The company provides goods and services in space, which is right up our alley. There's also a Staalion project (may not be spelt right) we might be helping them with.

Sorry for the vagueness of the reply, but I run off my phone and memory, and I often get confused about who's doing what with whom, as there are so many players in the ecosystem. I reckon the ESA is also involved with the Staalion project.

Until we see the licence, contract and revenue though it’s one of many in the pipeline.
 
  • Like
  • Love
  • Fire
Reactions: 26 users

manny100

Top 20
Innatera is 100% pure competition.
There is no "hope" of them using any BrainChip IP.

They just found another way to "skin the cat" is all.
According to my reading they are best suited to different market segments.
I cannot find any peer-reviewed papers comparing them.
Innatera does not use the words 'on-chip learning' as BrainChip does, but talks about 'real-time intelligence' and 'adaptation'. According to CEO Kumar (see the EE Times article): "the main limitation of the Innatera fabric is that it is not self learning, Kumar said, noting that the neuron types are fixed, chosen for their suitability for a wide range of pattern recognition. While functions cannot be changed, parameters can be, he said."
Interesting, the different methods used by both.
It would be great to see a peer-reviewed comparison.
Until then I am a bit uncertain as to the extent of competition Innatera offers.
The EE Times article is a good read.
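To make Kumar's point concrete, here's a toy Python contrast (entirely my own illustration, not either company's actual design): a fixed-function spiking neuron whose parameters can be configured but whose behavior never changes at runtime, versus one that also adapts its weights on-chip as data arrives.

```python
# Toy contrast: fixed-function vs. on-chip-learning spiking neurons.
# (My own sketch, not Innatera's or BrainChip's implementation.)
import numpy as np

class FixedLIF:
    """Leaky integrate-and-fire: the function is fixed, parameters are settable."""
    def __init__(self, weights, threshold=1.0, leak=0.9):
        self.w = np.asarray(weights, dtype=float)
        self.threshold, self.leak = threshold, leak
        self.v = 0.0
    def step(self, x):
        self.v = self.leak * self.v + self.w @ x   # integrate with leak
        fired = self.v >= self.threshold
        if fired:
            self.v = 0.0                           # reset after a spike
        return fired

class OnChipLearningLIF(FixedLIF):
    """Same dynamics, but the weights adapt online whenever the neuron fires."""
    def step(self, x, lr=0.01):
        fired = super().step(x)
        if fired:
            self.w += lr * np.asarray(x, dtype=float)  # crude Hebbian update
        return fired

x = np.array([0.2, 0.8])
fixed = FixedLIF(weights=[0.5, 0.5])
adaptive = OnChipLearningLIF(weights=[0.5, 0.5])
for _ in range(5):
    fixed.step(x)
    adaptive.step(x)
print(fixed.w, adaptive.w)   # fixed.w is unchanged; adaptive.w has drifted
```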
 
  • Like
Reactions: 3 users

Diogenese

Top 20
According to my reading they are best suited to different market segments.
I cannot find any peer-reviewed papers comparing them.
Innatera does not use the words 'on-chip learning' as BrainChip does, but talks about 'real-time intelligence' and 'adaptation'. According to CEO Kumar (see the EE Times article): "the main limitation of the Innatera fabric is that it is not self learning, Kumar said, noting that the neuron types are fixed, chosen for their suitability for a wide range of pattern recognition. While functions cannot be changed, parameters can be, he said."
Interesting, the different methods used by both.
It would be great to see a peer-reviewed comparison.
Until then I am a bit uncertain as to the extent of competition Innatera offers.
The EE Times article is a good read.
Hi manny,

An Innatera patent application tries to capture all means of converting an analog signal to a spike train, but leans heavily on a VCO (voltage-controlled oscillator) in the description.

WO2024023111A1 SYSTEM AND METHOD FOR EFFICIENT FEATURE-CENTRIC ANALOG TO SPIKE ENCODERS 20220725




A signal processing circuit for a spiking neural network, comprising an interface for converting an analog input signal to a corresponding spike-time representation of the analog input signal. The interface comprises an analog-to-information (A/information) converter configured to produce a modulated signal which represents one or more features of the analog input signal; a feature detector circuit configured to compare the modulated signal with a reference signal representing a reference feature, and configured to produce an error signal indicating a difference between the modulated signal and the reference signal; a feature extractor circuit, which comprises a locked loop circuit having an input for receiving the error signal and configured to produce an output signal representing an occurrence of one or more of the features represented by the modulated signal; and an encoder circuit, which is configured to encode the output signal into spike trains for input to the spiking neural network.

[0092] … The A/frequency converter 32B may comprise a voltage-controlled oscillator (VCO).

4. The signal processing circuit of any of the preceding claims, wherein the feature is one or more of
i) specific characteristics, such as transient features, steady-state features,
ii) specific properties, such as (non)linearity features, statistical features, stationary features, transfer-function features, energy content, and/or based on
iii) specific domain features, such as time-, delay-, frequency-, phase-domain features,
preferably wherein the A/information converter comprises an analog-to-time converter which converts the analog input signal into a modulated signal which represents certain time-domain features such as delay, frequency and/or phase
…

[00101] The type of encoding used in encoding circuit 35 may vary depending on the type of parameters used in the converter 32, detector 33 and feature extractor 34. When looking at the delay parameter, one could use time-to-first spike (TTFS), inter-spike interval (ISI), burst, or delay synchrony encoding. When looking at the frequency parameter, rate or frequency synchrony encoding might be used. When looking at the phase parameter, phase or phase synchrony encoding might be used.

“Time-to-first-spike” sounds a bit like the rank-order coding we obtained from Spikenet, which uses the order of arrival, not specifically the time of arrival.
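To illustrate that distinction (my own toy code, not from either company's patents or products): time-to-first-spike keeps the actual spike times, while rank-order coding keeps only the order in which channels fire.

```python
# Toy spike-encoding sketch: TTFS keeps precise times, rank-order coding
# keeps only the firing ORDER. (Illustration only, not patent code.)
import numpy as np

def ttfs_encode(values, t_max=10.0):
    """Stronger inputs spike earlier; returns an absolute spike time per channel."""
    v = np.asarray(values, dtype=float)
    v = (v - v.min()) / (np.ptp(v) + 1e-12)   # normalize to [0, 1]
    return t_max * (1.0 - v)                  # value 1.0 -> spike at t = 0

def rank_order_encode(values):
    """Returns each channel's firing rank (0 = first); exact times are discarded."""
    times = ttfs_encode(values)
    return np.argsort(np.argsort(times))

x = [0.9, 0.1, 0.5, 0.7]
print(ttfs_encode(x))        # [ 0.  10.   5.   2.5] -- precise spike times
print(rank_order_encode(x))  # [0 3 2 1] -- only the arrival order survives
```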

This one is for ML/federated learning, not on-chip learning:

WO2025012331A1 METHOD FOR TRAINING MACHINE LEARNING MODELS FOR STOCHASTIC SUBSTRATES 20230711



The present invention relates to a method for training signal processing pipeline for deployment to a programmable fabric of a target device. The method comprises obtaining a model and characterization data of the components of the target device, obtaining programmable parameter values of the signal processing pipeline. Next, a plurality of target devices is simulated. The simulated target devices are based on the characterization data, such that the simulated target devices represent digital twins and/or the stochastic variability of the plurality of target devices. Optimization methods are used to compute updates of the programmable parameter values of the programmable parameters for each of the simulated target devices independently, after which the programmable parameter value updates are reduced to a single update of the programmable parameter values of the signal processing pipeline.

[0070] After a system performance threshold is passed or convergence is reached, a complete description of the principal network can be deployed to any number of target hardware devices in step 109, making up the hardware deployment 100C.
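As a rough sketch of what that abstract describes (my reading of it, with invented function names, not Innatera's implementation): simulate a population of "digital twin" devices with stochastic component variation, compute a parameter update on each twin independently, then reduce the per-twin updates to one shared update, much like federated averaging.

```python
# Hedged sketch of training across simulated stochastic device twins.
# All names and the toy objective are my own illustration.
import numpy as np

rng = np.random.default_rng(0)

def simulate_twin(params, noise_scale=0.05):
    """Each twin perturbs the nominal params to mimic device-to-device variability."""
    return params + rng.normal(0.0, noise_scale, size=params.shape)

def local_update(twin_params, lr=0.1):
    """Toy per-device optimization step toward a target (stands in for training)."""
    target = np.ones_like(twin_params)
    return lr * (target - twin_params)        # returns the update, not new params

params = np.zeros(4)
for step in range(100):
    updates = [local_update(simulate_twin(params)) for _ in range(16)]
    params += np.mean(updates, axis=0)        # reduce many updates to a single one
print(params)  # converges near the target despite per-device noise
```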
 
  • Like
  • Wow
  • Fire
Reactions: 13 users
According to my reading they are best suited to different market segments.
I cannot find any peer-reviewed papers comparing them.
Innatera does not use the words 'on-chip learning' as BrainChip does, but talks about 'real-time intelligence' and 'adaptation'. According to CEO Kumar (see the EE Times article): "the main limitation of the Innatera fabric is that it is not self learning, Kumar said, noting that the neuron types are fixed, chosen for their suitability for a wide range of pattern recognition. While functions cannot be changed, parameters can be, he said."
Interesting, the different methods used by both.
It would be great to see a peer-reviewed comparison.
Until then I am a bit uncertain as to the extent of competition Innatera offers.
The EE Times article is a good read.
I'm not doubting that our technology is superior, Manny, but when it comes to the low-end applications that Pulsar is aimed at, that doesn't really matter.

They are going for the "low-hanging fruit", something BrainChip has never really focused on.
We've always been looking at the Big End of Town.

We now have AKIDA E and Pico (although AKIDA 1.0 IP was always available with a minimum number of "nodes"; Renesas only licensed 2 or something?).

But that requires more investment and commitment from an OEM (and more time) to design in and "tape out" a chip etc. than Innatera is offering with an off-the-shelf chip.

Actual performance comparisons, or some extra features such as on-chip learning (which probably aren't as necessary for low-end applications), don't really mean much at that end, when you are looking at the differences in investment and commitment for the OEMs.

I'm heavily invested here and am on BrainChip's (and your) "side"; I'm just being impartial and honest about this.

We don't have any idea how much commercial progress Innatera is making with Pulsar.
They may be facing as many market-penetration and acceptance issues as us, or more.
And if that's the case, mass-producing their chips may turn out to be a big mistake for them.
 
  • Like
  • Thinking
  • Fire
Reactions: 24 users