BRN Discussion Ongoing



The Air Force Research Lab is soliciting white papers for cutting-edge computing capabilities that could address size, weight and power constraints for military platforms.

The broad agency announcement, released Wednesday on Sam.gov, comes about a month after AFRL’s information directorate opened a new Extreme Computing Facility in Rome, New York.

The lab is looking for vendors for research, development, integration, test and evaluation of “technologies/techniques to support research in the focus areas computational diversity and efficient computing architectures, machine learning and artificial intelligence in embedded system and architectures, computing at the edge, nanocomputing, space computing, and robust algorithms and applications,” per the BAA.

Nearly $500 million in total funding is anticipated for the initiative in fiscal 2024-2028.


AFRL wants to explore and develop computational capabilities “with greater sophistication, autonomy, intelligence, and assurance for addressing dynamic mission requirements imposed by Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) and Cyber applications and size, weight, and power (SWaP) constrained Air Force Platforms,” according to the call for white papers.

Of particular interest is emerging tech that offers “revolutionary computational capabilities which enable greater system adaptability, autonomy, and intelligence while improving information availability throughout the C4ISR enterprise. This includes high performance embedded computing that supports on-board processing using advanced machine learning applications, robust and secure machine learning technology to strengthen and defend military applications of artificial intelligence and machine learning, non-conventional neuromorphic systems and applications, tools to increase the productivity of developing applications, methods and architectures that can provide dramatic improvements in the performance/cost of systems,” the BAA states.

For nanocomputing, the lab is looking for ways to overcome the limitations of today’s semiconductor tech.

“Currently, the competitive advantages of nanocomputing cannot be realized with current CMOS technologies and novel architectures. Only with new, CMOS compatible materials and devices that enhance and/or complement existing nanoelectronics will acceleration of nanocomputing occur. The objective of this research is to explore current and emerging nanoelectronics for information processing towards novel bio-inspired computing architectures that utilize ultra-low consumed power,” according to the solicitation.

Potential applications of the technology could include autonomous systems and neuron-electronic bio-interfaces, according to AFRL.


Efforts related to neuromorphic computing and machine learning are geared toward enhancing knowledge and development of “computationally intelligent systems towards increased perception, adaptability, resiliency, and autonomy for energy efficient, agile” Air and Space Forces platforms.

In this regard, AFRL is interested in the utilization of advancements in computational neuroscience, nanoelectronics, nanophotonics, high-performance computing, and material science.

Another area of focus for the anticipated work is “embedded deep learning and the trade space of accuracy vs. computing resources as typical deep neural networks are pushed to low size, weight, and power embedded devices.”

R&D for "trusted and robust" machine learning capabilities that are resilient to "perturbations" in input data and secured against digital attacks from potential enemies is also expected to be part of the initiative.

Additional applications for the tech could include pattern recognition and signature analysis, autonomous adaptive operations, human-machine collaboration, neural control of complex systems, in situ training of neuromorphic hardware, and online learning in neural networks, per the BAA.
 

Diogenese

Top 20
View attachment 51785



Qualcomm & Apple have some very compelling ARM-based solutions that are primed to tackle AMD & Intel in the AI PC segment in 2024.

Apple & Qualcomm Expected to Act as Catalysts For Progression of ARM In The AI PC Era, x86 Chips From AMD & Intel Face Heated Competition In 2024 & Beyond

We have seen a recent shift in the industry towards ARM-based solutions, especially with their potential adoption by newly emerging competitors. With Apple boosting the share of ARM architectures within the laptop segment, it looks like we might see widespread adoption of the architecture across the wider PC industry as well, mainly since ARM has been able to close the performance gap with x86 while being much more power efficient.

ARM already holds a dominant position in the mobile industry, with Qualcomm and MediaTek having utilized the architecture for quite some time now.

However, the market share of ARM in the PC industry is going to see a rapid rise in the coming years, since Qualcomm has already introduced its own PC SoC, known as the "Snapdragon X Elite", which boasts impressive performance and is expected to hit shelves by mid-2024. Moreover, it is rumored that NVIDIA and AMD might launch ARM-based CPUs by 2025 as well, which means that the x86 architecture will get tough competition.


The next-gen AI PC market is definitely something that all chipmakers are currently eyeing. All of these companies include dedicated NPUs within their chips to accelerate AI workloads and bring additional capabilities through a robust software stack and ecosystem.

Qualcomm has announced that its AI Engine offers up to 75 TOPS on the fastest X Elite SoCs, while AMD just announced its updated Ryzen 8040 "Hawk Point" APUs with up to 39 TOPS (16 TOPS from the NPU). Their successor, codenamed Strix Point and arriving in 2H 2024, is expected to deliver up to a 3x uplift in AI TOPS, hitting almost 50 TOPS from the XDNA 2 NPU alone.

Apple also offers around 18 TOPS with its M3 SoC, and while that's a lower number than the rest of the competition, Apple's software ecosystem, which runs on its own OS, offsets the need for a more powerful NPU thanks to optimizations. Lastly, we have Intel, who has also been making big claims about the NPU featured in its Core Ultra "Meteor Lake" CPUs, which debut next week on the 14th of December.

ARM is also making strides in aiding artificial intelligence through its CPU solutions: just recently, the company launched its Cortex-M52 processor, which is equipped with the company's "Helium" M-Profile Vector Extension and delivers a significant performance uplift for machine learning (ML) and digital signal processing (DSP) applications. Since the PC industry is going to see a large-scale influx of AI-based features, firms like ARM need to deliver performance in this domain.

It will be interesting to see how the influx of ARM-focused chips changes the dynamics of the PC industry, especially the AI PC segment, since it would not only bring in much more diversity but would also result in a more competitive market.

2024 AI PC Platforms

| Brand | Apple | Qualcomm | AMD | Intel |
|---|---|---|---|---|
| CPU Name | M3 | Snapdragon X Elite | Ryzen 8040 "Hawk Point" | Meteor Lake "Core Ultra" |
| CPU Architecture | ARM | ARM | x86 | x86 |
| CPU Process | 3nm | 4nm | 4nm | 7nm |
| Max CPU Cores | 16 Cores (Max) | 12 Cores | 8 Cores | 16 Cores |
| NPU Architecture | In-House | Hexagon NPU | XDNA 1 NPU | Crestmont E-Core |
| NPU AI TOPS | 18 TOPS | 75 TOPS (Peak) | 16 TOPS (39 TOPS All) | TBD |
| GPU Architecture | In-House | Adreno GPU | RDNA 3 | Alchemist Arc Xe-LPG |
| Max GPU Cores | 40 Cores | TBD | 12 Compute Units | 8 Xe-Cores |
| GPU TFLOPs | TBD | 4.6 TFLOPS | 8.9 TFLOPS | ~4.5 TFLOPS |
| Memory Support (Max) | LPDDR5-6400 | LPDDR5X-8533 | LPDDR5X-7500 | LPDDR5X-8533 |
| Availability | Q4 2024 | Mid-2025 | Q1 2024 | Q4 2024 |


They mention ARM Helium, which is basically a DSP software enhancement, accepting 128-bit vectors over 2 clock cycles on a 64-bit bus.

There is no mention of ARM's Ethos hardware NPU which is based on MACs.

And let's not forget Akida is compatible with all ARM processors.

Unfortunately, if we take the article at face value, Apple has gone with software upgrades to handle AI - after all, the Apple hardware architecture would have been designed years before we cozied up to ARM.

We also know Qualcomm's Snapdragon has its in-house Hexagon 8.2 which includes support for transformer natural language models for speech to text and 4-bit data useful in image processing.

AMD was footling around with memristors (analog) at least as recently as 2018, but it now plugs its Ryzen 7040 with XDNA AI, which is also built on MACs.

And Intel, of course, has many irons in the AI fire, but we are part of the IFS ecosystem.

So we are part of the ARM and Intel ecosystems, but we have no concrete links to AMD or Qualcomm that I recall.
 
What's the chances today?



Justchilln

Regular
Hey Gang!

I've been thinking about the new Renesas 22nm RA-family chip, which is expected to launch towards the end of the year. This chip is an extension of the RA family of 32-bit Arm® Cortex®-M microcontrollers and is the one that is supposed to include our amazing technology. 🥳

Anyway, I was thinking about the question of revenue based on mass volume, trying to work it from various angles to get a gauge on what we could expect to see coming into the coffers in the near future.

So we know from our website that a 28nm chip costs $15 for mass volumes (see below). And I'm presuming 22nm might be a little more expensive, let's say $20 a chip??

View attachment 51759



As for the term "mass volumes", what does that look like from a quantity perspective? I am pretty sure I remember Sean saying after the last AGM that total chip production costs on mass volumes are really high, into the tens of millions of dollars. I think he may have said up to $50 million dollars.

Sooo, based on this rudimentary information, Renesas must be pretty confident they can sell enough chips to cover their costs and make a decent profit - which is a lot of chips when you think about it.

All IMO. DYOR.


View attachment 51758
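The back-of-envelope above is easy to sanity-check. A minimal Python sketch, taking the poster's guessed $20-per-chip price and the recalled ~$50 million production cost as assumptions (neither figure is confirmed):

```python
# Rough break-even sketch using the post's assumed figures:
# ~$20 per 22nm chip (a guess) and ~$50M total production cost (a recollection).
chip_price_usd = 20.0        # assumed mass-volume price per chip
production_cost_usd = 50e6   # recalled total production cost

break_even_units = production_cost_usd / chip_price_usd
print(f"Break-even volume: {break_even_units:,.0f} chips")
```

Under those assumptions, covering a $50 million production run means selling 2.5 million chips before any profit, which gives a feel for what "mass volumes" would have to mean here.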
Hi Bravo
Where did you find that pricing for the Akida chip? Is it live on the company website now?
I’ve never seen that before
 

Perhaps

Regular
Hi Bravo
Where did you find that pricing for the Akida chip? Is it live on the company website now?
I’ve never seen that before
Renesas has an IP licence and will pay royalties when the IP is used in commercial products. Who cares about a price for an Akida chip? It just doesn't matter.
 

AARONASX

Holding onto what I've got
Renesas has an IP licence and will pay royalties when the IP is used in commercial products. Who cares about a price for an Akida chip? It just doesn't matter.
As I've asked before, what range of value could an IP licence potentially be worth in royalties?
 

Perhaps

Regular
As I've asked before, what range of value could an IP licence potentially be worth in royalties?
The price range for Renesas MCUs in lower volumes is roughly €5-15 per piece, and much lower in higher volumes. If the IP is used at the lower end, as announced by Renesas, royalties per piece are possibly in the 10-20 cent range. The IP business needs large volumes for real money.
This example applies to the simple 2-node licence held by Renesas. If the full licence is in use, royalties in the range of €5-10 per piece might be possible. In the end it's a matter of the contracts, so this is the estimated possible range.
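To put those per-piece figures in context, here is a small Python sketch of total royalties at a few volumes, using the 10-20 cent (2-node licence) and €5-10 (full licence) ranges from the post purely as assumptions:

```python
# Illustrative royalty totals from the post's estimated per-piece ranges.
def royalty_range(units: int, low: float, high: float) -> tuple[float, float]:
    """Return (low, high) total royalty estimates in euros for a unit volume."""
    return units * low, units * high

for units in (1_000_000, 10_000_000, 100_000_000):
    lo2, hi2 = royalty_range(units, 0.10, 0.20)   # simple 2-node licence
    lof, hif = royalty_range(units, 5.00, 10.00)  # full licence
    print(f"{units:>11,} units: 2-node EUR {lo2:,.0f}-{hi2:,.0f}, "
          f"full EUR {lof:,.0f}-{hif:,.0f}")
```

The spread illustrates the point in the post: at 10-20 cents a piece, even 10 million units is only single-digit millions in royalties, so the low-end IP business really does need very large volumes.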
 
Hey Gang!

I've been thinking about the new Renesas 22nm RA-family chip, which is expected to launch towards the end of the year. This chip is an extension of the RA family of 32-bit Arm® Cortex®-M microcontrollers and is the one that is supposed to include our amazing technology. 🥳

Anyway, I was thinking about the question of revenue based on mass volume, trying to work it from various angles to get a gauge on what we could expect to see coming into the coffers in the near future.

So we know from our website that a 28nm chip costs $15 for mass volumes (see below). And I'm presuming 22nm might be a little more expensive, let's say $20 a chip??

View attachment 51759



As for the term "mass volumes", what does that look like from a quantity perspective? I am pretty sure I remember Sean saying after the last AGM that total chip production costs on mass volumes are really high, into the tens of millions of dollars. I think he may have said up to $50 million dollars.

Sooo, based on this rudimentary information, Renesas must be pretty confident they can sell enough chips to cover their costs and make a decent profit - which is a lot of chips when you think about it.

All IMO. DYOR.


View attachment 51758
ARM is in everything, with about $2.7 billion in earnings..

If Akida is in 1% of that, that's $27 million in earnings..

More likely, though, if they get to 1%, they'll quickly get to 10%, or the equivalent of $270 million..

If the industry is growing at a 30-40% CAGR, it still shows that if BrainChip gets some market penetration, then $27-35 million in revenue could be around the corner in the next 12-18 months..

That would definitely put a rocket under BRN.AX
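The penetration arithmetic above is easy to tabulate. A minimal sketch, taking the ~$2.7 billion ARM-ecosystem earnings figure from the post as the assumed baseline (not an audited number):

```python
# Scale the post's assumed ~$2.7B baseline by hypothetical market penetration.
arm_earnings_usd = 2.7e9  # figure quoted in the post, taken as an assumption

scenarios = {share: arm_earnings_usd * share for share in (0.01, 0.05, 0.10)}
for share, revenue in scenarios.items():
    print(f"{share:.0%} penetration -> ${revenue / 1e6:,.0f}M")
```

This reproduces the post's figures: 1% of $2.7 billion is $27 million, and 10% is $270 million, before any industry-growth (CAGR) effect is applied.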
 

Justchilln

Regular
Renesas has an IP licence and will pay royalties when the IP is used in commercial products. Who cares about a price for an Akida chip? It just doesn't matter.
Thanks mate you really seem to know your stuff 😂😂😂
 
The price range for Renesas MCUs in lower volumes is roughly €5-15 per piece, and much lower in higher volumes. If the IP is used at the lower end, as announced by Renesas, royalties per piece are possibly in the 10-20 cent range. The IP business needs large volumes for real money.
This example applies to the simple 2-node licence held by Renesas. If the full licence is in use, royalties in the range of €5-10 per piece might be possible. In the end it's a matter of the contracts, so this is the estimated possible range.
So in theory, that's possibly anywhere between $50 million and $1 billion across numerous products per IP licence.
 

Perhaps

Regular
Thanks mate you really seem to know your stuff 😂😂😂
The IP business must be a real strange animal for some. Try researching the usual royalties paid in the semiconductor business; maybe that helps.
 

Justchilln

Regular
The IP business must be a real strange animal for some. Try researching the usual royalties paid in the semiconductor business; maybe that helps.
I am well aware of the company's business model.......but thanks for trying
 

Xray1

Regular
Hi Dio,

thanks for the reply.

It is interesting. Is there a teeny weeny tiny minute minuscule atom's chance you could be wrong about our inclusion.........?

Sorry to question, it's just I fail to see why our logo would be displayed front and centre at a Renesas/MosChip display that is demonstrating the RZ/V2M unless we were involved in some way?

I have also posted numerous LinkedIn likes etc. from BRN staff on this particular Renesas chipset.

I can't help but feel we are missing something.
No ..... Dio thought he was wrong once before, but then he realised that he was mistaken .. !!! .. :) :)
 

Xray1

Regular
Let's face it, ...... with all this unsubstantiated dot-joining going on here for some time now, the fact is that we will never truly know where Akida 2.0 (E, S, P) or Akida 3.0 etc. will end up ....... we can only go by what is released/disclosed in the financials, as stated by Sean H ........... let's face it, no company big or small in their right mind would want to disclose openly to the market and their competitors that Akida is their so-called "Secret Sauce".
 
Let's face it, ...... with all this unsubstantiated dot-joining going on here for some time now, the fact is that we will never truly know where Akida 2.0 (E, S, P) or Akida 3.0 etc. will end up ....... we can only go by what is released/disclosed in the financials, as stated by Sean H ........... let's face it, no company big or small in their right mind would want to disclose openly to the market and their competitors that Akida is their so-called "Secret Sauce".
The silence can only go on for so long. With all the dot-joining and the high number of companies engaging with us, something has to surface through IP sales. We have a world-class line-up and a product ahead of the game. So come May next year when the AGM is on, I hope we have at least another 2 IP contracts, plus revenue through our existing IP contracts, or else the company hasn't reached its milestones or expectations. Has the CEO signed off on an IP contract yet?
 

DK6161

Regular
Let's face it, ...... with all this unsubstantiated dot-joining going on here for some time now, the fact is that we will never truly know where Akida 2.0 (E, S, P) or Akida 3.0 etc. will end up ....... we can only go by what is released/disclosed in the financials, as stated by Sean H ........... let's face it, no company big or small in their right mind would want to disclose openly to the market and their competitors that Akida is their so-called "Secret Sauce".
I don't get this. Some camps (including those that have been pestering people on social media) are promoting and trying to force companies to admit they are using Akida, whereas those that are using Akida are trying their hardest to keep their "secret sauce".
How's that impacting our relationship with our customers?
Also assuming everything with AI/ML/edge computing uses Akida is plain annoying and embarrassing. Just stop please.
 

Moonshot

Regular
I don't get this. Some camps (including those that have been pestering people on social media) are promoting and trying to force companies to admit they are using Akida, whereas those that are using Akida are trying their hardest to keep their "secret sauce".
How's that impacting our relationship with our customers?
Also assuming everything with AI/ML/edge computing uses Akida is plain annoying and embarrassing. Just stop please.
Agree, it's not good. On the Open Neuromorphic Discord, the Akida spam on their LinkedIn posts is a running joke.
 

ndefries

Regular
I don't get this. Some camps (including those that have been pestering people on social media) are promoting and trying to force companies to admit they are using Akida, whereas those that are using Akida are trying their hardest to keep their "secret sauce".
How's that impacting our relationship with our customers?
Also assuming everything with AI/ML/edge computing uses Akida is plain annoying and embarrassing. Just stop please.
Exactly, I'm so over it. Chapman et al., I urge you to get off LinkedIn with the fake job titles and leave these people be. Just wait for the announcements that will come. It's one thing to share dots here, but it's probably a fair assumption that the pestering has caused some to dislike the company, and that's not OK.
 