BRN Discussion Ongoing

Murphy

Life is not a dress rehearsal!

I do get somewhere there, but the humbling thing is I met someone at the AGM who belongs in the top 20, and he is so humble he doesn't show any sign of how much he holds and just keeps quiet.
BTW I am going through some health issues and it's hard for me to type, so forgive me for any typos.
Hope the health issue gets resolved soon mate.
 
Reactions: 23 users

Murphy

Life is not a dress rehearsal!

Could Akida be used here?​

I do know we have a partnership with Cadence.
Main parts highlighted in orange!

Cadence Accelerates On-Device and Edge AI Performance and Efficiency with New Neo NPU IP and NeuroWeave SDK for Silicon Design​

SAN JOSE, Calif.— September 14, 2023 -- Cadence Design Systems, Inc. (Nasdaq: CDNS) today unveiled its next-generation AI IP and software tools to address the escalating demand for on-device and edge AI processing. The new highly scalable Cadence® Neo™ Neural Processing Units (NPUs) deliver a wide range of AI performance in a low-energy footprint, bringing new levels of performance and efficiency to AI SoCs. Delivering up to 80 TOPS performance in a single core, the Neo NPUs support both classic and new generative AI models and can offload AI/ML execution from any host processor—including application processors, general-purpose microcontrollers and DSPs—with a simple and scalable AMBA® AXI interconnect. Complementing the AI hardware, the new NeuroWeave™ Software Development Kit (SDK) provides developers with a “one-tool” AI software solution across Cadence AI and Tensilica® IP products for no-code AI development.
“While most of the recent attention on AI has been cloud-focused, there are an incredible range of new possibilities that both classic and generative AI can enable on the edge and within devices,” said Bob O’Donnell, president and chief analyst at TECHnalysis Research. “From consumer to mobile and automotive to enterprise, we’re embarking on a new era of naturally intuitive intelligent devices. For these to come to fruition, both chip designers and device makers need a flexible, scalable combination of hardware and software solutions that allow them to bring the magic of AI to a wide range of power requirements and compute performance, all while leveraging familiar tools. New chip architectures that are optimized to accelerate ML models and software tools with seamless links to popular AI development frameworks are going to be incredibly important parts of this process.”
The flexible Neo NPUs are well suited for ultra-power-sensitive devices as well as high-performance systems with a configurable architecture, enabling SoC architects to integrate an optimal AI inferencing solution in a broad range of products, including intelligent sensors, IoT and mobile devices, cameras, hearables/wearables, PCs, AR/VR headsets and advanced driver-assistance systems (ADAS). New hardware and performance enhancements and key features/capabilities include:
  • Scalability: Single-core solution is scalable from 8 GOPS to 80 TOPS, with further extension to hundreds of TOPS with multicore
  • Broad configuration range: supports 256 to 32K MACs per cycle, allowing SoC architects to optimize their embedded AI solution to meet power, performance and area (PPA) tradeoffs
  • Integrated support for a myriad of network topologies and operators: enables efficient offloading of inferencing tasks from any host processor—including DSPs, general-purpose microcontrollers or application processors—significantly improving system performance and power
  • Ease of deployment: shortens the time to market to meet rapidly evolving next-generation vision, audio, radar, natural language processing (NLP) and generative AI pipelines
  • Flexibility: Support for Int4, Int8, Int16, and FP16 data types across a wide set of operations that form the basis of CNN, RNN and transformer-based networks allows flexibility in neural network performance and accuracy tradeoffs
  • High performance and efficiency: Up to 20X higher performance than the first-generation Cadence AI IP, with 2-5X the inferences per second per area (IPS/mm2) and 5-10X the inferences per second per Watt (IPS/W)
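As a sanity check on the quoted figures, peak throughput for a MAC array can be estimated as MACs/cycle × 2 ops per MAC × clock frequency. A minimal sketch; the ~1.22 GHz clock is an assumption back-calculated from the quoted 80 TOPS / 32K MACs figures, not a number from Cadence:

```python
def peak_tops(macs_per_cycle: int, clock_hz: float) -> float:
    """Peak throughput in TOPS: each MAC counts as 2 ops (multiply + add)."""
    return macs_per_cycle * 2 * clock_hz / 1e12

# Largest quoted config: 32K MACs/cycle at an assumed ~1.22 GHz clock
print(peak_tops(32 * 1024, 1.22e9))  # ≈ 80 TOPS
```

The same formula shows why the single-core range spans four orders of magnitude: scaling MACs/cycle from 256 to 32K (128×) and varying the clock covers the 8 GOPS to 80 TOPS span.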
Since software is a critical part of any AI solution, Cadence also upgraded its common software toolchain with the introduction of the NeuroWeave SDK. Providing customers with a uniform, scalable and configurable software stack across Tensilica DSPs, controllers and Neo NPUs to address all target applications, the NeuroWeave SDK streamlines product development and enables an easy migration as design requirements evolve. It supports many industry-standard domain-specific ML frameworks, including TensorFlow, ONNX, PyTorch, Caffe2, TensorFlow Lite, MXNet, JAX and others for automated end-to-end code generation; Android Neural Network Compiler; TF Lite Delegates for real-time execution; and TensorFlow Lite Micro for microcontroller-class devices.
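The low-precision data types listed above rest on affine quantization, which is how toolchains such as TensorFlow Lite map float models onto int8 hardware. A framework-agnostic sketch of the standard scheme (the scale and zero-point values are illustrative, not from any Cadence tool):

```python
def quantize(x: float, scale: float, zero_point: int) -> int:
    """Affine int8 quantization: q = round(x / scale) + zero_point, clamped to [-128, 127]."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q: int, scale: float, zero_point: int) -> float:
    """Recover the approximate float value from its int8 code."""
    return (q - zero_point) * scale

scale, zp = 0.05, 0
q = quantize(1.0, scale, zp)        # 20
x = dequantize(q, scale, zp)        # 1.0
print(quantize(100.0, scale, zp))   # 127 (saturates at the int8 ceiling)
```

Int16 and Int4 use the same scheme with wider or narrower clamp ranges, trading accuracy against the silicon area and power figures quoted above.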

“For two decades and with more than 60 billion processors shipped, industry-leading SoC customers have relied on Cadence processor IP for their edge and on-device SoCs. Our Neo NPUs capitalize on this expertise, delivering a leap forward in AI processing and performance,” said David Glasco, vice president of research and development for Tensilica IP at Cadence. “In today’s rapidly evolving landscape, it’s critical that our customers are able to design and deliver AI solutions based on their unique requirements and KPIs without concern about whether future neural networks are supported. Toward this end, we’ve made significant investments in our new AI hardware platform and software toolchain to enable AI at every performance, power and cost point and to drive the rapid deployment of AI-enabled systems.”

“At Labforge, we use a cluster of Cadence Tensilica DSPs in our Bottlenose smart camera product line to enable best-in-class AI processing for power-sensitive edge applications,” said Yassir Rizwan, CEO of Labforge, Inc. “Cadence’s AI software is an integral part of our embedded low power AI solution, and we’re looking forward to leveraging the new capabilities and higher performance offered by Cadence’s new NeuroWeave SDK. With an end-to-end compiler toolchain flow, we can better solve challenging AI problems in automation and robotics—accelerating our time to market to capitalize on generative AI-based application demand and opening new market streams that may not have been possible otherwise.”

The Neo NPUs and the NeuroWeave SDK support Cadence’s Intelligent System Design™ strategy by enabling pervasive intelligence through SoC design excellence.

Availability

The Neo NPUs and the NeuroWeave SDK are expected to be in general availability beginning in December 2023. Early engagements have already started for lead customers. For more information, please visit www.cadence.com/go/NPU.

About Cadence

Cadence is a pivotal leader in electronic design, building upon more than 30 years of computational software expertise. The company applies its underlying Intelligent System Design strategy to deliver software, hardware and IP that turn design concepts into reality. Cadence customers are the world’s most innovative companies, delivering extraordinary electronic products from chips to boards to systems for the most dynamic market applications, including consumer, hyperscale computing, 5G communications, automotive, mobile, aerospace, industrial and healthcare. For nine years in a row, Fortune magazine has named Cadence one of the 100 Best Companies to Work For. Learn more at cadence.com.
Bravo had some big connections with Cadence outlined, I reckon. Sounds promising!


If you don't have dreams, you can't have dreams come true
 
Reactions: 9 users
Good to see Tata Elxsi staff liking the BRN / Tata announcement article from a week ago... and NVIDIA too, I guess :LOL:

[Two LinkedIn screenshots attached]
 
Reactions: 12 users

Diogenese

Top 20

[Quoted post: "Could Akida be used here? I do know we have a partnership with Cadence." followed by the Cadence Neo NPU / NeuroWeave SDK press release reproduced in full above]
Hi Cartagena,

I don't recall a partnership with Cadence. Do you have a reference?

Cadence have a clunky CNN circuit with ALU 125:

US11687831B1 Method, product, and apparatus for a multidimensional processing array for hardware acceleration of convolutional neural network inference 20200630
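For context, the core workload a multidimensional MAC array like the one in this patent accelerates is the 2-D convolution at the heart of CNN inference. A minimal pure-Python sketch of that operation (an illustration of what such hardware computes, not the patent's circuit):

```python
def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution (strictly cross-correlation, as used in CNN inference):
    slide the kernel over the image and accumulate one MAC per overlapping element."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]  # one multiply-accumulate
            row.append(acc)
        out.append(row)
    return out

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
k = [[1, 0], [0, 1]]
print(conv2d_valid(img, k))  # [[6, 8], [12, 14]]
```

The four nested loops are exactly what a hardware processing array parallelises; the "clunky" part Diogenese points to is how the patent schedules those MACs across its ALUs.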
[Patent figure attached]
 
Reactions: 12 users

Frangipani

Regular
Always good to hear from semiconductor analyst Marc Kennis, a long-time BRN supporter (Google, for instance, came up with the following 2016 link: https://www.livewiremarkets.com/wir...tion-australia-s-got-several-part-2-brainchip), whether he is interviewing Peter van der Made or Sean Hehir or writing favourably about Brainchip.

‘Favourably’ is actually quite the understatement when you read his article about the potential effect that Arm’s IPO could soon have on the performance of certain ASX chip stocks: it will in fact be nigh on impossible to top Marc Kennis’s verdict below on BRN being “the ultimate AI stock”! 🤩

Oh, and he is evidently also a fan of Weebit Nano.



ASX chip stocks could do well on the back of ARM’s solid 25% trading debut gain​

Marc Kennis, September 15, 2023

ARM is the biggest IPO of the year so far​

If you like ASX chip stocks, the one thing that couldn’t have escaped your attention this week was ARM’s IPO on Nasdaq. Even if you’re not that active in the markets, you will likely have noticed that the ARM IPO was being talked about everywhere.

Why? Because it’s a massive IPO, the biggest so far in 2023. And it can be considered a bellwether for the IPO market during the rest of the year and into 2024. Additionally, if ARM does well in the next little while, other Tech stocks, including ASX chip stocks, could start to do a bit better again after a few crappy weeks of trading. The Philadelphia Semiconductor Index (SOX) is down more than 7% in the last 2 weeks!



You need a CPU to run a GPU!​

The timing of ARM’s IPO has everything to do with the recent boom in AI (artificial intelligence) stocks, with NVIDIA (NASDAQ:NVDA) leading the charge. You can’t run a GPU chip (Graphics Processing Unit) without a CPU (Central Processing Unit). So, you need chips like the ones ARM designs to run AI systems. Hence, the large appetite for ARM shares in this IPO.
The shares jumped almost 25% this Thursday in the US on their first day of trading on Nasdaq. We believe this solid debut could boost sentiment around ASX chip stocks as well!


Several ASX chip stocks are exposed to AI too

ASX chip stocks are minnows in the grander semiconductor scheme of things. Why would they do well on the back of ARM’s strong trading debut? Well, a few of the ASX-listed semiconductor stocks have exposure to AI in one form or another.

BrainChip (ASX:BRN), for instance, is the ultimate AI stock, in our view, as it is currently commercialising a chipset that runs Spiking Neural Networks (SNN). In essence, BRN’s Akida chip can learn fully autonomously, i.e. it theoretically doesn’t need anyone to tell it what it should learn from the data it is being fed. It can figure this out for itself….the ultimate AI chip. In practice, though, Akida will be given certain parameters depending on the specific application it is being used for.
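For readers unfamiliar with SNNs, the basic unit of a spiking network is the leaky integrate-and-fire neuron: it accumulates input, leaks charge over time, and emits a spike only when a threshold is crossed, which is why spiking hardware can sit near-idle between events. A deliberately simplified sketch (the threshold and leak values are illustrative; this is not Akida's actual neuron model):

```python
def lif_run(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: membrane potential decays by `leak`
    each step, accumulates the input current, and emits a spike (then resets)
    when it crosses `threshold`."""
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current
        if v >= threshold:
            spikes.append(1)
            v = 0.0          # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.5, 0.5, 0.5, 0.0, 1.2]))  # [0, 0, 1, 0, 1]
```

The sparsity is the point: computation (and therefore energy) is only spent when spikes actually occur, which underpins the low-power claims made for neuromorphic chips like Akida.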

You can check out some of the research that our friends at Pitt Street Research have written on BrainChip here.


ReRAM can be used in AI applications as well​

Another example of an ASX-listed company that is exposed (or will be exposed) to AI is Weebit Nano (ASX:WBT). WBT has developed a new type of Non-Volatile Memory (NVM) called ReRAM. This is currently being commercialised in embedded memory applications and will be further developed for standalone memory applications as well.

But ReRAM can also be used in neuromorphic processing, where the memory cells mimic the way a biological brain works, similar to BrainChip’s Akida. For WBT, we think this application is still some years away, but it is definitely something that is well within the realm of possibilities for the company.
Pitt Street Research have also done quite some work on Weebit Nano.

Another ReRAM play, 4DS Memory (ASX:4DS), can potentially tap the AI opportunity with its Interface Switching ReRAM. The company recently made some big technical headway and got another step closer to commercialisation, although more development and testing will be needed. Check out some research on 4DS Memory here!

Successful IPOs can improve overall market sentiment​

Regardless of the exact end market ASX chip stocks are focused on, though, we believe a successful ARM IPO, and potentially more successful Tech and Chip IPOs later this year, can turn overall market sentiment for the better. This should benefit ASX-listed Tech stocks in general, especially if and when it becomes more likely that Central Banks around the world, the US Fed in particular, are done hiking interest rates.

In addition, we think we are getting closer to a turn in the global semiconductor cycle, which should initially benefit the memory manufacturers (DRAM, Flash), but also the Logic companies, and subsequently the chip equipment companies.
So, we think investors should get set for 2024 and pile into Tech stocks, semiconductor stocks in particular, and don't forget ASX chip stocks!
 
Reactions: 49 users

Sirod69

bavarian girl ;-)
Valeo (LinkedIn, 21 min ago):

"From an industrial point of view, AI is a real booster for our competitiveness."

At AI for Industry, organized by Artefact at the Palais Brongniart in Paris, Romain Bruniaux, Valeo Industrial VP, highlighted the potential of artificial intelligence applied to production and logistics. In particular, he spoke of the benefits of predictive maintenance, quality improvement and energy savings. Valeo has already observed up to 15% gains in a pilot plant.

AI for Industry brings together players in the field of artificial intelligence and industry, startups and institutional players, to highlight the use cases and ways in which AI can contribute to the success of the Industry.
[Photo attached]
 
Reactions: 22 users

Reuben

Founding Member
Md, there will be more than 1480.. some people have it in a few accounts. Our super itself is over 100k....

There are only 1480 people in the world with 100k of shares in this company? Thank you, Lord, for allowing me to be one of those. That leaves how many people? :) Kudos to the others too that jumped on board below 100k. But I bet the top 20 will be happy soon :) Now is a good time to be "stardust", who happens to be top 20. Who amongst us is stardust? :)
 
Last edited:
Reactions: 13 users

Baisyet

Regular
Reactions: 8 users

IloveLamp

Top 20
[LinkedIn screenshot attached]
 
Reactions: 5 users

Mt09

Regular

[Quoted post: "Could Akida be used here? I do know we have a partnership with Cadence." followed by the Cadence Neo NPU / NeuroWeave SDK press release reproduced in full above]
Partnership with Cadence? Where’d ya pull that from? (Please don’t say Google Bard.)
 
Reactions: 1 user

wilzy123

Founding Member

[Quoted post: "Could Akida be used here? I do know we have a partnership with Cadence." followed by the Cadence Neo NPU / NeuroWeave SDK press release reproduced in full above]

“At Labforge, we use a cluster of Cadence Tensilica DSPs in our Bottlenose smart camera product line to enable best-in-class AI processing for power-sensitive edge applications,” said Yassir Rizwan, CEO of Labforge, Inc. “Cadence’s AI software is an integral part of our embedded low power AI solution, and we’re looking forward to leveraging the new capabilities and higher performance offered by Cadence’s new NeuroWeave SDK. With an end-to-end compiler toolchain flow, we can better solve challenging AI problems in automation and robotics—accelerating our time to market to capitalize on generative AI-based application demand and opening new market streams that may not have been possible otherwise.”

The Neo NPUs and the NeuroWeave SDK support Cadence’s Intelligent System Design™ strategy by enabling pervasive intelligence through SoC design excellence.

Availability

The Neo NPUs and the NeuroWeave SDK are expected to be in general availability beginning in December 2023. Early engagements have already started for lead customers. For more information, please visit www.cadence.com/go/NPU.

About Cadence

Cadence is a pivotal leader in electronic design, building upon more than 30 years of computational software expertise. The company applies its underlying Intelligent System Design strategy to deliver software, hardware and IP that turn design concepts into reality. Cadence customers are the world’s most innovative companies, delivering extraordinary electronic products from chips to boards to systems for the most dynamic market applications, including consumer, hyperscale computing, 5G communications, automotive, mobile, aerospace, industrial and healthcare. For nine years in a row, Fortune magazine has named Cadence one of the 100 Best Companies to Work For. Learn more at cadence.com.

This is low value bait and misleading. Yet another untrustworthy forum character.
 
  • Like
  • Fire
Reactions: 4 users

Cartagena

Regular
Hi Cartagena,

I don't recall a partnership with Cadence. Do you have a reference?

Cadence have a clunky CNN circuit with ALU 125:

US11687831B1 Method, product, and apparatus for a multidimensional processing array for hardware acceleration of convolutional neural network inference 20200630
View attachment 44700

Hi Diogenese, thanks for your excellent explanation of the circuit Cadence is using, and for correcting me. I'm not 100% sure about the partnership, so I've edited my post. Some time ago I thought BrainChip and Cadence were working together; I definitely read something online and I'm trying to remember where. Considering I posted that in the late hours of the night, I must have been too tired and perhaps over-enthusiastic. I will try to find where it was. Either way, I appreciate your input, which was mature, respectful and not an attack on my post, unlike others here.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 22 users

Cartagena

Regular
This is low value bait and misleading. Yet another untrustworthy forum character.
There's no need to attack my post in that manner when I unintentionally made an error; we're all human.
Yes, I thought there was some connection (innocently), but I have since removed my post. I suggest you remove your post as well, which I find immature and unacceptable.
 
Last edited:
  • Like
  • Love
Reactions: 37 users

IloveLamp

Top 20
Screenshot_20230916_084142_LinkedIn.jpg
 
  • Like
  • Love
  • Fire
Reactions: 14 users

IloveLamp

Top 20
Screenshot_20230916_084914_LinkedIn.jpg
 
  • Like
  • Love
Reactions: 6 users
Pull apart the camera and send us a photo of the internal

I'd be lying if I said that thought didn't cross my mind.
 
  • Fire
Reactions: 1 users

Fenris78

Regular
There's no need to attack my post in that manner when I unintentionally made an error; we're all human.
Yes, I thought there was some connection (innocently), but I have since removed my post. I suggest you remove your post as well, which I find immature and unacceptable.
Every one of Wilzy123's posts is inflammatory and unacceptable. I would have hoped Zeebot would have removed him from this forum by now (people are moderated for much less). No different to the BS from hotcrapper, which is why most of us came here with good intentions. He adds zero value, other than antagonizing others here. Going on my ignore list, Wilzy.
 
  • Like
  • Love
  • Fire
Reactions: 45 users

Fenris78

Regular
Loving this news! AI and Cortex-M85, as well as cloudless Cortex-M architecture... the stars definitely seem to be aligning.
 
  • Like
  • Fire
Reactions: 9 users