BRN Discussion Ongoing

wilzy123

Founding Member
Did we have this one already? I don't remember.

Press Release

BrainChip Earns Australian Patent for Improved Spiking Neural Network​

Business Wire · 3 weeks ago · 2 minutes read


LAGUNA HILLS, Calif.–(BUSINESS WIRE)–BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, received its latest Australian patent, further strengthening its IP portfolio related to sustainable and efficient AI technologies.


Patent AU2022287647, “An Improved Spiking Neural Network,” facilitates low-shot learning. Learning is performed by adding neurons to the final layer of a previously trained network to represent a new class, with the neural network weights of the added neuron being trained with only a few samples while the remainder of the network remains unchanged. Applications impacted by this patent include biometric face recognition, speech recognition and anomaly detection in industrial systems.
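The mechanism the patent describes can be sketched in a few lines: a frozen, previously trained network plus one new output neuron per added class, whose weights are learned from only a handful of samples while everything else stays fixed. The sketch below is purely illustrative and is not BrainChip's implementation: the "backbone" is a random projection standing in for the trained spiking network, and the new neuron's weights are set prototype-style from the few-shot features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen feature extractor: in the patent this is the trained spiking
# network up to (but not including) the final layer. Here it is stood
# in for by a fixed random projection -- purely illustrative.
W_backbone = rng.standard_normal((16, 64))

def features(x):
    return np.maximum(x @ W_backbone, 0.0)  # frozen, never retrained

# Final layer: one weight vector per known class.
class_weights = {c: rng.standard_normal(64) for c in ("cat", "dog")}

def add_class(name, few_shot_samples):
    """Learn a NEW class from a handful of samples by adding one
    output neuron; the rest of the network is untouched."""
    feats = features(few_shot_samples)
    class_weights[name] = feats.mean(axis=0)  # prototype-style weights

def classify(x):
    scores = {c: features(x[None, :])[0] @ w for c, w in class_weights.items()}
    return max(scores, key=scores.get)

# Enroll a new class from just 3 samples; no retraining of the backbone.
samples = rng.standard_normal((3, 16))
add_class("owl", samples)
print(classify(samples[0]))
```

Because only one weight vector is touched, enrollment is cheap enough to run on-device, which is the point of doing this at the Edge.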
BrainChip’s Akida™ IP and MetaTF™ tools seamlessly transform contemporary neural networks into event-based or spiking networks. This patented technology is uniquely synergistic with the converted spiking networks, enabling the streamlined deployment of Edge learning algorithms and unlocking use cases that conventional AI solutions cannot achieve.
“To push the limits of AI on-chip compute, we need to continuously push the envelope of what is possible with neuromorphic technology,” said Sean Hehir, CEO of BrainChip. “BrainChip’s expanding patent portfolio ensures our freedom to practice our own inventions and to prevent others from infringing on our highly innovative intellectual property. This latest Australian patent is the next step in advancing Edge AI further than ever before.”
BrainChip’s portfolio now comprises 20 issued patents (13x US, 5x AU, 2x CN). In addition, there are 23 pending patent applications across the US, Europe, Australia, Canada, Japan, Korea, India and Israel.
Not in Australia... hence the media release.
 

wilzy123

Founding Member

Why are Brainchip shares racing higher after its AGM?​

This struggling semiconductor stock is having a good session. But why?

Warning: Motley Fool!!

Brainchip Holdings Ltd (ASX: BRN) shares are rebounding on Tuesday.

In afternoon trade, the struggling semiconductor company's shares are up 6% to 26.5 cents.

Why are Brainchip shares rising today?​

Investors have been buying the company's shares for a couple of reasons.

One is the release of its annual general meeting (AGM) update this morning. The other is news that the company has managed to avoid a spill of its board at this meeting.

With respect to its AGM update, Brainchip CEO Sean Hehir acknowledged that the company hasn't delivered meaningful revenue yet and was disappointed by this, but he remains positive about the future. He said:


Licensing deals​

Hehir believes the company is close to getting an answer from some potential customers after being in discussions for over a year. He said:


The under-pressure CEO has been heavily criticised for his big salary and bonuses and a distinct lack of commercial success since joining. However, he remains confident of delivering the goods for shareholders. He adds:


Time will tell if this proves to be the case or if the next 12 months will just be more of the same – all hype and no substance.

Board spill avoided​

Brainchip shares may also be rising today after the company avoided a disruptive board spill.

While 33.41% of votes were against its remuneration report, a sizeable 85.59% of votes were against spilling the board.

ROFL. This junk does not belong here.
 
  • Like
  • Fire
Reactions: 6 users

Terroni2105

Founding Member
1716978843710.jpeg



 
  • Like
  • Fire
Reactions: 13 users

Terroni2105

Founding Member
1716978883166.jpeg
 
  • Like
  • Fire
Reactions: 6 users

Terroni2105

Founding Member
1716978942659.jpeg
 
  • Like
  • Fire
Reactions: 5 users

Terroni2105

Founding Member
I couldn’t get all those screenshots on the one post but they are all in order.

First, the post by Rudy Pei (BrainChip ML research engineer), and then the comments under his post by Chris Jones (BrainChip Senior Product Manager).

Don't ask me what any of it means though lol 🤷‍♀️
 
  • Haha
  • Like
  • Fire
Reactions: 14 users

TECH

Regular
And your point is????

Don't forget... that's the point, toasty. You appreciate NDAs and why they are in place in the first place, and as such the curtains will remain closed until either party concerned agrees to remove them. So the waiting, dot-joining and impatience remain the number one annoyance for many shareholders.

You still whinging about management? 85%+ didn't think it warranted a spill just yet. Maybe this time next year the story will be completely reversed, but not from where I stand.

Anything worth contributing to the forum, toasty? Just asking for a friend.

Tech 🥱
 
  • Like
  • Haha
  • Fire
Reactions: 28 users
I hope that someday someone writes a story about the journey we are all on. We are all here, be it from different places or different times or different perspectives, but we are all here. We wait: some are OK with it, some frustrated with it, and some so impatient, disrespectful and just rude beyond belief.
There's a story to be told
And it's a game of poker in a way
How many have thrown in their hand
Or bought back in to just crash and burn.
There will come a time when, like most scars, the pain fades and all that is left is a faint mark
Just enough to remember
How many people have come here with good intentions or not.
Investors alike, we wait our turn.
Every day is a day closer to the edge
The smell of fresh air rising up from the far edge; you can almost taste it, feel it, or is it just a dream.
 
  • Like
  • Love
  • Fire
Reactions: 28 users
Are you sure that's fresh air rising up mate? 🤔..

100.gif
 
  • Haha
Reactions: 5 users
Could be the CyberNeuro-RT now live and available :D 🚀 🔥

You'd like to think that if they're selling it, they will need to source the two "neuromorphic offerings" either direct (e.g. BRN & Intel) via their own licence, or via an existing licensee (e.g. MegaChips or Renesas)?


IMG_20240529_214534.jpg




CyberNeuro-RT​


An AI/ML-driven, highly-scalable, real-time network defense & threat intelligence tool with GPU or low-power neuromorphic chip deployment




A Quantum Ventura, Lockheed Martin, and Penn State Innovation
Quantum Ventura’s CyberNeuro-RT (CNRT) technology offering has been developed in partnership with Lockheed Martin Co.’s MFC Division and Pennsylvania State University under partial funding from the U.S. Department of Energy.
Cutting-Edge Unsupervised ML

Scalable Unsupervised Outlier Detection (SUOD)
  • Large-scale heterogeneous outlier detection
  • 6 ML Algo Ensemble
  • Model Approximation for Complex Models
  • Execution Efficiency Improvement for Task Load Balancing in Distributed Systems
Variational Autoencoder (VAE)
  • Encoder-Decoder Architecture
  • Variational => Highly Regularized Encoder
  • Trained to Minimize Reconstruction Error between initial input and reconstructed output
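The reconstruction-error idea behind those bullets can be shown compactly. The variational machinery is omitted here: a plain linear autoencoder fit to "normal" traffic by SVD demonstrates the scoring principle (a VAE adds a stochastic, highly regularized encoder, but anomalies are still flagged the same way). All data and dimensions below are made up for illustration, not from CyberNeuro-RT.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Normal" traffic lives near a 2-D subspace of a 10-D feature space.
basis = rng.standard_normal((2, 10))
normal = rng.standard_normal((500, 2)) @ basis + 0.05 * rng.standard_normal((500, 10))

# Linear encoder/decoder fit to the normal data (top principal components).
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
encoder = Vt[:2]  # rows span the learned "normal" subspace

def anomaly_score(x):
    """Reconstruction error: small for traffic resembling the training
    data, large for anything the model has never seen."""
    z = encoder @ (x - mean)          # encode
    x_hat = encoder.T @ z + mean      # decode (reconstruct)
    return float(np.linalg.norm(x - x_hat))

typical = normal[0]
attack = 5.0 * rng.standard_normal(10)  # off-subspace point
print(anomaly_score(typical) < anomaly_score(attack))  # expect True
```

This is why unsupervised detection suits zero-day threats: nothing about the "attack" was labelled, it simply fails to reconstruct.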
75x Dataset Growth in Under 2 Months
  1. Existing Dataset Ingestion: Proprietary system enables ingestion of any existing network capture dataset with flexible support for any labelling system
  2. From-the-wild Zero Day Sampling: System enables capturing and simulation of novel threats for additional data sampling
  3. Data Generation via Simulation: ThreatATI database and proprietary ingestion system enable sampling and augmentation for cataloged threats from proprietary and public threat databases
Proprietary Pipeline Adapts to Any Dataset
1716990062067.png


Follow Threats Home with Dark Web Tracking
1716990103899.png


At-the-edge Neuromorphic Processing
◯ Two offerings from the leading neuromorphic developers: Intel and Brainchip
◯ Small form factor, magnitudes less power consumption than GPU
◯ On-chip learning for deployment network specific attack detection



Intel Loihi

1716990156250.png

Brainchip Akida

Dashboards Minimize Operator Fatigue
Robust, Multi-Faceted, User-Friendly Cyber Analyst Dashboard
Operator Fatigue Allows Cyber Attacks To Happen
  • Large numbers of false alarms cause real threats to be missed
  • False alarms fatigue the cyber analyst further increasing risk of missed threats

The Cyber Neuro-RT Dashboard Is Designed To Minimize All Sources Of Analyst Fatigue While Presenting Timely And Meaningful Data Insights
  • AI-based false alarms are minimized (trained for a minimal false positive rate)
  • Possible threats are ranked by importance and confidence
  • Only the most relevant and likely alarms are actioned upon
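The triage logic those bullets describe reduces to ranking alerts by a combined score and surfacing only the top few. The sketch below is a generic illustration of that pattern; the `Alert` fields, weights and thresholds are hypothetical, not taken from the CyberNeuro-RT dashboard.

```python
from dataclasses import dataclass

# Hypothetical alert record -- field names are illustrative only.
@dataclass
class Alert:
    threat: str
    importance: float   # severity of the threat class, 0..1
    confidence: float   # model's belief the alert is real, 0..1

def triage(alerts, max_actioned=2, min_score=0.3):
    """Rank by importance * confidence and surface only the top few,
    so low-confidence noise never reaches the analyst."""
    scored = sorted(alerts, key=lambda a: a.importance * a.confidence, reverse=True)
    return [a for a in scored[:max_actioned] if a.importance * a.confidence >= min_score]

queue = [
    Alert("port scan", 0.4, 0.5),
    Alert("data exfiltration", 0.9, 0.8),
    Alert("misconfigured cron", 0.2, 0.3),
    Alert("credential stuffing", 0.7, 0.6),
]
for a in triage(queue):
    print(a.threat)
```

With this queue only "data exfiltration" and "credential stuffing" reach the analyst; the low-scoring alerts are held back, which is the claimed fatigue reduction.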



 

  • Like
  • Fire
  • Love
Reactions: 80 users

Tothemoon24

Top 20
The Sky is no longer the Limit


IMG_9005.jpeg
 

Last edited:
  • Like
  • Love
  • Fire
Reactions: 61 users

Tothemoon24

Top 20
IMG_9006.jpeg



This is the compute platform for the future of AI. 👇

🆕 Introducing Arm Compute Subsystems (CSS) for Client.

Designed for AI smartphones and AI PCs, CSS for Client delivers production ready physical implementations of our new CPUs and GPUs to deliver next-gen AI experiences quickly and easily.

It includes:
🔹 Our latest Armv9.2 CPU cluster, including the Arm Cortex-X925 which delivers the highest year-on-year performance uplift in the history of Cortex-X

🔹 The Arm Immortalis-G925 GPU, our most performant and efficient GPU to date with a 37% uplift in graphics performance

We are also launching new KleidiAI software to provide the simplest way for developers to get the best performance out of Arm CPUs.

So whether you want more AI, more performance or more advanced silicon, you can rely on our new solution to provide the foundation for AI-powered experiences on consumer devices. https://okt.to/PkzCt3
 
  • Like
  • Thinking
  • Wow
Reactions: 12 users

Tothemoon24

Top 20

Redefining Mobile Experiences with AI-Optimized Arm CSS for Client and New Arm Kleidi Software​


By Chris Bergey, SVP and GM of the Client Business, Arm
Artificial Intelligence (AI)SmartphonesSoftware
Share
CSS_Newsroom_1600x900_4-copy-1400x788.jpg

News highlights​

  • New compute solution, Arm Compute Subsystems (CSS) for Client, brings together Armv9 benefits with validated and verified production ready implementations of new Arm CPUs and GPUs on 3nm process nodes to enable silicon partners to rapidly innovate and speed time to market
  • AI-optimized Arm CSS for Client with next generation Cortex-X CPU, delivering highest year-on-year IPC uplift resulting in a 36% increase in performance; new Immortalis GPU brings a 37% uplift in graphics performance
  • New KleidiAI software integrates with popular AI frameworks for seamless developer experiences; KleidiAI with Arm CSS dramatically improves performance of computing applications by leveraging a wide range of Arm’s acceleration technologies (NEON, SVE2 and SME2)
With power efficiency in our DNA, the Arm platform is providing the foundation for the next wave of computing demands as the AI era accelerates. As AI models continue to rapidly evolve, we’re seeing that software begins to outpace hardware which means additional innovation is required at all levels of the compute stack. To meet these growing demands, we’re evolving our solution offering to gain the maximum benefits of leading process nodes and announcing the newest Arm compute solution for AI smartphones and PCs – Arm Compute Subsystems (CSS) for Client.
Arm CSS for Client provides the performance, efficiency and accessibility to deliver leading AI-based experiences and makes it easier and faster for our silicon partners to build Arm-based solutions and get to market quickly. CSS for Client provides the foundational computing elements for flagship SoCs and features the latest Armv9.2 CPUs and Immortalis GPUs, as well as production ready physical implementations for CPU and GPU on 3nm and the latest Corelink System Interconnect and System Memory Management Units (SMMUs).

Unprecedented CPU and GPU performance and efficiency​

CSS for Client delivers a step change in platform capabilities to continue pushing the boundaries of premium mobile experiences. This is the fastest Arm compute platform addressing demanding real-life Android workloads with greater than 30 percent increase on compute and graphics performance and 59 percent faster AI inference for broader AI/ML and computer vision (CV) workloads.
At the heart of CSS for Client is Arm’s most performant, efficient and versatile CPU cluster ever for maximum performance and power efficiency. The new Arm Cortex-X925 delivers the highest year-on-year performance uplift in the history of Cortex-X. Taking advantage of the leading edge 3nm process nodes, assuming a 3.8GHz clock rate and maximum cache size, the result is a massive 36 percent increase in single-thread performance when comparing to 2023 smartphone flagship 4nm SoCs. For AI, Cortex-X925 provides an incredible 41 percent performance uplift to dramatically improve the responsiveness of on-device generative AI, like large language models (LLMs).
The push for leading-edge performance is combined with leading-edge efficiency through our new Arm Cortex-A725 CPU, which delivers a 35 percent improvement in performance efficiency to target AI and mobile gaming use cases. This is supported by a refreshed Arm Cortex-A520 CPU and an updated DSU-120 that provide power efficiency and scalability improvements for consumer devices that adopt the latest Armv9 CPU clusters. Learn more about the new Armv9 CPUs in this blog.
The new Arm Immortalis-G925 GPU, which is our most performant and efficient GPU to date, delivers 37 percent more performance across a wide range of leading mobile gaming applications, as well as 34 percent more performance when measured over multiple AI and ML networks. While Immortalis-G925 is for the flagship smartphone market, the highly scalable new GPU family, including Arm Mali-G725 and Mali-G625 GPUs, targets a broad range of consumer device markets, from premium mobile handsets to smartwatches and XR wearables. Learn more about Arm’s new GPUs in this blog.

Optimizing software for outstanding developer innovation​

We are relentlessly focused on millions of developers worldwide, ensuring they have access to the performance, tools and software libraries required to create the next wave of AI-enabled applications. To enable developers to land these innovations quickly at the highest performance, we’re introducing Arm Kleidi, which includes KleidiAI for AI workloads and KleidiCV for computer vision applications. KleidiAI is a set of compute kernels for developers of AI frameworks, providing them with frictionless access to the best performance possible on Arm CPUs, across a wide range of devices, with support for key Arm architectural features such as NEON, SVE2 and SME2. KleidiAI integrates with popular AI frameworks, such as PyTorch, Tensorflow and MediaPipe, with a view to accelerating the performance of key models including Meta Llama 3 and Phi-3. It is also backwards and forwards compatible to ensure Arm is future fit as we bring additional technologies to market. Learn more about Arm Kleidi in this blog.

The compute platform for the future of AI​

Through the unique combination of leading-edge CPU and GPU technologies, production ready physical implementations and continuous software optimizations, CSS for Client combined with Kleidi software will provide the compute platform for the future of AI, a future that is built on Arm.
 
  • Like
  • Fire
  • Love
Reactions: 14 users

So no Brainchip with Arm by the looks of it?
Or did I miss something there?
 
  • Like
Reactions: 1 users

IloveLamp

Top 20


Rob Telson has liked multiple MediaTek posts on LinkedIn over the last two years. I'm sure it means nothing though, right...?
1000016065.jpg
1000016068.jpg
1000016070.jpg
 
Last edited:
  • Like
  • Love
Reactions: 14 users

BrainShit

Regular
Could be the CyberNeuro-RT now live and available :D 🚀 🔥

You'd like to think if selling it, that they will need to be sourcing the 2 "neuromorphic offerings" via direct eg BRN & Intel, via their own licence or via an existing licensee eg Megachips or Renesas?


View attachment 63995



CyberNeuro-RT​


An AI/ML-driven, highly-scalable, real-time network defense & threat intelligence tool with GPU or low-power neuromorphic chip deployment


6557cbc5f33149a7b16b5923_Play%20Icon%20Container%20(3).png


A Quantum Ventura, Lockheed Martin, and Penn State Innovation
Quantum Ventura’s CyberNeuro-RT (CNRT) technology offering has been developed in partnership with Lockheed Martin Co.’s MFC Division and Pennsylvania State University under partial funding from the U.S. Department of Energy.
Cutting-Edge Unsupervised ML

Scalable Unsupervised Outlier Detection (SUOD)
  • Large-scale heterogeneous outlier detection
  • 6 ML Algo EnsembleModel Approximation for Complex Models
  • Execution Efficiency Improvement for Task Load Balancing in Distributed System
  • Variational Autoencoder (VAE)
Variational Autoencoder (VAE)
  • Encoder-Decoder Architecture
  • Variational => Highly Regularized Encoder
  • ETrained to Minimize Reconstruction Error of initial input and reconstructed output
  • Variational Autoencoder (VAE)
75x Dataset Growth in Under 2 Months
  1. Existing Dataset Ingestion: Proprietary system enables ingestion of any existing network capture dataset with flexible support for any labelling system
  2. From-the-wild Zero Day Sampling: System enables capturing and simulation of novel threats for additional data sampling
  3. Data Generation via Simulation: ThreatATI database and proprietary ingestion system enable sampling and augmentation for cataloged threats from proprietary and public threat databases
Proprietary Pipeline Adapts to Any Dataset
View attachment 63991

Follow Threats Home with Dark Web Tracking
View attachment 63992

At-the-edge Neuromorphic Processing
◯ Two offerings from the leading neuromorphic developers: Intel and Brainchip
◯ Small form factor, magnitudes less power consumption than GPU
◯ On-chip learning for deployment network specific attack detection


6519ca62386d7be9f8aa0cf8_Image.png

Intel Loihi

View attachment 63993
Brainchip Akida

Dashboards Minimizes Operator Fatigue
Robust, Multi-Faceted, User-Friendly Cyber Analyst Dashboard Operator Fatigue Allows Cyber Attacks To Happen
  • Large numbers of false alarms cause real threats to be missed
  • False alarms fatigue the cyber analyst further increasing risk of missed threats

The Cyber Neuro-RT Dashboard Is Designed To Minimize All Sources Of Analyst Fatigue While Presenting Timely And Meaningful Data Insights
  • AI-based false alarms are minimized (trained for minimal false positive rate)
  • Possible threats are ranked by importance and confidence
  • Only the most relevant and likely alarms are actioned upon
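The ranking-and-filtering behavior described above can be sketched as a simple triage function. The `importance * confidence` score and the 0.5 threshold are illustrative assumptions, not the dashboard's actual scoring logic:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    importance: float  # severity of the threat class, 0..1
    confidence: float  # model confidence the alert is a true positive, 0..1

def triage(alerts, threshold=0.5):
    """Rank alerts by importance * confidence; keep only those above threshold."""
    ranked = sorted(alerts, key=lambda a: a.importance * a.confidence, reverse=True)
    return [a for a in ranked if a.importance * a.confidence >= threshold]

alerts = [
    Alert("port scan", importance=0.4, confidence=0.9),
    Alert("data exfiltration", importance=0.95, confidence=0.8),
    Alert("failed login burst", importance=0.6, confidence=0.5),
]
actionable = triage(alerts)  # only the high-importance, high-confidence alert survives
```

Suppressing low-scoring alerts before they ever reach the analyst is what keeps false alarms from drowning out real threats.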


That's indeed the answer to my question... a couple of posts ago.

BTW: <!DOCTYPE html><!-- Last Published: Fri May 24 2024 17:40:12 GMT+0000 (Coordinated Universal Time) --><html data-wf-domain="www.quantumventura.tech" .....

Very nice find... and we all know that Loihi is not the best choice 😉

While Loihi 2 offers more scalability and programmability, Akida's key advantage is its on-chip learning capability and extreme power efficiency for edge AI applications. This allows Akida to continue learning and adjusting to new data at the edge, without relying on external processors or data transfer. ... necessary for network attack detection... Akida also brings low power consumption as well as lower compute cost to the table... Loihi 2 does not have this on-chip learning capability and needs a separate CPU. (To my understanding.)

Loihi 2 does provide advantages like faster processing, better scalability across chips, and more programmability. But the Loihi and Loihi 2 chips are currently only available for research and evaluation purposes through Intel's Neuromorphic Research Community (INRC).


Bravo

If ARM was an arm, BRN would be its biceps💪!

Qualcomm’s Boom Highlights AI Shift To The Edge​

Contributor

R. Scott Raynovich is the Founder and Chief Analyst at Futuriom.com


May 29, 2024, 03:10pm EDT

A year ago, Qualcomm was not a well-embraced tech stock. In fact, as recently as last October, the company’s shares were dabbling with a 52-week low. The long-time maker of mobile technology and holder of valuable intellectual property was mired in a slump, weighed down by slow growth in China, as well as poor PC and smartphone markets.

Fast-forward to now. There’s this magic called AI. In just six months, Qualcomm shares have gone from a 52-week low to an all-time high on the market’s realization that it has the components and technology to play in the AI market, as devices such as smartphones and PCs become key to delivering AI inferencing—where the output of AI models is delivered to customers on devices. This is what many in the technology industry refer to as the “edge”—devices connected to infrastructure.

The launch of Qualcomm’s new Snapdragon X series of chips, which targets AI inferencing, has coalesced nicely with turnarounds in the PC and device markets to give Qualcomm this boost. Qualcomm has also made a series of announcements with key partners such as Microsoft who are adopting its technology for AI processing on consumer devices.

[Image: the logo of the ChatGPT application, photographed November 23, 2023]


Enthusiasm for the AI Edge​


The Qualcomm example shows how the business media and Wall St. have started picking up on the idea that AI requirements are perhaps more broad than just delivering large language models (LLMs) and chatbots from the cloud. There’s edge AI, private enterprise AI, and vertical AI as well.

The thirst for computing to fuel AI extends to the billions of devices around the world, ranging from cars to cameras, often referred to as the Internet of Things (IoT). Anything connected to infrastructure or network will need more processing power and connectivity to run AI models.


[Chart: QCOM stock, Forbes, 5-28-2024] Qualcomm shares recently hit a new high on enthusiasm for edge AI.


What does this mean about AI infrastructure at large? Our recent research and discussions with technology builders suggest the AI infrastructure discussion is about to morph. I think that over the next few years we’ll be talking less about LLMs and chatbots and more about vertically focused AI apps and infrastructure—and private AI for enterprise.


Chatbots are an appealing mass market, but they only address one segment—consumer information. The closest analog is the search market, where Google holds between an 80%-90% share, raking in about $80 billion in quarterly revenue. The current market size for search is estimated to be about $400 billion. The enterprise and industrial technology infrastructure markets represent hundreds of billions more.

The AI market will extend well beyond consumer information and chatbots. It also has diverse applications in data analytics, robotics, healthcare, and finance—to only name a few. Many of these more specific vertical markets may not even need LLMs at all but more specific AI technologies that could include small language models (SLMs) or other custom-designed AI processing software. They’ll have to deliver the results—AI inferencing—across myriad hardware platforms ranging from cars to medical devices.

“We have only scratched the surface of AI as it moves out into verticals, private AI, edge, and distributed cloud. There's more to AI than LLMs and SLMs, and vertical/domain-specific models will dominate the new deployments outside of the large cloud players,” Mike Dvorkin, a cofounder and CTO of cloud networking company Hedgehog, told me in a recent interview. “The opportunity is immense, and it will require new thinking about infrastructure and how it's consumed."

AI To Drive Private AI and Hybrid Infrastructure​

If Dvorkin, a former distinguished engineer at Cisco, is right—the AI edge infrastructure market will be gigantic.
This conversation has popped up in more discussions I’ve witnessed recently, where some technologists have estimated the AI market could flip from 80% modeling and 20% inferencing to the reverse. In addition, CIOs I’ve listened to recently have pointed out that the private AI model will be much more useful in specific industries such as healthcare and finance, where enterprise customers may want to own as much of their own data and models as possible.


For this reason, the AI wave will drive more diverse hybrid and multicloud architectures—including private clouds—as the needs for data, analytics and connectivity spread across multiple infrastructures.
“We have a hybrid cloud model,” said George Maddalino, the CTO of Mastercard, at a recent tech event hosted by the Economist in New York. “We have workloads on prem, workloads on hyperscalers. You can see us traversing from a bank’s datacenter across a hyperscaler cloud to a retailer in the cloud. By default we end up in an environment that’s multicloud.”
Nizar Trigui, CTO with GXO Logistics, also pointed to the idea that AI application connectivity to data will be pervasive, for any location.
“Most of us are going through some kind of digital transformation,” said Trigui. "How do we create more value for the customers? We are creating value out of data in 1,000 warehouses around the world, digitally connected.”

The biggest takeaway from Qualcomm’s recent rise is the enthusiasm for AI everywhere—this means processing and inferencing data wherever it lives. This endeavor will not be limited to infrastructure or models owned exclusively by the hyperscalers; it will spread far and wide across enterprise, edge, and IoT.



 