BRN Discussion Ongoing

wilzy123

Founding Member
Further to my post above when BeEmotion (NVISO imo) was discovered, it appears there has been a refresh and it is launching soon in Australia, Japan & Switzerland.


[attachment: BeEmotion screenshot]
BeEmotion is the new name for NVISO... and part of BRN demos at the recent embedded vision expo.

Eat up guys.

wilzy123

Founding Member
Did we have this one already? I don't remember.

Press Release

BrainChip Earns Australian Patent for Improved Spiking Neural Network​



LAGUNA HILLS, Calif.–(BUSINESS WIRE)–BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, received its latest Australian patent, further strengthening its IP portfolio related to sustainable and efficient AI technologies.


Patent AU2022287647, “An Improved Spiking Neural Network,” facilitates low-shot learning. Learning is performed by adding neurons to the final layer of a previously trained network to represent a new class, with the neural network weights of the added neuron being trained with only a few samples while the remainder of the network remains unchanged. Applications impacted by this patent include biometric face recognition, speech recognition and anomaly detection in industrial systems.
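To make the mechanism concrete: below is a minimal, purely illustrative sketch (plain PyTorch, not BrainChip's Akida implementation or the patented method itself) of low-shot learning by appending one neuron to the final layer of a previously trained network and training only that neuron's weights on a few samples, leaving the rest of the network unchanged. All layer sizes, names and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical pre-trained backbone + classifier head (shapes are illustrative only).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
head = nn.Linear(128, 10)  # previously trained on 10 known classes

def add_class_few_shot(backbone, head, support_x, support_y, epochs=20, lr=0.1):
    """Append one neuron to the final layer for a new class and train only its weights."""
    old_out, feat = head.out_features, head.in_features
    new_head = nn.Linear(feat, old_out + 1)
    with torch.no_grad():
        new_head.weight[:old_out] = head.weight   # copy existing class weights unchanged
        new_head.bias[:old_out] = head.bias

    # Freeze the rest of the network; only the appended row of weights will change.
    for p in backbone.parameters():
        p.requires_grad = False

    opt = torch.optim.SGD([new_head.weight, new_head.bias], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        logits = new_head(backbone(support_x))
        loss = loss_fn(logits, support_y)         # labels == old_out for the new class
        loss.backward()
        # Zero gradients on the rows belonging to the old classes so they stay fixed.
        new_head.weight.grad[:old_out] = 0
        new_head.bias.grad[:old_out] = 0
        opt.step()
    return new_head

# Usage with only a handful of samples of the new (11th) class:
support_x = torch.randn(5, 1, 28, 28)
support_y = torch.full((5,), 10, dtype=torch.long)  # class index of the added neuron
head = add_class_few_shot(backbone, head, support_x, support_y)
```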
BrainChip’s Akida™ IP and MetaTF™ tools seamlessly transform contemporary neural networks into event-based or spiking networks. This patented technology is uniquely synergistic with the converted spiking networks, enabling the streamlined deployment of Edge learning algorithms and unlocking use cases that conventional AI solutions cannot achieve.
“To push the limits of AI on-chip compute, we need to continuously push the envelope of what is possible with neuromorphic technology,” said Sean Hehir, CEO of BrainChip. “BrainChip’s expanding patent portfolio ensures our freedom to practice our own inventions and to prevent others from infringing on our highly innovative intellectual property. This latest Australian patent is the next step in advancing Edge AI further than ever before.”
BrainChip’s portfolio now comprises 20 issued patents (13x US, 5x AU, 2x CN). In addition, there are 23 pending patent applications across the US, Europe, Australia, Canada, Japan, Korea, India and Israel.
Not in Australia... hence the media release.
 

wilzy123

Founding Member

Why are Brainchip shares racing higher after its AGM?​

This struggling semiconductor stock is having a good session. But why?

Warning: Motley Fool!!

Brainchip Holdings Ltd (ASX: BRN) shares are rebounding on Tuesday.

In afternoon trade, the struggling semiconductor company's shares are up 6% to 26.5 cents.

Why are Brainchip shares rising today?​

Investors have been buying the company's shares for a couple of reasons.

One is the release of its annual general meeting (AGM) update this morning. The other is news that the company has managed to avoid a spill of its board at this meeting.

With respect to its AGM update, Brainchip CEO Sean Hehir acknowledged that the company hasn't delivered meaningful revenue yet and was disappointed by this, but remains positive about the future. He said:


Licensing deals​

Hehir believes the company is close to getting an answer from some potential customers after being in discussions for over a year. He said:


The under-pressure CEO has been heavily criticised for his big salary and bonuses and distinct lack of commercial success since joining. However, he remains confident of delivering the goods for shareholders. He adds:


Time will tell if this proves to be the case or if the next 12 months will just be more of the same – all hype and no substance.

Board spill avoided​

Brainchip shares may also be rising today after the company avoided a disruptive board spill.

While 33.41% of votes were against its remuneration report, a sizeable 85.59% of votes were against spilling the board.

ROFL. This junk does not belong here.
 

Terroni2105

Founding Member
[screenshot]



 

Terroni2105

Founding Member
[screenshot]
 

Terroni2105

Founding Member
[screenshot]
 

Terroni2105

Founding Member
I couldn’t get all those screenshots into the one post, but they are all in order:

first the post by Rudy Pei (BrainChip ML research engineer) and then the comments under his post by Chris Jones (BrainChip Senior Product Manager).

don’t ask me what any of it means though lol 🤷‍♀️
 

TECH

Regular
And your point is????

Don't forget... that's the point, toasty: you appreciate NDAs and why they are in place in the first place, and as such the curtains
will remain closed until such time as either party concerned agrees to remove them. So the waiting, dot-joining and impatience remain
the number one annoyance for many shareholders.

You still whinging about management? 85%+ didn't think it warranted a spill just yet; maybe this time next year the story will be
completely reversed, but not from where I stand.

Anything worth contributing to the forum, toasty? Just asking for a friend.

Tech 🥱
 
I hope that someday someone writes a story about the journey we are all on. We are all here, be it from different places or different times or different perspectives, but we are all here. We wait. Some are OK with it,
some frustrated with it, and some so impatient, disrespectful and just rude beyond belief.
There's a story to be told,
and it's a game of poker in a way.
How many have thrown in their hand,
or bought back in just to crash and burn?
There will come a time when, like most scars, the pain fades and all that is left is a faint mark,
just enough to remember
how many people have come here with good intentions or not.
Investors alike, we wait our turn.
Every day is a day closer to the edge.
The smell of fresh air rising up from the far edge... you can almost taste it, feel it. Or is it just a dream?
 
Are you sure that's fresh air rising up mate? 🤔..

Could be the CyberNeuro-RT now live and available :D 🚀 🔥

You'd like to think that, if they are selling it, they will need to be sourcing the two "neuromorphic offerings" either directly (e.g. BRN and Intel) via their own licences, or via an existing licensee (e.g. MegaChips or Renesas)?






CyberNeuro-RT​


An AI/ML-driven, highly-scalable, real-time network defense & threat intelligence tool with GPU or low-power neuromorphic chip deployment




A Quantum Ventura, Lockheed Martin, and Penn State Innovation
Quantum Ventura’s CyberNeuro-RT (CNRT) technology offering has been developed in partnership with Lockheed Martin Co.’s MFC Division and Pennsylvania State University under partial funding from the U.S. Department of Energy.
Cutting-Edge Unsupervised ML

Scalable Unsupervised Outlier Detection (SUOD)
  • Large-scale heterogeneous outlier detection
  • 6 ML Algo Ensemble
  • Model Approximation for Complex Models
  • Execution Efficiency Improvement for Task Load Balancing in Distributed System

Variational Autoencoder (VAE)
  • Encoder-Decoder Architecture
  • Variational => Highly Regularized Encoder
  • Trained to Minimize Reconstruction Error of initial input and reconstructed output (see the sketch below)
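As a rough illustration of how the VAE pieces listed above are typically used for unsupervised outlier detection (a generic sketch, not Quantum Ventura's CNRT code; feature sizes, thresholds and hyperparameters are assumptions), the usual idea is to train the regularized encoder-decoder on benign traffic features and flag samples with high reconstruction error:

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal VAE: a regularized encoder-decoder trained to reconstruct its input."""
    def __init__(self, n_features=64, latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.mu = nn.Linear(32, latent)
        self.logvar = nn.Linear(32, latent)   # "variational": encoder outputs a distribution
        self.dec = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, n_features))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def train(vae, flows, epochs=30, lr=1e-3, beta=1e-3):
    """Fit the VAE on (assumed benign) flow feature vectors by minimizing
    reconstruction error plus a KL regularizer on the latent code."""
    opt = torch.optim.Adam(vae.parameters(), lr=lr)
    for _ in range(epochs):
        recon, mu, logvar = vae(flows)
        rec_err = ((recon - flows) ** 2).mean()
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        loss = rec_err + beta * kl
        opt.zero_grad(); loss.backward(); opt.step()

def anomaly_score(vae, x):
    """Higher reconstruction error => more likely an outlier / novel threat."""
    with torch.no_grad():
        recon, _, _ = vae(x)
        return ((recon - x) ** 2).mean(dim=1)

# Illustrative usage on synthetic flow features (a real pipeline would use captured traffic).
benign = torch.randn(1000, 64)
vae = TinyVAE(); train(vae, benign)
scores = anomaly_score(vae, torch.randn(10, 64))
suspect = scores > scores.mean() + 3 * scores.std()   # simple score thresholding
```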
75x Dataset Growth in Under 2 Months
  1. Existing Dataset Ingestion: Proprietary system enables ingestion of any existing network capture dataset with flexible support for any labelling system
  2. From-the-wild Zero Day Sampling: System enables capturing and simulation of novel threats for additional data sampling
  3. Data Generation via Simulation: ThreatATI database and proprietary ingestion system enable sampling and augmentation for cataloged threats from proprietary and public threat databases
Proprietary Pipeline Adapts to Any Dataset


Follow Threats Home with Dark Web Tracking


At-the-edge Neuromorphic Processing
◯ Two offerings from the leading neuromorphic developers: Intel and Brainchip
◯ Small form factor, magnitudes less power consumption than GPU
◯ On-chip learning for deployment network specific attack detection



Intel Loihi


Brainchip Akida

Dashboard Minimizes Operator Fatigue
Robust, Multi-Faceted, User-Friendly Cyber Analyst Dashboard

Operator Fatigue Allows Cyber Attacks To Happen
  • Large numbers of false alarms cause real threats to be missed
  • False alarms fatigue the cyber analyst, further increasing the risk of missed threats

The Cyber Neuro-RT Dashboard Is Designed To Minimize All Sources Of Analyst Fatigue While Presenting Timely And Meaningful Data Insights
  • AI-based false alarms are minimized (trained for a minimal false positive rate)
  • Possible threats are ranked by importance and confidence
  • Only the most relevant and likely alarms are actioned upon (see the sketch below)
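For what the ranking behaviour described in those bullets might look like in practice, here is a tiny, purely illustrative triage sketch (field names, weights and thresholds are assumptions, not the actual CNRT dashboard logic):

```python
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    importance: float   # analyst-defined severity, 0..1
    confidence: float   # model confidence the alert is real, 0..1

def triage(alerts, max_shown=5, min_confidence=0.5):
    """Rank alerts by importance x confidence and surface only the most relevant,
    dropping low-confidence alerts to reduce false-alarm fatigue."""
    likely = [a for a in alerts if a.confidence >= min_confidence]
    ranked = sorted(likely, key=lambda a: a.importance * a.confidence, reverse=True)
    return ranked[:max_shown]

alerts = [Alert("port scan", 0.4, 0.9), Alert("data exfiltration", 0.95, 0.7),
          Alert("odd DNS query", 0.2, 0.3)]
for a in triage(alerts):
    print(f"{a.name}: score={a.importance * a.confidence:.2f}")
```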

 


Tothemoon24

Top 20
The Sky is no longer the Limit



Tothemoon24

Top 20



This is the compute platform for the future of AI. 👇

🆕 Introducing Arm Compute Subsystems (CSS) for Client.

Designed for AI smartphones and AI PCs, CSS for Client delivers production ready physical implementations of our new CPUs and GPUs to deliver next-gen AI experiences quickly and easily.

It includes:
🔹 Our latest Armv9.2 CPU cluster, including the Arm Cortex-X925 which delivers the highest year-on-year performance uplift in the history of Cortex-X

🔹 The Arm Immortalis-G925 GPU, our most performant and efficient GPU to date with a 37% uplift in graphics performance

We are also launching new KleidiAI software to provide the simplest way for developers to get the best performance out of Arm CPUs.

So whether you want more AI, more performance or more advanced silicon, you can rely on our new solution to provide the foundation for AI-powered experiences on consumer devices. https://okt.to/PkzCt3
 

Tothemoon24

Top 20

Redefining Mobile Experiences with AI-Optimized Arm CSS for Client and New Arm Kleidi Software​


By Chris Bergey, SVP and GM of the Client Business, Arm

News highlights​

  • New compute solution, Arm Compute Subsystems (CSS) for Client, brings together Armv9 benefits with validated and verified production ready implementations of new Arm CPUs and GPUs on 3nm process nodes to enable silicon partners to rapidly innovate and speed time to market
  • AI-optimized Arm CSS for Client with next generation Cortex-X CPU, delivering highest year-on-year IPC uplift resulting in a 36% increase in performance; new Immortalis GPU brings a 37% uplift in graphics performance
  • New KleidiAI software integrates with popular AI frameworks for seamless developer experiences; KleidiAI with Arm CSS dramatically improves performance of computing applications by leveraging a wide range of Arm’s acceleration technologies (NEON, SVE2 and SME2)
With power efficiency in our DNA, the Arm platform is providing the foundation for the next wave of computing demands as the AI era accelerates. As AI models continue to rapidly evolve, we’re seeing that software begins to outpace hardware which means additional innovation is required at all levels of the compute stack. To meet these growing demands, we’re evolving our solution offering to gain the maximum benefits of leading process nodes and announcing the newest Arm compute solution for AI smartphones and PCs – Arm Compute Subsystems (CSS) for Client.
Arm CSS for Client provides the performance, efficiency and accessibility to deliver leading AI-based experiences and makes it easier and faster for our silicon partners to build Arm-based solutions and get to market quickly. CSS for Client provides the foundational computing elements for flagship SoCs and features the latest Armv9.2 CPUs and Immortalis GPUs, as well as production ready physical implementations for CPU and GPU on 3nm and the latest Corelink System Interconnect and System Memory Management Units (SMMUs).

Unprecedented CPU and GPU performance and efficiency​

CSS for Client delivers a step change in platform capabilities to continue pushing the boundaries of premium mobile experiences. This is the fastest Arm compute platform addressing demanding real-life Android workloads with greater than 30 percent increase on compute and graphics performance and 59 percent faster AI inference for broader AI/ML and computer vision (CV) workloads.
At the heart of CSS for Client is Arm’s most performant, efficient and versatile CPU cluster ever for maximum performance and power efficiency. The new Arm Cortex-X925 delivers the highest year-on-year performance uplift in the history of Cortex-X. Taking advantage of the leading edge 3nm process nodes, assuming a 3.8GHz clock rate and maximum cache size, the result is a massive 36 percent increase in single-thread performance when comparing to 2023 smartphone flagship 4nm SoCs. For AI, Cortex-X925 provides an incredible 41 percent performance uplift to dramatically improve the responsiveness of on-device generative AI, like large language models (LLMs).
The push for leading-edge performance is combined with leading-edge efficiency through our new Arm Cortex-A725 CPU, which delivers a 35 percent improvement in performance efficiency to target AI and mobile gaming use cases. This is supported by a refreshed Arm Cortex-A520 CPU and an updated DSU-120 that provide power efficiency and scalability improvements for consumer devices that adopt the latest Armv9 CPU clusters. Learn more about the new Armv9 CPUs in this blog.
The new Arm Immortalis-G925 GPU, which is our most performant and efficient GPU to date, delivers 37 percent more performance across a wide range of leading mobile gaming applications, as well as 34 percent more performance when measured over multiple AI and ML networks. While Immortalis-G925 is for the flagship smartphone market, the highly scalable new GPU family, including Arm Mali-G725 and Mali-G625 GPUs, targets a broad range of consumer device markets, from premium mobile handsets to smartwatches and XR wearables. Learn more about Arm’s new GPUs in this blog.

Optimizing software for outstanding developer innovation​

We are relentlessly focused on millions of developers worldwide, ensuring they have access to the performance, tools and software libraries required to create the next wave of AI-enabled applications. To enable developers to land these innovations quickly at the highest performance, we’re introducing Arm Kleidi, which includes KleidiAI for AI workloads and KleidiCV for computer vision applications. KleidiAI is a set of compute kernels for developers of AI frameworks, providing them with frictionless access to the best performance possible on Arm CPUs, across a wide range of devices, with support for key Arm architectural features such as NEON, SVE2 and SME2. KleidiAI integrates with popular AI frameworks, such as PyTorch, Tensorflow and MediaPipe, with a view to accelerating the performance of key models including Meta Llama 3 and Phi-3. It is also backwards and forwards compatible to ensure Arm is future fit as we bring additional technologies to market. Learn more about Arm Kleidi in this blog.

The compute platform for the future of AI​

Through the unique combination of leading-edge CPU and GPU technologies, production ready physical implementations and continuous software optimizations, CSS for Client combined with Kleidi software will provide the compute platform for the future of AI, a future that is built on Arm.
 

So no BrainChip with Arm by the looks of it?
Or did I miss something there?
 

IloveLamp

Top 20


Rob Telson has liked multiple MediaTek posts on LinkedIn over the last two years. I'm sure it means nothing though, right.........?
[attached: LinkedIn screenshots]
 

BrainShit

Regular

That's indeed the answer to my question... a couple of posts ago.

BTW: <!DOCTYPE html><!-- Last Published: Fri May 24 2024 17:40:12 GMT+0000 (Coordinated Universal Time) --><html data-wf-domain="www.quantumventura.tech" .....

Very nice find... and we all know that Loihi is not the best choice 😉

While Loihi 2 offers more scalability and programmability, Akida's key advantage is its on-chip learning capability and extreme power efficiency for edge AI applications. This allows Akida to continue learning and adjusting to new data at the edge, without relying on external processors or data transfer... necessary for network attack detection... Akida also brings low power consumption as well as lower compute cost to the table... Loihi 2 does not have this on-chip learning capability and needs a separate CPU (as far as I understand).

While Loihi 2 provides advantages like faster processing, better scalability across chips and more programmability, Loihi and Loihi 2 chips are currently only available for research and evaluation purposes through Intel's Neuromorphic Research Community (INRC).

.
 
Heard it all before. Until IPs hit, less is better.
 