BRN Discussion Ongoing

I hope that someday someone writes a story about the journey we are all on. We are all here, be it from different places, different times or different perspectives, but we are all here. We wait: some are OK with it,
Some frustrated with it, and some so impatient, disrespectful and just rude beyond belief.
There’s a story to be told,
And it’s a game of poker in a way.
How many have thrown in their hand,
Or bought back in just to crash and burn?
There will come a time when, like most scars, the pain fades and all that is left is a faint mark,
Just enough to remember.
How many people have come here with good intentions, or not?
Investors alike, we wait our turn.
Every day is a day closer to the edge.
The smell of fresh air rising up from the far edge; you can almost taste it, feel it. Or is it just a dream?
Are you sure that's fresh air rising up mate? 🤔..

Reactions: 5 users
Could be the CyberNeuro-RT now live and available :D 🚀 🔥

You'd like to think that, if selling it, they will need to be sourcing the two "neuromorphic offerings" either direct (e.g. BRN and Intel) via their own licence, or via an existing licensee (e.g. MegaChips or Renesas)?






CyberNeuro-RT​


An AI/ML-driven, highly-scalable, real-time network defense & threat intelligence tool with GPU or low-power neuromorphic chip deployment




A Quantum Ventura, Lockheed Martin, and Penn State Innovation
Quantum Ventura’s CyberNeuro-RT (CNRT) technology offering has been developed in partnership with Lockheed Martin Co.’s MFC Division and Pennsylvania State University under partial funding from the U.S. Department of Energy.
Cutting-Edge Unsupervised ML

Scalable Unsupervised Outlier Detection (SUOD)
  • Large-Scale Heterogeneous Outlier Detection
  • 6 ML Algorithm Ensemble
  • Model Approximation for Complex Models
  • Execution Efficiency Improvement for Task Load Balancing in Distributed Systems
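The SUOD bullets boil down to running several cheap detectors over the same data and reconciling their scores. As a rough illustration of the ensemble idea only (not Quantum Ventura's actual algorithms; the two detectors and the rank-normalization step here are my own assumptions), in Python:

```python
import statistics

def zscore_scores(xs):
    # Detector 1: outlier score = |z-score| of each point.
    mu = statistics.fmean(xs)
    sd = statistics.pstdev(xs) or 1.0
    return [abs(x - mu) / sd for x in xs]

def iqr_scores(xs):
    # Detector 2: outlier score = distance outside the interquartile range.
    q1, _, q3 = statistics.quantiles(xs, n=4)
    iqr = (q3 - q1) or 1.0
    return [max(q1 - x, x - q3, 0.0) / iqr for x in xs]

def rank_normalize(scores):
    # Map raw scores to [0, 1] ranks so heterogeneous detectors are comparable.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    n = len(scores)
    ranks = [0.0] * n
    for r, i in enumerate(order):
        ranks[i] = r / (n - 1)
    return ranks

def ensemble_outlier_scores(xs, detectors):
    # Average the rank-normalized scores across all detectors.
    per_detector = [rank_normalize(d(xs)) for d in detectors]
    return [statistics.fmean(col) for col in zip(*per_detector)]

data = [9.8, 10.1, 10.0, 9.9, 10.2, 25.0, 10.05, 9.95]
scores = ensemble_outlier_scores(data, [zscore_scores, iqr_scores])
print(data[scores.index(max(scores))])  # -> 25.0
```

The rank normalization is what lets six heterogeneous algorithms vote on equal footing despite wildly different raw score scales.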
Variational Autoencoder (VAE)
  • Encoder-Decoder Architecture
  • Variational => Highly Regularized Encoder
  • Trained to Minimize Reconstruction Error between the initial input and the reconstructed output
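A real VAE needs a deep-learning stack, but the anomaly-detection idea on the slide (encode to a small latent code, decode, flag inputs that reconstruct poorly) can be sketched with a linear autoencoder, i.e. a PCA projection. This is a simplified stand-in, not the CNRT model; the 2-D "traffic" data is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal traffic": 2-D feature vectors lying near a 1-D line.
t = rng.normal(size=(200, 1))
normal = np.hstack([t, 2 * t]) + 0.05 * rng.normal(size=(200, 2))

# "Training": learn the 1-D latent direction that best explains normal data.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
w = vt[0]  # shared encoder/decoder weights of the linear autoencoder

def reconstruction_error(x):
    centered = x - mean
    z = centered @ w           # encode: 2-D input -> 1-D latent code
    recon = np.outer(z, w)     # decode: 1-D latent code -> 2-D reconstruction
    return np.linalg.norm(centered - recon, axis=1)

# A point off the learned manifold reconstructs poorly -> high anomaly score.
probe = np.array([[1.0, 2.1],    # looks like normal traffic
                  [3.0, -3.0]])  # anomalous
err = reconstruction_error(probe)
print(err[1] > 10 * err[0])  # -> True
```

A VAE does the same thing nonlinearly, with the "variational" regularization keeping the latent space smooth so that only genuinely novel inputs score high.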
75x Dataset Growth in Under 2 Months
  1. Existing Dataset Ingestion: Proprietary system enables ingestion of any existing network capture dataset with flexible support for any labelling system
  2. From-the-wild Zero Day Sampling: System enables capturing and simulation of novel threats for additional data sampling
  3. Data Generation via Simulation: ThreatATI database and proprietary ingestion system enable sampling and augmentation for cataloged threats from proprietary and public threat databases
Proprietary Pipeline Adapts to Any Dataset


Follow Threats Home with Dark Web Tracking


At-the-edge Neuromorphic Processing
◯ Two offerings from the leading neuromorphic developers: Intel and Brainchip
◯ Small form factor, orders of magnitude less power consumption than a GPU
◯ On-chip learning for deployment-network-specific attack detection



Intel Loihi


Brainchip Akida

Dashboard Minimizes Operator Fatigue
Robust, Multi-Faceted, User-Friendly Cyber Analyst Dashboard

Operator Fatigue Allows Cyber Attacks To Happen
  • Large numbers of false alarms cause real threats to be missed
  • False alarms fatigue the cyber analyst further increasing risk of missed threats

The Cyber Neuro-RT Dashboard Is Designed To Minimize All Sources Of Analyst Fatigue While Presenting Timely And Meaningful Data Insights
  • AI-based false alarms are minimized (trained for a minimal false-positive rate)
  • Possible threats are ranked by importance and confidence
  • Only the most relevant and likely alarms are actioned upon
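The last two bullets amount to a score-and-threshold triage policy. A minimal sketch of that policy (the field names, weights and threshold are hypothetical, not taken from the CNRT dashboard):

```python
# Each alert carries an importance (impact if real) and a model confidence.
alerts = [
    {"id": "port-scan",  "importance": 0.4, "confidence": 0.9},
    {"id": "data-exfil", "importance": 0.9, "confidence": 0.7},
    {"id": "odd-login",  "importance": 0.3, "confidence": 0.2},
    {"id": "c2-beacon",  "importance": 0.8, "confidence": 0.8},
]

def triage(alerts, threshold=0.35, top_k=3):
    # Rank by importance x confidence, drop low scorers, surface only the top few.
    score = lambda a: a["importance"] * a["confidence"]
    ranked = sorted(alerts, key=score, reverse=True)
    return [a["id"] for a in ranked if score(a) >= threshold][:top_k]

print(triage(alerts))  # -> ['c2-beacon', 'data-exfil', 'port-scan']
```

Capping the queue at a handful of ranked alerts is the fatigue-reduction lever: the analyst never sees the long tail of low-confidence noise.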



 

Reactions: 79 users

Tothemoon24

Top 20
The Sky is no longer the Limit



Reactions: 60 users

Tothemoon24

Top 20



This is the compute platform for the future of AI. 👇

🆕 Introducing Arm Compute Subsystems (CSS) for Client.

Designed for AI smartphones and AI PCs, CSS for Client delivers production ready physical implementations of our new CPUs and GPUs to deliver next-gen AI experiences quickly and easily.

It includes:
🔹 Our latest Armv9.2 CPU cluster, including the Arm Cortex-X925 which delivers the highest year-on-year performance uplift in the history of Cortex-X

🔹 The Arm Immortalis-G925 GPU, our most performant and efficient GPU to date with a 37% uplift in graphics performance

We are also launching new KleidiAI software to provide the simplest way for developers to get the best performance out of Arm CPUs.

So whether you want more AI, more performance or more advanced silicon, you can rely on our new solution to provide the foundation for AI-powered experiences on consumer devices. https://okt.to/PkzCt3
 
Reactions: 12 users

Tothemoon24

Top 20

Redefining Mobile Experiences with AI-Optimized Arm CSS for Client and New Arm Kleidi Software​


By Chris Bergey, SVP and GM of the Client Business, Arm
Artificial Intelligence (AI) | Smartphones | Software

News highlights​

  • New compute solution, Arm Compute Subsystems (CSS) for Client, brings together Armv9 benefits with validated and verified production ready implementations of new Arm CPUs and GPUs on 3nm process nodes to enable silicon partners to rapidly innovate and speed time to market
  • AI-optimized Arm CSS for Client with next generation Cortex-X CPU, delivering highest year-on-year IPC uplift resulting in a 36% increase in performance; new Immortalis GPU brings a 37% uplift in graphics performance
  • New KleidiAI software integrates with popular AI frameworks for seamless developer experiences; KleidiAI with Arm CSS dramatically improves performance of computing applications by leveraging a wide range of Arm’s acceleration technologies (NEON, SVE2 and SME2)
With power efficiency in our DNA, the Arm platform is providing the foundation for the next wave of computing demands as the AI era accelerates. As AI models continue to rapidly evolve, we’re seeing that software begins to outpace hardware which means additional innovation is required at all levels of the compute stack. To meet these growing demands, we’re evolving our solution offering to gain the maximum benefits of leading process nodes and announcing the newest Arm compute solution for AI smartphones and PCs – Arm Compute Subsystems (CSS) for Client.
Arm CSS for Client provides the performance, efficiency and accessibility to deliver leading AI-based experiences and makes it easier and faster for our silicon partners to build Arm-based solutions and get to market quickly. CSS for Client provides the foundational computing elements for flagship SoCs and features the latest Armv9.2 CPUs and Immortalis GPUs, as well as production ready physical implementations for CPU and GPU on 3nm and the latest Corelink System Interconnect and System Memory Management Units (SMMUs).

Unprecedented CPU and GPU performance and efficiency​

CSS for Client delivers a step change in platform capabilities to continue pushing the boundaries of premium mobile experiences. This is the fastest Arm compute platform addressing demanding real-life Android workloads with greater than 30 percent increase on compute and graphics performance and 59 percent faster AI inference for broader AI/ML and computer vision (CV) workloads.
At the heart of CSS for Client is Arm’s most performant, efficient and versatile CPU cluster ever for maximum performance and power efficiency. The new Arm Cortex-X925 delivers the highest year-on-year performance uplift in the history of Cortex-X. Taking advantage of the leading edge 3nm process nodes, assuming a 3.8GHz clock rate and maximum cache size, the result is a massive 36 percent increase in single-thread performance when comparing to 2023 smartphone flagship 4nm SoCs. For AI, Cortex-X925 provides an incredible 41 percent performance uplift to dramatically improve the responsiveness of on-device generative AI, like large language models (LLMs).
The push for leading-edge performance is combined with leading-edge efficiency through our new Arm Cortex-A725 CPU, which delivers a 35 percent improvement in performance efficiency to target AI and mobile gaming use cases. This is supported by a refreshed Arm Cortex-A520 CPU and an updated DSU-120 that provide power efficiency and scalability improvements for consumer devices that adopt the latest Armv9 CPU clusters. Learn more about the new Armv9 CPUs in this blog.
The new Arm Immortalis-G925 GPU, which is our most performant and efficient GPU to date, delivers 37 percent more performance across a wide range of leading mobile gaming applications, as well as 34 percent more performance when measured over multiple AI and ML networks. While Immortalis-G925 is for the flagship smartphone market, the highly scalable new GPU family, including Arm Mali-G725 and Mali-G625 GPUs, targets a broad range of consumer device markets, from premium mobile handsets to smartwatches and XR wearables. Learn more about Arm’s new GPUs in this blog.

Optimizing software for outstanding developer innovation​

We are relentlessly focused on millions of developers worldwide, ensuring they have access to the performance, tools and software libraries required to create the next wave of AI-enabled applications. To enable developers to land these innovations quickly at the highest performance, we’re introducing Arm Kleidi, which includes KleidiAI for AI workloads and KleidiCV for computer vision applications. KleidiAI is a set of compute kernels for developers of AI frameworks, providing them with frictionless access to the best performance possible on Arm CPUs, across a wide range of devices, with support for key Arm architectural features such as NEON, SVE2 and SME2. KleidiAI integrates with popular AI frameworks, such as PyTorch, Tensorflow and MediaPipe, with a view to accelerating the performance of key models including Meta Llama 3 and Phi-3. It is also backwards and forwards compatible to ensure Arm is future fit as we bring additional technologies to market. Learn more about Arm Kleidi in this blog.

The compute platform for the future of AI​

Through the unique combination of leading-edge CPU and GPU technologies, production ready physical implementations and continuous software optimizations, CSS for Client combined with Kleidi software will provide the compute platform for the future of AI, a future that is built on Arm.
 
Reactions: 14 users

So no BrainChip with Arm by the looks of it??
Or did I miss something there?
 
Reactions: 1 users

IloveLamp

Top 20


Rob Telson has liked multiple MediaTek posts on LinkedIn over the last two years. I'm sure it means nothing though, right.........?
 
Reactions: 14 users

BrainShit

Regular

That's indeed the answer to my question... a couple of posts ago.

BTW: <!DOCTYPE html><!-- Last Published: Fri May 24 2024 17:40:12 GMT+0000 (Coordinated Universal Time) --><html data-wf-domain="www.quantumventura.tech" .....

Very nice find... and we all know that Loihi is not the best choice 😉

While Loihi 2 offers more scalability and programmability, Akida's key advantage is its on-chip learning capability and extreme power efficiency for edge AI applications. This allows Akida to continue learning and adjusting to new data at the edge, without relying on external processors or data transfer... necessary for network attack detection... Akida also brings low power consumption as well as lower compute cost to the table... Loihi 2 does not have this on-chip learning capability and needs a separate CPU. (To my understanding.)

Loihi 2 does provide advantages like faster processing, better scalability across chips, and more programmability. But... Loihi and Loihi 2 chips are currently only available for research and evaluation purposes through Intel's Neuromorphic Research Community (INRC).

 
Reactions: 25 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Qualcomm’s Boom Highlights AI Shift To The Edge​

Contributor

R. Scott Raynovich is the Founder and Chief Analyst at Futuriom.com


May 29, 2024, 03:10pm EDT

A year ago, Qualcomm was not a well-embraced tech stock. In fact, as recently as last October, the company’s shares were dabbling with a 52-week low. The long-time maker of mobile technology and holder of valuable intellectual property was mired in a slump, weighed down by slow growth in China, as well as poor PC and smartphone markets.

Fast-forward to now. There’s this magic called AI. In just six months, Qualcomm shares have gone from a 52-week low to an all-time high on the market’s realization that it has the components and technology to play in the AI market, as devices such as smartphones and PCs become key to delivering AI inferencing—where the outputs of AI models are delivered to customers on devices. This is what many in the technology industries refer to as the “edge”—devices connected to infrastructure.

The launch of Qualcomm’s new Snapdragon X series of chips, which targets AI inferencing, has coalesced nicely with turnarounds in the PC and device markets to give Qualcomm this boost. Qualcomm has also made a series of announcements with key partners such as Microsoft who are adopting its technology for AI processing on consumer devices.



Enthusiasm for the AI Edge​


The Qualcomm example shows how the business media and Wall St. have started picking up on the idea that AI requirements are perhaps more broad than just delivering large language models (LLMs) and chatbots from the cloud. There’s edge AI, private enterprise AI, and vertical AI as well.

The thirst for computing to fuel AI extends to the billions of devices around the world, ranging from cars to cameras, often referred to as the Internet of Things (IoT). Anything connected to infrastructure or network will need more processing power and connectivity to run AI models.



Qualcomm shares recently hit a new high on enthusiasm for edge AI.


What does this mean about AI infrastructure at large? Our recent research and discussions with technology builders say the AI infrastructure discussion is about to morph. I think that over the next few years we’ll be talking less about LLMs and chatbots and more about vertically focused AI apps and infrastructure—and private AI for enterprise.


Chatbots are an appealing mass market, but they only address one segment—consumer information. The closest analog is the search market, where Google holds between an 80%-90% share, raking in about $80 billion in quarterly revenue. The current market size for search is estimated to be about $400 billion. The enterprise and industrial technology infrastructure markets represent hundreds of billions more.

The AI market will extend well beyond consumer information and chatbots. It also has diverse applications in data analytics, robotics, healthcare, and finance—to only name a few. Many of these more specific vertical markets may not even need LLMs at all but more specific AI technologies that could include small language models (SLMs) or other custom-designed AI processing software. They’ll have to deliver the results—AI inferencing—across myriad hardware platforms ranging from cars to medical devices.

“We have only scratched the surface of AI as it moves out into verticals, private AI, edge, and distributed cloud. There's more to AI than LLMs and SLMs, and vertical/domain-specific models will dominate the new deployments outside of the large cloud players,” Mike Dvorkin, a cofounder and CTO of cloud networking company Hedgehog, told me in a recent interview. “The opportunity is immense, and it will require new thinking about infrastructure and how it's consumed."

AI To Drive Private AI and Hybrid Infrastructure​

If Dvorkin, a former distinguished engineer at Cisco, is right—the AI edge infrastructure market will be gigantic.
This conversation has popped up in more discussions I’ve witnessed recently, where some technologists have estimated the AI market could flip from 80% modeling and 20% inferencing to the reverse. In addition, CIOs I’ve listened to recently have pointed out that the private AI model will be much more useful in specific industries such as healthcare and finance, where enterprise customers may want to own as much of their own data and models as possible.


For this reason, the AI wave will drive more diverse hybrid and multicloud architectures—including private clouds—as the needs for data, analytics and connectivity spread across multiple infrastructures.
“We have a hybrid cloud model,” said George Maddalino, the CTO of Mastercard, at a recent tech event hosted by the Economist in New York. “We have workloads on prem, workloads on hyperscaler. You can see us traversing from a bank's datacenter across a hyperscaler cloud to a retailer in the cloud. By default we end up in an environment that's multicloud.”
Nizar Trigui, CTO with GXO Logistics, also pointed to the idea that AI application connectivity to data will be pervasive, for any location.
“Most of us are going through some kind of digital transformation,” said Trigui. "How do we create more value for the customers? We are creating value out of data in 1,000 warehouses around the world, digitally connected.”

The biggest takeaway from Qualcomm’s recent rise is the enthusiasm for AI everywhere, meaning processing and inferencing data wherever it lives. This endeavor will not be limited to infrastructure or models owned exclusively by the hyperscalers; it will spread far and wide across enterprise, edge, and IoT.



 
  • Like
  • Love
  • Fire
Reactions: 41 users

Wags

Regular
Some days BRN makes my head hurt.

Today is one of those days.

 
  • Haha
  • Like
  • Fire
Reactions: 14 users

HopalongPetrovski

I'm Spartacus!
  • Haha
  • Like
Reactions: 8 users

jtardif999

Regular
That's indeed the answer to my question... a couple of posts ago.

BTW: <!DOCTYPE html><!-- Last Published: Fri May 24 2024 17:40:12 GMT+0000 (Coordinated Universal Time) --><html data-wf-domain="www.quantumventura.tech" .....

Very nice find... and we all know that Loihi is not the best choice 😉

While Loihi 2 offers more scalability and programmability, Akida's key advantage is its on-chip learning capability and extreme power efficiency for edge AI applications. This allows Akida to continue learning and adjusting to new data at the edge, without relying on external processors or data transfer... necessary for network attack detection... Akida also brings low power consumption and lower compute cost to the table... Loihi 2 does not have this on-chip learning capability and needs a separate CPU. (To my understanding)

Loihi 2 does provide advantages like faster processing, better scalability across chips, and more programmability. But Loihi and Loihi 2 chips are currently only available for research and evaluation purposes through Intel's Neuromorphic Research Community (INRC).

.
Why would Loihi scale better than Akida? Akida being available as IP puts it in a scaling league of its own, don’t you think? The node offerings mean it can be embedded as small as two nodes or as large as 256 nodes. I don’t think BrainChip have ever left Akida in the data centre off the table either. Versatility is definitely a thing, being able to offer up both hardware and software versions of both TENNs and Akida, something that I’m pretty sure Loihi is not capable of. TENNs makes our product line extremely agile imo.
 
Last edited:
  • Like
  • Fire
Reactions: 22 users

Cirat

Regular
  • Like
  • Fire
Reactions: 7 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Not sure if this has anything to do with us but is evidence of further interest in utilising facial recognition for vehicle access.


Ford Motor patent filing for facial recognition vehicle entry system published​

May 29, 2024, 5:52 pm EDT | Abhishek Jadhav
CATEGORIES Access Control | Biometrics News | Facial Recognition
Ford Motor patent filing for facial recognition vehicle entry system published

A patent filing from the Ford Motor Company for a facial recognition vehicle entry system has been published by the U.S. Patent and Trademark Office. This technology utilizes both biometric and a non-biometric fallback authentication method to allow access to a vehicle.


The system described in the patent application is designed to integrate dual authentication modes, ensuring that only authorized individuals can gain access to the vehicle. The system includes various components, including image sensors, lockout devices, and a controller for the authentication process.
During primary authentication based on face biometrics, the system captures a real-time image of the person attempting to access the vehicle. The captured image is then analyzed to determine if it matches any stored facial patterns associated with authorized users. If a match is found, the controller grants access to the vehicle by commanding the lockout device to unlock.
If the primary biometric authentication fails and the captured image doesn’t match any stored facial patterns, the system prompts for secondary authentication. It then captures a second image and analyzes it for a non-biometric code. The system offers adjustable security levels based on the complexity of the secondary code. For instance, the code can be alphanumeric, gestures, or graphical depictions. The document depicts the use of gesture recognition to recognize a sequence of hand movements as an authentication method.
For activity logging, whenever a secondary user gains access using a secondary code, the system captures and stores an image of this event. This stored image enables the primary user to review and verify the activities and identities of those who accessed the vehicle.
“The invention may be practiced in any vehicle with exterior cameras that are tied to a vehicle controller with image processing capability. Facial recognition may be retained as the primary device-free authentication modality, while a secondary authentication option can reinstate some benefits associated with the use of keypad code entry,” the patent application explains.
Ford Global Technologies was awarded a patent for a facial recognition system to identify drivers and unlock car doors in 2022. The system could start the vehicle, monitor the health conditions of occupants, and even identify and assess the threat level of animals outside the vehicle.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 19 users

Newk R

Regular
I've been off line for a while. Just got out of hospital after major kidney surgery. The pain has been quite staggering to be honest.
Anyway, for a bit of therapy, I thought I'd take a quick look here, and it really has helped. The pain of the operation has paled into insignificance and is now a distant memory.
 
  • Like
  • Love
  • Haha
Reactions: 41 users

Diogenese

Top 20
Not sure if this has anything to do with us but is evidence of further interest in utilising facial recognition for vehicle access.


Ford Motor patent filing for facial recognition vehicle entry system published​

Cigarless once again.


US2024166165A1 FACIAL RECOGNITION ENTRY SYSTEM WITH SECONDARY AUTHENTICATION 20221121 Publication: US2024166165A1·2024-05-23





[0023] FIG. 2 shows authentication controller 16 in greater detail. A main processor 30 includes logic 31 which directs operation according to the processes described herein. A program block 32 performs facial recognition and/or gesture recognition by comparing captured images to prestored templates (biometric and nonbiometric).

It looks like Ford are using software to compare stored images with the camera output (processor 30; program block 32). There is no mention of NNs or AI.

Still, that does not conclusively rule out the use of Akida simulation software, but the filing does not closely describe a NN application. And doing an old-fashioned image comparison would use a lot of power. Wouldn't it be funny if the car recognizes the driver, but then has a flat battery from the effort.

TeNNs?

After all there is the hypothesis that Valeo and Mercedes could be using Akida simulation software ...
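For what it's worth, the dual-path logic in the filing (primary facial match, non-biometric fallback code, plus activity logging of secondary entries) can be sketched roughly like this. Every name here is hypothetical; the patent describes behaviour, not an API:

```python
# Illustrative sketch of the dual-authentication flow described in the Ford
# filing (US2024166165A1). All names are hypothetical -- the patent
# describes behaviour, not an API.

def authenticate(image, face_templates, valid_codes, match_face, extract_code):
    """Return (granted, method, logged_image)."""
    # Primary path: compare the captured image against enrolled face templates.
    if any(match_face(image, t) for t in face_templates):
        return True, "face", None

    # Secondary path: look for a non-biometric code (alphanumeric, gesture,
    # graphical) in the capture. The filing stores an image of this event so
    # the primary user can later review who accessed the vehicle.
    code = extract_code(image)
    if code is not None and code in valid_codes:
        return True, "code", image

    return False, None, None
```

The `match_face` and `extract_code` callables stand in for whatever image-processing the controller runs; the filing leaves that open, which is why it doesn't rule a NN implementation in or out.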
 
  • Like
  • Haha
  • Love
Reactions: 17 users
Get well soon... just prayed for you.


I've been off line for a while. Just got out of hospital after major kidney surgery. The pain has been quite staggering to be honest.
Anyway, for a bit of therapy, I thought I'd take a quick look here, and it really has helped. The pain of the operation has paled into insignificance and is now a distant memory.
 
  • Love
  • Like
  • Fire
Reactions: 11 users
Could be the CyberNeuro-RT now live and available :D 🚀 🔥

You'd like to think that, if they're selling it, they'll need to be sourcing the two "neuromorphic offerings" either directly from BRN and Intel via their own licences, or via an existing licensee, e.g. MegaChips or Renesas?





CyberNeuro-RT​


An AI/ML-driven, highly-scalable, real-time network defense & threat intelligence tool with GPU or low-power neuromorphic chip deployment




A Quantum Ventura, Lockheed Martin, and Penn State Innovation
Quantum Ventura’s CyberNeuro-RT (CNRT) technology offering has been developed in partnership with Lockheed Martin Co.’s MFC Division and Pennsylvania State University under partial funding from the U.S. Department of Energy.
Cutting-Edge Unsupervised ML

Scalable Unsupervised Outlier Detection (SUOD)
  • Large-scale heterogeneous outlier detection
  • 6 ML Algo Ensemble
  • Model Approximation for Complex Models
  • Execution Efficiency Improvement for Task Load Balancing in Distributed System
  • Variational Autoencoder (VAE)
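To make the ensemble idea in those bullets concrete, here is a toy sketch (not Quantum Ventura's or the SUOD framework's actual pipeline): run several cheap detectors over the same data, min-max normalize each detector's scores, and average them, so points that multiple detectors agree are unusual float to the top.

```python
import numpy as np

# Toy illustration of the ensemble principle behind scalable unsupervised
# outlier detection: combine several cheap detectors by averaging their
# normalized anomaly scores. The real SUOD framework adds model
# approximation and distributed load balancing on top of this idea.

def zscore_scores(x):
    # Distance from the mean, in standard deviations.
    return np.abs((x - x.mean()) / (x.std() + 1e-12))

def median_dist_scores(x):
    # Robust alternative: absolute distance from the median.
    return np.abs(x - np.median(x))

def minmax(s):
    # Rescale one detector's scores to [0, 1] so detectors are comparable.
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def ensemble_outlier_scores(x, detectors=(zscore_scores, median_dist_scores)):
    # Higher averaged score = more anomalous.
    return np.mean([minmax(d(x)) for d in detectors], axis=0)
```

On a vector like `[1.0, 1.1, 0.9, 1.05, 10.0]`, the last point scores highest under both detectors and dominates the ensemble score.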
Variational Autoencoder (VAE)
  • Encoder-Decoder Architecture
  • Variational => Highly Regularized Encoder
  • Trained to Minimize Reconstruction Error Between Initial Input and Reconstructed Output
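As a reference point, the standard VAE objective those bullets describe is reconstruction error plus a KL term that regularizes the encoder's latent Gaussian toward a standard normal. This is the generic formulation, not CNRT's actual implementation:

```python
import numpy as np

# Generic VAE objective: reconstruction error + KL regularizer.
# mu and logvar parameterize the encoder's diagonal-Gaussian latent.

def vae_loss(x, x_recon, mu, logvar):
    recon = np.sum((x - x_recon) ** 2)  # reconstruction error
    # KL( N(mu, exp(logvar)) || N(0, I) ), summed over latent dimensions
    kl = -0.5 * np.sum(1 + logvar - mu ** 2 - np.exp(logvar))
    return recon + kl
```

For anomaly detection, the trained autoencoder flags inputs whose reconstruction error is abnormally high relative to the "normal" traffic it was trained on.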
75x Dataset Growth in Under 2 Months
  1. Existing Dataset Ingestion: Proprietary system enables ingestion of any existing network capture dataset with flexible support for any labelling system
  2. From-the-wild Zero Day Sampling: System enables capturing and simulation of novel threats for additional data sampling
  3. Data Generation via Simulation: ThreatATI database and proprietary ingestion system enable sampling and augmentation for cataloged threats from proprietary and public threat databases
Proprietary Pipeline Adapts to Any Dataset

Follow Threats Home with Dark Web Tracking

At-the-edge Neuromorphic Processing
◯ Two offerings from the leading neuromorphic developers: Intel and Brainchip
◯ Small form factor, orders of magnitude less power consumption than a GPU
◯ On-chip learning for deployment network specific attack detection



Intel Loihi

Brainchip Akida

Dashboard Minimizes Operator Fatigue
Robust, Multi-Faceted, User-Friendly Cyber Analyst Dashboard

Operator Fatigue Allows Cyber Attacks To Happen
  • Large numbers of false alarms cause real threats to be missed
  • False alarms fatigue the cyber analyst further increasing risk of missed threats

The Cyber Neuro-RT Dashboard Is Designed To Minimize All Sources Of Analyst Fatigue While Presenting Timely And Meaningful Data Insights
  • AI-based false alarms are minimized (trained for a minimal false positive rate)
  • Possible threats are ranked by importance and confidence
  • Only the most relevant and likely alarms are actioned upon
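A minimal sketch of that ranking step (field names are illustrative, not CNRT's schema): score each alert by importance times confidence and surface only the top slice, so analysts see the likeliest real threats first and the long tail of probable false alarms is suppressed.

```python
# Hypothetical alert-triage helper: rank alerts by importance x confidence
# and keep only the top_k, suppressing the long tail of likely false alarms
# that causes analyst fatigue.

def triage(alerts, top_k=3):
    ranked = sorted(alerts,
                    key=lambda a: a["importance"] * a["confidence"],
                    reverse=True)
    return ranked[:top_k]
```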

Further to my previous QV post... it appears they also set up a new company in Japan mid last year.

This site goes into more explanation on the product and architecture and where we sit in the system integration flow.



 
  • Like
  • Fire
  • Love
Reactions: 32 users
I've been off line for a while. Just got out of hospital after major kidney surgery. The pain has been quite staggering to be honest.
Anyway, for a bit of therapy, I thought I'd take a quick look here, and it really has helped. The pain of the operation has paled into insignificance and is now a distant memory.
 
  • Haha
  • Like
Reactions: 3 users