BRN Discussion Ongoing

IloveLamp

Top 20
[LinkedIn screenshot attachment, 29 July 2023]
 
  • Like
Reactions: 18 users

cosors

👀
I might be mistaken, but it seems to me that somebody in Germany has been buying BIG today via Tradegate.
Yes, it is more than nothing, but it is only €63k, or ~A$104,500.
 
  • Like
  • Fire
Reactions: 9 users

TopCat

Regular

Here’s how neuromorphic processing can transform cyber risk at the edge. Start with a neuromorphic processing unit (NPU) built on a high-end field-programmable gate array (FPGA) integrated circuit customized to accelerate key workloads. Add a few dozen terabytes of local SSD storage. The result is an NPU-based, self-searching storage appliance that can perform extremely fast searches of very large datasets – at the edge and at very low power.

Just how quickly can NPU technology search a large dataset? Combine multiple NPU appliances in a rack, and you can search 1 PB of data in about 12 minutes. To achieve that result with traditional technology, you’d need 62 server racks – and a very large budget. In testing, the NPU appliance rack requires 84% lower CapEx, 99% lower OpEx, and 99% less power.
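As a rough sanity check on those figures (the arithmetic below is mine, not from the article, and assumes 1 PB = 10^15 bytes scanned end-to-end):

```python
# Back-of-the-envelope check on the claims above. Assumptions are mine,
# not from the article: 1 PB = 10**15 bytes, full scan in 12 minutes.

petabyte = 10**15            # bytes
scan_seconds = 12 * 60       # 12 minutes

throughput_tb_s = petabyte / scan_seconds / 1e12   # ~1.4 TB/s sustained
rack_ratio = 62 / 1                                # claimed traditional racks per NPU rack

print(f"Implied search throughput: {throughput_tb_s:.2f} TB/s")
print(f"Claimed rack consolidation: {rack_ratio:.0f}x")
```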


Making the Use Case for NPU Appliances

The NPU search technology was developed in collaboration with Sandia National Laboratories, an R&D lab of the Department of Energy. Today Sandia is actively using multiple NPU systems for cyber defense and other use cases.


There are other potential use cases for an NPU appliance. For instance, one Fortune 50 company used the technology for data labeling to train a machine learning algorithm. The organization reduced the time required from one month to 22 minutes. In the meantime, for federal agencies and the military, neuromorphic processing and self-searching storage is an achievable, cost-effective solution for protecting sensitive data and slashing cyber risk at the edge.
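Taking the data-labeling claim at face value (and assuming a 30-day month, which is my assumption, not the article's), that works out to roughly a 2,000x speed-up:

```python
# Speed-up implied by "one month to 22 minutes" (assuming a 30-day month).
month_minutes = 30 * 24 * 60      # 43,200 minutes
speedup = month_minutes / 22      # ~1,964x
print(f"Implied speed-up: {speedup:,.0f}x")
```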
 
  • Like
  • Fire
  • Wow
Reactions: 8 users

Quiltman

Regular

I applaud the foresight of Peter, Anil and the team, all those years ago, in narrowing in on Edge AI and related applications when deciding where to focus BrainChip's resources and which market offered the biggest opportunity.

In my opinion they nailed it.

With that vision, and the ability to plan for and protect (via IP) our unique role within this market over many years, I'm confident we will carve out a large slice of the pie, enough to make holders on this forum very happy with their own foresight and investment.

From this article:

“I really believe this is the beginning of a tsunami wave,” he told EE Times in an exclusive interview. “We’re going to see a tsunami of products coming with ML functionality: It’s only going to increase, and it’s going to attract a lot of attention.”

STMicro has roughly a quarter of the microcontroller (MCU) market today, shipping between five and 10 million STM32 MCUs every day. According to El-Ouazzane, over the next five years, 500 million of those MCUs will be running some form of tinyML or AI workloads.
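For context, a quick back-of-the-envelope on those numbers (my assumptions, not the article's: the mid-point of the quoted shipment range and flat volumes over the five years):

```python
# Rough check of the STM32 figures quoted above. Assumptions are mine:
# mid-point of the 5-10 million/day range, flat shipments for five years.

per_day = (5e6 + 10e6) / 2             # 7.5 million MCUs per day
total_5yr = per_day * 365 * 5          # ~13.7 billion MCUs shipped
ai_mcus = 500e6                        # MCUs expected to run tinyML/AI
share = ai_mcus / total_5yr            # ~3.7% of shipments

print(f"STM32 shipped over five years: ~{total_5yr/1e9:.1f} billion")
print(f"Share running tinyML/AI: ~{share:.1%}")
```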

TinyML, which refers to running AI or machine learning inference on otherwise generic MCUs, “will become the largest endpoint market in the world,” he said.

El-Ouazzane—who previously served as CEO of edge AI chip startup Movidius and COO of Intel’s AI product group—and his team at STMicro have been hard at work the last few years bringing AI capabilities to the company’s portfolio.


“While I believe [tinyML] is the biggest market in the making, I’m also humbled by the fact that we have gone through three to five years of education of management of companies who make fans, pumps, inverters, washing machine drum companies—all those people are coming to it,” he said. “We live in the world of ChatGPT, but all these laggards are finally coming to use AI. It was my vision for Movidius back in the day. I thought it would happen… it is taking a long time, but we see it coming now.”

However, El-Ouazzane is clear that the N6 is not the end goal for STMicro in AI.

“If we nominally say we want to reach our performance-per-Watt end goal between 2025 and 2030, you can assume N6 is one-tenth of the way there,” he said. “That’s the amount of boost you’re going to see in the coming years. The N6 is a kick-ass product, and it is getting a lot of traction in AV-centric use cases, but there is an explosion of performance coming: There will be neural networks on microcontrollers fusing vision, audio and time series data.”
 
  • Like
  • Fire
  • Love
Reactions: 60 users

TopCat

Regular

“The NPU search technology was developed in collaboration with Sandia National Laboratories, an R&D lab of the Department of Energy. Today Sandia is actively using multiple NPU systems for cyber defense and other use cases.”

Interesting considering our relationship with Quantum Ventura 🤔


Department of Energy: "Cyber threat-detection using neuromorphic computing" - SBIR Phase 1
 
  • Like
  • Love
  • Fire
Reactions: 17 users

IloveLamp

Top 20
  • Like
  • Love
Reactions: 7 users

JB49

Regular
Sandia use Intel Loihi
 
  • Like
Reactions: 2 users

MrNick

Regular
Sandia use Intel Loihi
I wonder how that’s going for them…
 
  • Haha
  • Like
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
As AI becomes pervasive EVERYTHING is going to happen at the EDGE!!!

A MUST WATCH video IMO.

Cristiano Amon describing how huge the generative AI opportunity is and how great it will be for the semiconductor industry, as many of the use cases are going to happen on devices, at the edge.



22 June 2023: Qualcomm CEO Cristiano Amon at the Bloomberg Technology Summit.
 
  • Like
  • Fire
  • Love
Reactions: 25 users

HopalongPetrovski

I'm Spartacus!


To all the wonderful dot joiners out there.
God bless you and your little cotton sox.
🤣
Thank you all for sharing.
 
  • Like
  • Haha
  • Love
Reactions: 27 users

TopCat

Regular
Sandia use Intel Loihi
Hopefully they’ll come around to us. I’m sure they’ve heard good things….

Kristofor D. Carlson, PhD received his PhD in Physics from Purdue University. Kristofor spent four years as a postdoctoral scholar at UC Irvine where he studied spiking neural networks, evolutionary algorithms, and neuromorphic systems. Afterwards, he worked as a postdoctoral appointee at Sandia National Laboratories for two years where he studied uncertainty quantification and neuromorphic computing. Kristofor has worked at Brainchip as a research scientist for six years and has been Manager of Applied Research for the past two years. At BrainChip he focuses on developing and optimizing neuromorphic and machine learning algorithms for deployment on BrainChip’s latest neural network architectures.
 
  • Like
  • Fire
  • Love
Reactions: 23 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

One concern I have about this video is that Cristiano explains, at approximately 6.25 minutes, how Qualcomm spent a decade on this before it was popular. What they did was come up with the ability to do very high-performance accelerated computing on the device, running a very large number of parameters without compromising the battery life of the phone. He goes on to describe how they've developed some very unique technology which is the most efficient accelerated computing on a performance-per-watt basis, which they're bringing to all their devices (i.e. the new Snapdragon processor this year to run in excess of 10 billion parameters, Windows on Arm to run over 20 billion parameters, and for a vehicle 40-60 billion parameters, all on the device).
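For a sense of scale, here is a rough sketch of what those parameter counts mean for on-device memory (my assumptions, not Qualcomm's figures: weights dominate, are stored at the stated precision with no overhead, and 50 billion is taken as the mid-point of the vehicle range):

```python
# Rough on-device memory needed just for model weights at various precisions.
# Assumptions are mine, not Qualcomm's: weights dominate memory, no overhead,
# and 50 billion parameters as a mid-point for the 40-60 billion vehicle case.

def weight_memory_gb(params, bits):
    """Approximate weight storage in gigabytes."""
    return params * bits / 8 / 1e9

targets = [("phone", 10e9), ("Windows on Arm", 20e9), ("vehicle", 50e9)]
for name, params in targets:
    sizes = ", ".join(f"{bits}-bit: {weight_memory_gb(params, bits):.0f} GB"
                      for bits in (4, 8, 16))
    print(f"{name:>15} ({params/1e9:.0f}B params) -> {sizes}")
```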

It may be just my impression, but I thought Cristiano was a bit evasive when, at about 8.40 minutes, the interviewer suggested that there was much scepticism coming from engineers about how all of us will be running generative AI on the device WITHOUT an INTERNET connection, because the engineers doubt the processing power and they doubt the work on the algorithms. The interviewer actually asks, "Where did Qualcomm do the innovation?" Without really explaining what the innovation is, Cristiano seems to skirt around the topic by describing in general terms what hybrid AI is.

Just trying to get a feel for where we fit, or might fit, into this picture. I am still a bit confused about Qualcomm's technology in comparison to ours. If Qualcomm is so advanced at the edge, why hasn't it been shown as a competitor on the slides presented previously comparing Akida's performance to TrueNorth, etc.?
 
  • Like
  • Thinking
  • Love
Reactions: 30 users

HopalongPetrovski

I'm Spartacus!
I dunno B.
Either we are in there somehow, or Qualcomm has found another method for achieving similar results?
What Cristiano is describing sounds very like what we do.
Hopefully we are somehow deeply embedded in their process and, as with a myriad of applications coming to market soon, all we'll initially know of is the resultant revenue stream.
At this point in the marketing cycle, if I had an extraordinary product hoping to get the jump on my competition, and it was enhanced by Akida, which is also available COTS to anyone out there with the dollars and good intent, I wouldn't be openly discussing my secret sauce and would be obfuscating as required in order for the market to believe we are the golden child.
Of course it may be that we have no involvement in the Qualcomm product and end up in competition or in litigation with them, but these too, are likely scenarios in our future.
Even 50% of the estimated trillion dollar AIoT market will buy us enough kitty litter, green goblin boots and tinned tuna to fill our yachts many times over. 🤣
 
  • Like
  • Love
  • Haha
Reactions: 22 users

rgupta

Regular
I still feel there is a good chance Qualcomm is using our technology. Otherwise, why did Qualcomm approach Prophesee and strike a partnership after Prophesee had become a partner of BrainChip? Qualcomm could have approached Prophesee before us and closed those doors to BrainChip much earlier.
But all the answers lie with time, in the future.
Dyor
 
  • Like
  • Thinking
Reactions: 14 users
Timely discussion and considerations on IP.

An article from a few hours ago makes an interesting read, as there are some sharks out there that aren't necessarily competitors.

It discusses players now moving into the AI IP arena.

I hadn't been aware of this element of the industry re IP.


 
  • Like
  • Wow
  • Sad
Reactions: 8 users
Hmmmmm.....:unsure:.....c'mon IFS.....ya know ya wanna FFS :LOL:

From a few hours ago.




JULY 27, 2023 BY LIDIA PERSKA

Intel CEO: AI to be Integrated into All Intel Products

Intel CEO Pat Gelsinger announced during the company’s Q2 2023 earnings call that Intel is planning to incorporate artificial intelligence (AI) into every product it develops. This comes as Intel prepares to release Meteor Lake, its first consumer chip with a built-in neural processor for machine learning tasks.

Previously, Intel had hinted that only their premium Ultra chips would feature AI coprocessors. However, Gelsinger’s statement suggests that AI will eventually be integrated into all of Intel’s offerings.

Gelsinger often emphasizes the “superpowers” of technology companies, which typically include AI and cloud capabilities. However, he now suggests that AI and cloud are not mutually exclusive. Gelsinger points out that certain AI-powered tasks, such as real-time language translation in video calls, real-time transcription, automation inference, and content generation, need to be done on the client device rather than relying on the cloud. He highlights the importance of edge computing, where AI processing occurs locally, rather than relying on round-tripping data to the cloud.

Gelsinger envisions AI integration in various domains, including consumer devices, enterprise data centers, retail, manufacturing, and industrial use cases. He even mentions the potential for AI to be integrated into hearing aids.

This strategy is crucial for Intel to compete with Nvidia, the dominant player in AI chips powering cloud services. While Nvidia has seen immense success in the AI market, Intel aims to find its own path by integrating AI into their products. This aligns with the growing demand for edge computing and the desire for more localized AI processing.

Furthermore, Gelsinger’s remarks highlight the shift in the tech industry towards AI-driven innovation. Microsoft, for example, has embraced AI, with the forthcoming Windows 12 rumored to integrate Intel’s Meteor Lake chip with its built-in neural engine. Similarly, Microsoft’s AI-powered Copilot tool is expected to revolutionize document editing.

Overall, Intel’s plans to incorporate
I agree with HopalongPetrovski: you want it in the market first, then get the mass adoption, then everyone wants to know how; then the second movers and so on start spruiking their products, saying, “see, we have the same revolutionary tech they have to make this amazing product”.
 
  • Like
Reactions: 6 users

Diogenese

Top 20
Hi Bravo,

Back in February, I posted the Snapdragon Hexagon spec sheet:

https://www.qualcomm.com/content/da...ocuments/Snapdragon-8-Gen-2-Product-Brief.pdf

Artificial Intelligence
Qualcomm® Adreno™ GPU | Qualcomm® Kryo™ CPU | Qualcomm® Hexagon™ Processor
• Fused AI Accelerator Architecture
• Hexagon Tensor Accelerator
• Hexagon Vector eXtensions
• Hexagon Scalar Accelerator
• Hexagon Direct Link
• Support for mix precision (INT8+INT16)
• Support for all precisions (INT4, INT8, INT16, FP16)
• Micro Tile Inferencing
Qualcomm® Sensing Hub
• Dual AI Processors for audio and sensors
• Always-Sensing camera


Our AI Engine includes the Qualcomm® Hexagon™ Processor, with revolutionary micro tile inferencing and faster Tensor accelerators for up to 4.35x faster AI performance than its predecessor. Plus, support for INT4 precision boosts performance-per-watt by 60% for sustained AI inferencing.


MicroTiles are part of transformers, such as ViT, which is said to be more efficient than LSTM.
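For anyone wondering what that INT4/INT8 support buys in practice, here is a minimal, generic sketch of symmetric weight quantization. It is a textbook scheme for illustration only, not Qualcomm's (or BrainChip's) actual implementation:

```python
import numpy as np

# Minimal sketch of symmetric per-tensor weight quantization, to illustrate
# why INT4/INT8 support matters: fewer bits per weight means less memory
# traffic per inference, which is where much of the energy goes.
# Generic textbook scheme, not Qualcomm's (or BrainChip's) method.

def quantize_symmetric(w, bits):
    """Map float weights onto signed integers of the given bit width."""
    qmax = 2 ** (bits - 1) - 1                 # 127 for INT8, 7 for INT4
    scale = np.max(np.abs(w)) / qmax           # one scale factor per tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.normal(size=1000).astype(np.float32)

for bits in (8, 4):
    q, scale = quantize_symmetric(weights, bits)
    err = np.mean(np.abs(dequantize(q, scale) - weights))
    print(f"INT{bits}: memory vs FP32 = {bits}/32, mean abs error = {err:.4f}")
```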
 
  • Like
  • Thinking
  • Wow
Reactions: 18 users

HopalongPetrovski

I'm Spartacus!
Great. Now we have Patent Trolls to add into the mix. 🤣
Hurry up and Bring it, BrainChip!
We’ve been through the wringer.
Now’s the time to kick them tyres and light them fires.
Those burning shorts should give us a nice turbocharged ride back up into the multi-dollar range where we belong. 🤣
 
  • Like
  • Haha
  • Love
Reactions: 22 users