Yes, it is more than nothing, but it is only 63k€ or ~A$104,500. I might be mistaken, but it seems to me that somebody in Germany has been buying BIG today via Tradegate.
How Neuromorphic Processing and Self-Searching Storage Can Slash Cyber Risk for Federal Agencies
The amount of information organizations must process at the edge has exploded. This is especially true for federal agencies and the military, which…
www.datanami.com
Here’s how neuromorphic processing can transform cyber risk at the edge. Start with a neuromorphic processing unit (NPU) built on a high-end field-programmable gate array (FPGA) integrated circuit customized to accelerate key workloads. Add a few dozen terabytes of local SSD storage. The result is an NPU-based, self-searching storage appliance that can perform extremely fast searches of very large datasets – at the edge and at very low power.
Just how quickly can NPU technology search a large dataset? Combine multiple NPU appliances in a rack, and you can search 1 PB of data in about 12 minutes. To achieve that result with traditional technology, you’d need 62 server racks – and a very large budget. In testing, the NPU appliance rack requires 84% lower CapEx, 99% lower OpEx, and 99% less power.
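Those headline numbers can be sanity-checked with quick arithmetic. A rough sketch follows; only the 1 PB in ~12 minutes claim comes from the article, and the 40-appliances-per-rack split is my own hypothetical figure for illustration:

```python
# Back-of-envelope check of the claimed search rate: 1 PB in ~12 minutes.
# The PB/12-min figure is from the article; the appliance count is assumed.

PETABYTE = 10**15  # bytes (decimal units, as storage vendors count)

data_bytes = 1 * PETABYTE
search_seconds = 12 * 60

scan_rate = data_bytes / search_seconds  # aggregate bytes/second for the rack
print(f"Aggregate scan rate: {scan_rate / 10**12:.2f} TB/s")  # ~1.39 TB/s

# Spread over, say, 40 appliances in a rack (hypothetical count),
# each appliance would need to stream its local SSDs at roughly:
per_appliance = scan_rate / 40
print(f"Per-appliance: {per_appliance / 10**9:.1f} GB/s")  # ~34.7 GB/s
```

That per-appliance rate is plausible for a few dozen NVMe SSDs scanned in parallel, which is consistent with the "self-searching storage" framing above.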
Making the Use Case for NPU Appliances
The NPU search technology was developed in collaboration with Sandia National Laboratories, an R&D lab of the Department of Energy. Today Sandia is actively using multiple NPU systems for cyber defense and other use cases.
There are other potential use cases for an NPU appliance. For instance, one Fortune 50 company used the technology for data labeling to train a machine learning algorithm. The organization reduced the time required from one month to 22 minutes. In the meantime, for federal agencies and the military, neuromorphic processing and self-searching storage is an achievable, cost-effective solution for protecting sensitive data and slashing cyber risk at the edge.
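For what it's worth, the data-labeling anecdote implies a speedup of roughly three orders of magnitude. The arithmetic below assumes a 30-day month counted around the clock, since the article doesn't say how "one month" was measured:

```python
# Rough speedup implied by the Fortune 50 data-labeling anecdote:
# one month of work cut to 22 minutes (both figures from the article).

month_minutes = 30 * 24 * 60  # assumed: 30-day month, around the clock
npu_minutes = 22

speedup = month_minutes / npu_minutes
print(f"~{round(speedup)}x faster")  # ~1964x
```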
Sandia uses Intel Loihi.
Interesting considering our relationship with Quantum Ventura
Quantum Ventura
www.quantumventura.com
Department of Energy: "Cyber threat-detection using neuromorphic computing" - SBIR Phase 1
I wonder how that’s going for them…
Hopefully they’ll come around to us. I’m sure they’ve heard good things….
As AI becomes pervasive EVERYTHING is going to happen at the EDGE!!!
A MUST WATCH video IMO.
Cristiano Amon describing how huge the generative AI opportunity is and how great it will be for the semiconductor industry, as many of the use cases are going to happen on devices, at the edge.
22 June 2023: Qualcomm CEO Cristiano Amon at the Bloomberg Technology Summit.
I dunno B.

One concern I have about this video is that Cristiano explains, at approx. 6.25 mins, how Qualcomm spent a decade on this before it was popular. What they did was come up with the ability to do very high-performance accelerated computing on the device, running a very large number of parameters without compromising the battery life of the phone. He goes on to describe how they've developed some very unique technology, the most efficient accelerated computing on a performance-per-watt basis, which they're bringing to all their devices (i.e. the new Snapdragon processor this year to run in excess of 10 billion parameters, Windows on Arm to run over 20 billion parameters, and for a vehicle 40-60 billion parameters, all on the device).
It may be just my impression, but I thought Cristiano was a bit evasive when, at approx. 8.40 mins, the interviewer suggested that there was much scepticism coming from engineers about how all of us will be running generative AI on the device WITHOUT an INTERNET connection, because the engineers doubt the processing power and the work on the algorithms. The interviewer actually asks, "Where did Qualcomm do the innovation?" Without really explaining what the innovation is, Cristiano seems to skirt around the topic by describing hybrid AI in general terms.
Just trying to get a feel for where we fit, or might fit, into this picture. I am still a bit confused about Qualcomm's technology in comparison to ours. If Qualcomm is so advanced at the edge, why haven't they been shown as a competitor on the slides presented previously comparing Akida's performance to TrueNorth, etc.?
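To put Amon's parameter counts in perspective, here is some back-of-envelope arithmetic on what those model sizes mean for on-device memory. The bit widths are my own assumption for illustration; he doesn't specify precision in the interview:

```python
# Rough weight-storage footprint for the on-device model sizes Amon
# mentions (10B on phone, 20B on Windows on Arm, up to 60B in a vehicle),
# at a couple of assumed quantization widths. Illustrative arithmetic only.

def model_gb(params_billion: float, bits: int) -> float:
    """Weight storage in GB (10^9 bytes) for a parameter count and bit width."""
    return params_billion * 10**9 * bits / 8 / 10**9

for params in (10, 20, 60):
    print(f"{params}B params: "
          f"INT8 = {model_gb(params, 8):.0f} GB, "
          f"INT4 = {model_gb(params, 4):.0f} GB")
```

At INT4, a 10-billion-parameter model is about 5 GB of weights, which is the kind of footprint that starts to fit alongside everything else in a flagship phone's RAM, so the emphasis on low-precision support is not surprising.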
I still feel there's a good chance Qualcomm is using our technology. Otherwise, why would Qualcomm approach Prophesee and form a partnership after Prophesee became a partner to BrainChip? Qualcomm could have approached Prophesee before us and closed those doors to BrainChip a lot earlier.
Timely discussion and considerations on IP.

I dunno B.
Either we are in there somehow, or Qualcomm has found another method for achieving similar results?
What Cristiano is describing sounds very like what we do.
Hopefully we are somehow deeply embedded in their process and, as with a myriad of applications coming to market soon, all we'll initially know of is the resultant revenue stream.
At this point in the marketing cycle, if I had an extraordinary product hoping to get the jump on my competition, and it was enhanced by Akida, which is also available COTS to anyone out there with the dollars and good intent, I wouldn't be openly discussing my secret sauce and would be obfuscating as required in order for the market to believe we are the golden child.
Of course it may be that we have no involvement in the Qualcomm product and end up in competition or in litigation with them, but these too, are likely scenarios in our future.
Even 50% of the estimated trillion dollar AIoT market will buy us enough kitty litter, green goblin boots and tinned tuna to fill our yachts many times over.
Hmmmmm..........c'mon IFS.....ya know ya wanna FFS
From a few hours ago.
JULY 27, 2023 BY LIDIA PERSKA
Intel CEO: AI to be Integrated into All Intel Products
Intel CEO Pat Gelsinger announced during the company’s Q2 2023 earnings call that Intel is planning to incorporate artificial intelligence (AI) into every product it develops. This comes as Intel prepares to release Meteor Lake, its first consumer chip with a built-in neural processor for machine learning tasks.
Previously, Intel had hinted that only their premium Ultra chips would feature AI coprocessors. However, Gelsinger’s statement suggests that AI will eventually be integrated into all of Intel’s offerings.
Gelsinger often emphasizes the “superpowers” of technology companies, which typically include AI and cloud capabilities. Now, however, he suggests that AI and cloud are not mutually exclusive. Gelsinger points out that certain AI-powered tasks, such as real-time language translation in video calls, real-time transcription, automation, inference, and content generation, need to be done on the client device rather than relying on the cloud. He highlights the importance of edge computing, where AI processing occurs locally rather than round-tripping data to the cloud.
Gelsinger envisions AI integration in various domains, including consumer devices, enterprise data centers, retail, manufacturing, and industrial use cases. He even mentions the potential for AI to be integrated into hearing aids.
This strategy is crucial for Intel to compete with Nvidia, the dominant player in AI chips powering cloud services. While Nvidia has seen immense success in the AI market, Intel aims to find its own path by integrating AI into its products. This aligns with the growing demand for edge computing and the desire for more localized AI processing.
Furthermore, Gelsinger’s remarks highlight the shift in the tech industry towards AI-driven innovation. Microsoft, for example, has embraced AI, with the forthcoming Windows 12 rumored to integrate Intel’s Meteor Lake chip with its built-in neural engine. Similarly, Microsoft’s AI-powered Copilot tool is expected to revolutionize document editing.
Overall, Intel’s plans to incorporate…
I agree, you want it in the market first, then get the mass adoption, then everyone wants to know how, then the 2nd movers and so on start spruiking their products saying, “see, we have the same revolutionary tech they have to make this amazing product”.
Great. Now we have Patent Trolls to add into the mix.

Timely discussion and considerations on IP.
Article from a few hours ago makes an interesting read as there are some sharks out there that aren't necessarily a competitor.
Discusses instances moving into the AI IP arena now.
Hadn't been aware of this element of industries re IP.
The anatomy of a patent litigation target | TechCrunch
What does a patent litigation target look like in 2023? And is there anything that might make one startup more at risk of lawsuits?
techcrunch.com
Hi Bravo,
Back in February, I posted the Snapdragon Hexagon spec sheet:
https://www.qualcomm.com/content/da...ocuments/Snapdragon-8-Gen-2-Product-Brief.pdf
Artificial Intelligence
Qualcomm® Adreno™ GPU
Qualcomm® Kryo™ CPU
Qualcomm® Hexagon™ Processor
• Fused AI Accelerator Architecture
• Hexagon Tensor Accelerator
• Hexagon Vector eXtensions
• Hexagon Scalar Accelerator
• Hexagon Direct Link
• Support for mix precision (INT8+INT16)
• Support for all precisions (INT4, INT8, INT16, FP16)
• Micro Tile Inferencing
Qualcomm® Sensing Hub
• Dual AI Processors for audio and sensors
• Always-Sensing camera
Our AI Engine includes the Qualcomm® Hexagon™ Processor, with revolutionary micro tile inferencing and faster Tensor accelerators for up to 4.35x faster AI performance than its predecessor. Plus, support for INT4 precision boosts performance-per-watt by 60% for sustained AI inferencing.
MicroTiles are part of transformers, such as ViT, which is said to be more efficient than LSTM.
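For anyone wondering what the INT4/INT8 support in that spec sheet actually buys, here is a minimal sketch of standard symmetric weight quantization. This is not Qualcomm's actual pipeline, just the textbook scale-and-round scheme, to show how lower bit widths trade a little accuracy for performance-per-watt:

```python
# Minimal sketch of symmetric weight quantization at two of the widths
# the Hexagon spec sheet lists (INT4, INT8). Standard scale-and-round
# scheme for intuition only; not Qualcomm's implementation.

def quantize(weights, bits):
    qmax = 2 ** (bits - 1) - 1                   # 7 for INT4, 127 for INT8
    scale = max(abs(w) for w in weights) / qmax  # one scale per tensor
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.9, -0.5, 0.03, -0.7]  # toy weights
for bits in (8, 4):
    q, s = quantize(w, bits)
    err = max(abs(a - b) for a, b in zip(dequantize(q, s), w))
    print(f"INT{bits}: levels={q}, max abs error={err:.4f}")
```

Halving the bit width halves the weight memory and bandwidth (hence the performance-per-watt gain the brief cites), at the cost of coarser levels and larger rounding error, which is why "sustained AI inferencing" is the use case Qualcomm calls out for INT4.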