BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!
Wonder why they put us at the top of the list? Heheheh!








Neuromorphic Computing Market to Reach $20.4 Billion by 2031 By Top Research Firm​

2024-09-23
3D Technology


Guest Post by Onkar Patil | Information Technology Markets | 2024-09-23
According to Persistence Market Research, the global neuromorphic computing market is projected to grow from USD 5.4 billion in 2024 to USD 20.4 billion by 2031, with a CAGR of 20.9%, fueled by advancements in hardware and applications in robotics, healthcare, and autonomous vehicles.

Introduction: The Rise of Neuromorphic Computing
The global neuromorphic computing market is poised for significant growth, projected to expand from US$5.4 billion in 2024 to US$20.4 billion by 2031, achieving a robust CAGR of 20.9% during the forecast period. Key trends driving this growth include advancements in neuromorphic hardware and a shift beyond traditional AI applications into sectors like robotics, autonomous vehicles, and healthcare diagnostics.
The development of efficient neuromorphic algorithms for processing complex data patterns is also gaining momentum. Consumer electronics are expected to capture a substantial revenue share, with North America leading the market.
Historically, the market has grown at a CAGR of 16.7% from 2018 to 2023, underscoring its rapid evolution and expanding applications.
Understanding Neuromorphic Computing
Neuromorphic computing refers to the design of computer systems inspired by the structure and function of the human brain. Unlike traditional computing systems that rely on binary processing, neuromorphic systems use spiking neural networks to process data in a way that resembles human cognition.
This paradigm shift enables these systems to learn, adapt, and perform complex tasks with a high degree of efficiency.
The Components of Neuromorphic Systems
Neuromorphic systems typically consist of specialized hardware and software designed to emulate neural processes. Key components include:
  • Neurons and Synapses: Basic units of processing, mimicking the biological counterparts in the brain.
  • Spike-Timing Dependent Plasticity (STDP): A learning rule that adjusts the strength of connections based on the timing of neuron spikes (a minimal code sketch follows this list).
  • Event-Driven Architecture: Processing is triggered by changes in the environment, allowing for real-time data processing with minimal power consumption.
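To make these components concrete, here is a minimal, generic sketch of a leaky integrate-and-fire neuron with a pair-based STDP weight update. It uses textbook equations only, with illustrative parameter values; it is not BrainChip's, Intel's, or any other vendor's actual implementation.

```python
import numpy as np

# Illustrative parameters only; not tied to any particular neuromorphic chip.
TAU_MEM = 20.0               # membrane time constant (ms)
V_THRESH = 1.0               # spike threshold
TAU_PLUS = TAU_MINUS = 20.0  # STDP time constants (ms)
A_PLUS, A_MINUS = 0.01, 0.012

def lif_step(v, input_current, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron."""
    v += dt * (-v / TAU_MEM + input_current)
    if v >= V_THRESH:
        return 0.0, True           # reset membrane potential, emit a spike
    return v, False

def stdp_update(w, t_pre, t_post):
    """Pair-based STDP: potentiate when the pre-synaptic spike precedes the
    post-synaptic spike, depress when it follows."""
    dt_spike = t_post - t_pre
    if dt_spike > 0:
        w += A_PLUS * np.exp(-dt_spike / TAU_PLUS)
    else:
        w -= A_MINUS * np.exp(dt_spike / TAU_MINUS)
    return float(np.clip(w, 0.0, 1.0))

# Event-driven flavour: compute only when input events arrive.
v = 0.0
for t, current in [(2, 0.4), (3, 0.5), (4, 0.6)]:   # hypothetical (ms, current) events
    v, spiked = lif_step(v, current)
    print(f"t={t} ms: v={v:.3f}, spiked={spiked}")

# STDP: the weight change depends on relative spike timing.
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))   # pre before post -> stronger
print(stdp_update(0.5, t_pre=10.0, t_post=5.0))    # post before pre -> weaker
```

The event-driven loop only does work when an input event arrives, which is the property that gives neuromorphic hardware its low idle power.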
Elevate your business strategy with comprehensive market data.

Request a sample report now:
www.persistencemarketresearch.com/samples/34726

Factors Driving Market Growth
Several factors are driving the growth of the neuromorphic computing market, each contributing to the technology's increasing adoption across various sectors.
Demand for Energy-Efficient Computing
As data centers and computing systems become increasingly energy-intensive, the need for energy-efficient alternatives is paramount. Neuromorphic computing's ability to perform complex computations with significantly lower power consumption compared to traditional systems makes it an attractive option for organizations looking to reduce their carbon footprint and operational costs.
Advances in Artificial Intelligence and Machine Learning
The rapid advancements in artificial intelligence (AI) and machine learning (ML) are creating a fertile ground for neuromorphic computing. These technologies require sophisticated algorithms capable of processing large amounts of data quickly and accurately.
Neuromorphic systems, with their inherent ability to learn and adapt, are uniquely positioned to enhance AI and ML applications, leading to greater efficiency and effectiveness.
Increasing Investment in Research and Development
The neuromorphic computing sector is witnessing significant investments from both public and private sectors. Governments and organizations are allocating funds to research and development initiatives aimed at exploring the full potential of neuromorphic architectures.
This influx of capital is driving innovation and accelerating the deployment of neuromorphic technologies across various industries.
Key Applications of Neuromorphic Computing
The potential applications of neuromorphic computing are vast and varied, spanning multiple sectors. Here are some of the key areas where this technology is making significant strides:
Robotics and Autonomous Systems
Neuromorphic computing plays a crucial role in enhancing the capabilities of robots and autonomous systems. By enabling machines to process sensory information in real-time, neuromorphic architectures improve decision-making and adaptability, making them more effective in dynamic environments.
Healthcare and Medical Diagnostics
In healthcare, neuromorphic computing is being utilized to enhance medical diagnostics and patient monitoring systems. By processing vast amounts of data from medical devices and imaging systems, neuromorphic technologies can identify patterns and anomalies more quickly, leading to improved patient outcomes and more efficient care delivery.
Smart Devices and the Internet of Things (IoT)
As the IoT continues to expand, the need for intelligent processing solutions becomes increasingly critical. Neuromorphic computing offers a powerful solution for smart devices, allowing them to learn from user interactions and environmental changes.
This capability enhances functionality and provides a more personalized experience for users.
Regional Insights: Where is the Growth Happening?
The neuromorphic computing market is not limited to a specific geographical region; instead, it is experiencing growth across the globe. However, certain regions are emerging as key players in this space.
North America: A Leader in Innovation
North America is at the forefront of neuromorphic computing innovation, driven by significant investment in research and development from both private companies and government agencies. The presence of leading tech companies and research institutions is fostering collaboration and accelerating advancements in neuromorphic technologies.
Europe: A Growing Hub for Research
Europe is also emerging as a crucial player in the neuromorphic computing market. With initiatives such as the Human Brain Project, European researchers are pushing the boundaries of what is possible with neuromorphic systems.
The region's focus on AI and machine learning is further propelling growth in this sector.
Asia-Pacific: The Next Frontier
The Asia-Pacific region is expected to witness substantial growth in the neuromorphic computing market. Countries like China and Japan are investing heavily in AI research and development, positioning themselves as leaders in adopting neuromorphic technologies.
The growing demand for advanced computing solutions in industries such as robotics and healthcare is driving this growth.
Challenges and Considerations
Despite the promising outlook for the neuromorphic computing market, several challenges need to be addressed.
Technical Complexity
The technical complexity of designing and implementing neuromorphic systems presents a significant hurdle for widespread adoption. Organizations may face challenges in integrating these systems with existing infrastructure, requiring substantial investment in training and development.
Standardization and Compatibility
The lack of standardization in neuromorphic architectures can hinder interoperability between different systems. Establishing industry standards is essential to facilitate collaboration and ensure compatibility among various neuromorphic technologies.
Ethical Considerations
As with any advanced technology, neuromorphic computing raises ethical considerations regarding privacy, security, and potential misuse. Addressing these concerns will be critical in building public trust and ensuring responsible deployment of neuromorphic systems.
Key Players:
  • BrainChip Holdings Ltd.
  • Intel Corporation
  • Qualcomm
  • SynSense AG
  • Samsung Electronics Co. Ltd
  • IBM Corporation
  • SK Hynix Inc.
  • General Vision Inc.
  • GrAI Matter Labs
  • Innatera Nanosystems
The Future of Neuromorphic Computing
Looking ahead, the future of neuromorphic computing appears bright. With advancements in hardware and software, combined with increasing investment in research and development, the potential for neuromorphic systems is vast.
As organizations continue to seek more efficient and intelligent solutions, the demand for neuromorphic computing is expected to surge.
Collaboration Between Academia and Industry
To realize the full potential of neuromorphic computing, collaboration between academia and industry will be vital. Researchers can drive innovation while industry partners can facilitate the practical application of these technologies, creating a symbiotic relationship that benefits both sectors.
Continued Investment and Research
Ongoing investment in neuromorphic research will be crucial for addressing the current challenges and unlocking new applications. As organizations recognize the potential benefits of neuromorphic systems, we can expect to see a significant increase in funding and resources dedicated to this field.
Conclusion: A Transformative Force in Computing
The neuromorphic computing market is on the brink of explosive growth, with projections indicating a market size of $20.4 billion by 2031. As this technology continues to evolve, its applications across various sectors will expand, driving innovation and transforming the way we process information.
Embracing neuromorphic computing will not only enhance efficiency but also pave the way for a more intelligent and adaptive future.


 

Reactions: 46 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

AI Explained: Edge AI Computing Brings Tech to Devices​

By PYMNTS | September 23, 2024


Edge AI computing brings the brain of artificial intelligence (AI) directly to your devices, making them smarter, faster and more private.
From self-driving cars navigating city streets to smartphones instantly translating foreign languages, AI is increasingly moving out of centralized data centers and onto the devices we use daily. This shift toward “edge AI” represents a significant evolution in how AI is deployed and used, promising faster response times, improved privacy and the ability to operate in environments with limited connectivity.
Edge AI computing brings AI capabilities directly to devices and local networks rather than relying on distant cloud servers. This allows for faster processing, reduced latency and improved privacy since data doesn’t need to travel far from where it’s collected and used.
The impact on commerce could be particularly profound. Retailers are experimenting with AI-powered cameras and sensors to create cashierless stores, where customers can simply pick up items and walk out, with payment processed automatically. Online shopping could become more personalized, with AI-enabled devices offering real-time recommendations based on a user’s behavior and preferences. Smart shelves with embedded AI could dynamically adjust pricing based on demand and inventory levels in brick-and-mortar stores, potentially revolutionizing traditional retail strategies.

The Rise of AI at the Edge

Edge computing isn’t a new concept, but its marriage with AI is opening up possibilities that were once the realm of science fiction. By processing data locally on devices rather than sending it to the cloud, edge AI can reduce latency from seconds to milliseconds, improve privacy by keeping sensitive data on the device and operate in environments with limited or no internet connectivity.
One prominent application is in autonomous vehicles. Tesla’s Full Self-Driving computer, powered by a custom AI chip, can process 2,300 frames per second from the car’s cameras, making split-second decisions crucial for safe navigation. This local processing allows Tesla vehicles to operate even in areas with poor cellular coverage, a critical feature for the widespread adoption of self-driving technology.
In our pockets, smartphones can increasingly run complex AI models locally. This on-device processing speeds up these features and enhances user privacy by keeping personal data off the cloud.
Google’s latest Pixel phone showcases the power of on-device AI with features like Live Translate, which can translate speech in real time without an internet connection. The Pixel’s custom Tensor chip can process natural language at a rate of 600 words per minute, a capability that would have required a server farm just a few years ago.
The true potential of edge AI may lie in its ability to transform entire cities. In Singapore, a network of AI-enabled cameras and sensors is being deployed as part of a “Smart Nation” initiative. These devices can monitor everything from traffic flow to public safety, processing data locally to respond to incidents in real-time while minimizing the transmission of sensitive information.
Despite its potential, the rise of edge AI is not without challenges. Hardware limitations mean edge devices often can’t run the most advanced AI models. This has led to a race among chipmakers to develop more robust, energy-efficient AI processors. Nvidia’s Jetson line of AI computers can deliver up to 275 trillion operations per second while consuming as little as 5 watts of power, making them suitable for a wide range of edge devices.
The proliferation of AI-enabled devices raises questions about surveillance and data ownership. The growing number of decisions AI makes at the edge necessitates increased transparency and accountability in these systems.

The Future of AI at the Edge

The momentum behind edge AI shows no signs of slowing. In healthcare, companies like Medtronic are developing AI-enabled insulin pumps that can monitor blood glucose levels and adjust insulin delivery automatically, potentially revolutionizing diabetes management.
Nvidia’s Clara AGX AI computing platform enables AI-powered medical devices to process high-resolution medical imaging data locally, speeding up diagnoses and improving patient privacy.
In agriculture, John Deere’s See & Spray technology uses onboard AI to distinguish between crops and weeds, allowing for precise herbicide application and potentially reducing chemical use by up to 90%.
Edge AI will continue to evolve, and we can expect to see even more innovative applications emerge. The possibilities are vast, from smart homes that can predict and respond to our needs to industrial equipment that can self-diagnose and prevent failures before they occur.

 
Reactions: 17 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
How LLMs on the Edge Could Help Solve the AI Data Center Problem
Locally run AI systems, known as LLMs on the edge, could help ease the strain on data centers, but it may take some time before this approach goes mainstream.
Drew Robb
September 18, 2024

LLMs on the edge: AI running natively on PCs and smartphones can reduce the strain on data centers. Image: Data Center Knowledge / Alamy

There has been plenty of coverage on the problem AI poses to data center power. One way to ease the strain is through the use of ‘LLMs on the edge’, which enables AI systems to run natively on PCs, tablets, laptops, and smartphones.
The obvious benefits of LLMs on the edge include lowering the cost of LLM training, reduced latency in querying the LLM, enhanced user privacy, and improved reliability.
If they’re able to ease the pressure on data centers by reducing processing power needs, LLMs on the edge could eliminate the need for multi-gigawatt-scale AI data center factories. But is this approach really feasible?

With growing discussions around moving the LLMs that underpin generative AI to the edge, we take a closer look at whether this shift can truly reduce the data center strain.

Smartphones Lead the Way in Edge AI​

Michael Azoff, chief analyst for cloud and data center research practice at Omdia, says the AI-on-the-edge use case that is moving the fastest is lightweight LLMs on smartphones.
Huawei has developed different sizes of its LLM Pangu 5.0 and the smallest version has been integrated with its smartphone operating system, HarmonyOS. Devices running this include the Huawei Mate 30 Pro 5G.
Samsung, meanwhile, has developed Gauss LLM that is used in Samsung Galaxy AI, which operates in its flagship Samsung S24 smartphone. Its AI features include live translation, converting voice to text and summarizing notes, circle to search, and photo and message assistance.

Samsung has also moved into mass production of its LPDDR5X DRAM semiconductors. These 12-nanometer chips process memory workloads directly on the device, enabling the phone’s operating system to work faster with storage devices to more efficiently handle AI workloads.

Smartphone manufacturers are experimenting with LLMs on the edge.
Overall, smartphone manufacturers are working hard to make LLMs smaller. Instead of GPT-3’s 175 billion parameters, they are aiming for models of around two billion parameters.
Intel and AMD are involved in AI at the edge, too. AMD is working on notebook chips capable of running 30 billion-parameter LLMs locally at speed. Similarly, Intel has assembled a partner ecosystem that is hard at work developing the AI PC. These AI-enabled devices may be pricier than regular models. But the markup may not be as high as expected, and it is likely to come down sharply as adoption ramps up.
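To see why those parameter counts matter for on-device use, here is a rough, back-of-the-envelope sketch of the memory needed just to hold a model's weights at different precisions. The parameter counts and bit-widths are illustrative assumptions for the calculation, not vendor specifications.

```python
def weight_memory_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate memory needed just to store the weights
    (ignores activations, KV cache, and runtime overhead)."""
    return num_params * bits_per_weight / 8 / 1e9

for params, label in [(175e9, "175B (GPT-3-class)"), (30e9, "30B"), (2e9, "2B")]:
    for bits in (16, 8, 4):
        print(f"{label:>20} @ {bits}-bit: ~{weight_memory_gb(params, bits):6.1f} GB")
```

Under these assumptions, a two-billion-parameter model at 4-bit precision needs on the order of 1 GB for its weights, versus roughly 350 GB for a 175-billion-parameter model at 16 bits, which is why the ~2B range is the target for phones.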
“The expensive part of AI at the edge is mostly on the training,” Azoff told Data Center Knowledge. “A trained model used in inference mode does ...”

He believes early deployments are likely to be for scenarios where errors and ‘hallucinations’ don't matter so much, and where there is unlikely to be much risk of reputational damage.
Examples include enhanced recommendation engines, AI-powered internet searches, and creating illustrations or designs. Here, users are relied on to detect suspect responses or poorly represented images and designs.

Data Center Implications for LLMs on the Edge​

With data centers preparing for a massive ramp-up in density and power needs to support the growth of AI, what might the LLMs on the edge trend mean for digital infrastructure facilities?
In the foreseeable future, models running on the edge will continue to be trained in the data center. Thus, the heavy traffic currently hitting data centers from AI is unlikely to wane in the short term. But the models being trained within data centers are already changing. Yes, the massive ones from the likes of OpenAI, Google, and Amazon will continue. But smaller, more focused LLMs are in their ascendancy.
“By 2027, more than 50% of the GenAI models that enterprises use will be specific to either an industry or business function – up from approximately 1% in 2023,” Arun Chandrasekaran, an analyst at Gartner, told Data Center Knowledge. “Domain models can be smaller, less computationally intensive, and lower the hallucination risks associated with general-purpose models.”

The development work being done to reduce the size and processing intensity of GenAI will spill over into even more efficient edge LLMs that can run on a range of devices. Once edge LLMs gain momentum, they promise to reduce the amount of AI processing that needs to be done in a centralized data center. It is all a matter of scale.
For now, LLM training largely dominates GenAI as the models are still being created or refined. But imagine hundreds of millions of users using LLMs locally on smartphones and PCs, and the queries having to be processed through large data centers. At scale, that amount of traffic could overwhelm data centers. Thus, the value of LLMs on the edge may not be realized until they enter the mainstream.
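As a rough illustration of what "at scale" means, here is a back-of-the-envelope sketch of aggregate query load. Every number in it (user count, queries per day, peak factor, offload share) is a hypothetical assumption chosen only to show how the arithmetic works, not a figure from the article.

```python
# Hypothetical assumptions, for illustration only.
users = 300e6                  # active users
queries_per_user_per_day = 20
peak_to_average = 3            # traffic concentrates in peak hours
on_device_fraction = 0.8       # share of queries a local model could answer

avg_qps = users * queries_per_user_per_day / 86_400
peak_qps = avg_qps * peak_to_average
offloaded_qps = peak_qps * on_device_fraction

print(f"average load:          {avg_qps:,.0f} queries/s")
print(f"peak load:             {peak_qps:,.0f} queries/s")
print(f"offloaded to devices:  {offloaded_qps:,.0f} queries/s")
```

The point is only that the load scales linearly with users and queries, so pushing a large fraction of inference onto devices changes the data-center picture materially.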

LLMs on the Edge: Security and Privacy​

Anyone interacting with an LLM in the cloud is potentially exposing the organization to privacy questions and the potential for a cybersecurity breach.
As more queries and prompts are being done outside the enterprise, there are going to be questions about who has access to that data. After all, users are asking AI systems all sorts of questions about their health, finances, and businesses.
To do so, these users often enter personally identifiable information (PII), sensitive healthcare data, customer information, or even corporate secrets.
The move toward smaller LLMs that can either be contained within the enterprise data center – and thus not running in the cloud – or that can run on local devices is a way to bypass many of the ongoing security and privacy concerns posed by broad usage of LLMs such as ChatGPT.

“Security and privacy on the edge are really important if you are using AI as your personal assistant, and you're going to be dealing with confidential information, sensitive information that you don't want to be made public,” said Azoff.

Timeline for Edge LLMs​

LLMs on the edge won’t become apparent immediately – except for a few specialized use cases. But the edge trend appears unstoppable.
Forrester’s Infrastructure Hardware Survey revealed that 67% of infrastructure hardware decision-makers in organizations have adopted edge intelligence or were in the process of doing so. About one in three companies will also collect and perform AI analysis of edge environments to empower employees with higher- and faster-value insight.
“Enterprises want to collect relevant input from mobile, IoT, and other devices to provide customers with relevant use-case-driven insights when they request them or need greater value,” said Michele Goetz, a business insights analyst at Forrester Research.
“We should see edge LLMs running on smartphones and laptops in large numbers within two to three years.”
Pruning the models to reach a more manageable number of parameters is one obvious way to make them more feasible on the edge. Further, developers are shifting the GenAI model from the GPU to the CPU, reducing the processing footprint, and building standards for compiling.
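For readers who want to see what pruning looks like in practice, here is a minimal sketch of global magnitude pruning on a toy PyTorch model. It is a generic illustration of the technique under made-up settings, not the specific pipeline any phone or chip maker uses; real deployments combine it with retraining and sparse storage formats.

```python
import torch
import torch.nn as nn

def magnitude_prune_(model: nn.Module, sparsity: float = 0.5) -> None:
    """Zero out the smallest-magnitude weights across all Linear layers."""
    weights = [m.weight for m in model.modules() if isinstance(m, nn.Linear)]
    all_vals = torch.cat([w.detach().abs().flatten() for w in weights])
    threshold = torch.quantile(all_vals, sparsity)
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).float())   # keep only the large weights

# Toy model standing in for one transformer block's feed-forward layers.
model = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512))
magnitude_prune_(model, sparsity=0.5)

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"{zeros / total:.0%} of parameters are now zero")
```

The printout will show roughly half the parameters zeroed; the practical gains come when those zeros are exploited by sparse kernels or a smaller exported model.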
As well as the smartphone applications noted above, the use cases that lead the way will be those that are achievable despite limited connectivity and bandwidth, according to Goetz.
Field engineering and operations in industries such as utilities, mining, and transportation maintenance are already personal device-oriented and ready for LLM augmentation. As there is business value in such edge LLM applications, paying more for an LLM-capable field device or phone is expected to be less of an issue.

Widespread consumer and business use of LLMs on the edge will have to wait until hardware prices come down as adoption ramps up. For example, Apple Vision Pro is mainly deployed in business solutions where the price tag can be justified.
Other use cases on the near horizon include telecom and network management, smart buildings, and factory automation. More advanced use cases for LLMs on the edge – such as immersive retail and autonomous vehicles – will have to wait five years or more, according to Goetz.
“Before we can see LLMs on personal devices flourish, there will be a growth in specialized LLMs for specific industries and business processes,” the analyst said.
“Once these are developed, it is easier to scale them out for adoption because you aren’t training and tuning a model, shrinking it, and deploying it all at the same time.”

 
Reactions: 22 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Might have to do some more digging here!!!! I think this could be very interesting!



NVIDIA and Hitachi Rail (parent company Hitachi) are collaborating to:
allow volumes of data to be processed at the ‘edge’ (on the trains or infrastructure) in real time, with only relevant information sent back to the operational control centers. This enables an unprecedented improvement in the speed that actionable insights reach transport operators, as previously it could take up to ten days for data to be processed in maintenance locations.

BTW, Hitachi Semiconductor and Mitsubishi Electric were the founding partners in Renesas in 2002. Hitachi sold its stake in Renesas earlier this year.



September 23, 2024 23:00 ET | Source: Hitachi Rail Limited









And here's a blog from 2020 that shows Hitachi was working with Intel on neuromorphic hardware.


Neuromorphic Computing For Data and Edge Computing​

By Hubert Yoshida posted 10-28-2020 17:45​







In a previous post, I wrote about Data Centric Computing, the movement to offload data management functions from CPUs to smart NICs and FPGAs, or DPUs (Data Processing Units) as NVIDIA calls them, so that CPUs can focus more of their power on application processing.

Another approach to Data Centric Computing is the use of computational storage, as explained in a post by Stacy Peterson in SearchStorage, where computation is moved closer to storage to reduce the amount of data that moves between storage and compute. This is being driven by the need to reduce latency in IoT and edge devices that are required to handle massive amounts of data. Steve Garbrecht explains how Lumada Edge brings DataOps to the Edge in his post.

Hitachi is also working with Intel in developing neuromorphic hardware to distribute processing across various infrastructure elements which could mean less reliance on centralized systems that require constant high (expensive) bandwidths. Neuromorphic hardware is an electronic device which mimics the natural biological structures of our nervous system. It is an attempt to replicate the cognitive abilities of our brains to process information faster and more efficiently than computers due to the architecture of our neural system.

This sounds a little far out, but in March of this year Intel announced the Pohoiki Springs system, shown here, which comprises about 770 neuromorphic research chips, each with 130,000 neurons, inside a chassis the size of five standard servers. It has a computational capacity of about 100 million neurons, roughly similar to the brain of a mole-rat.



Unlike traditional CPUs, in the Pohoiki Springs system, the memory and computing elements are intertwined rather than separate. That minimizes the distance that data has to travel, because in traditional computing architectures, data has to flow back and forth between memory and computing.

With neuromorphic computing, it is possible to train machine-learning models using a fraction of the data it takes to train them on traditional computing hardware. That means the models learn similarly to the way human babies learn, by seeing an image or toy once and being able to recognize it forever. The models can also learn from the data, nearly instantaneously, ultimately making predictions that could be more accurate than those made by traditional machine-learning models.
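The "see it once and recognize it forever" behaviour is usually achieved with some form of one-shot or few-shot learning. Below is a minimal, library-free sketch of nearest-prototype classification, one common way to get that behaviour; it is a generic illustration, not Intel's Loihi workflow or any particular neuromorphic API, and the feature vectors are invented for the example.

```python
import numpy as np

class PrototypeClassifier:
    """Learn a class from a single example by storing its feature vector as a
    prototype; classify new inputs by the most similar prototype (cosine)."""

    def __init__(self):
        self.prototypes = {}   # label -> unit-length feature vector

    def learn(self, label, features):
        f = np.asarray(features, dtype=float)
        self.prototypes[label] = f / (np.linalg.norm(f) + 1e-9)

    def predict(self, features):
        f = np.asarray(features, dtype=float)
        f = f / (np.linalg.norm(f) + 1e-9)
        return max(self.prototypes, key=lambda lbl: float(f @ self.prototypes[lbl]))

# Hypothetical 4-dimensional feature vectors standing in for sensor embeddings.
clf = PrototypeClassifier()
clf.learn("toy", [0.9, 0.1, 0.0, 0.2])   # one example is enough to add a class
clf.learn("cup", [0.1, 0.8, 0.3, 0.0])
print(clf.predict([0.85, 0.15, 0.05, 0.1]))   # -> "toy"
```

On neuromorphic hardware, a comparable effect is often achieved by binding a new class to newly allocated neurons and their synaptic weights rather than retraining the whole network.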

Last year Hitachi joined the Intel Neuromorphic Research Community (INRC). Hitachi has joined forces with Accenture, Airbus, GE, Intel and other INRC members to create proof-of-concept applications that will bring the most value to their businesses. Intel will leverage the insights that come from this customer-centric research to inform the designs of future processors and systems. These engagements will ensure Intel remains strategically positioned at the forefront of neuromorphic technology commercialization.

Hitachi is unique in the way it combines information technologies (IT) including AI, big data analytics and other digital technologies; operational technologies (OT) for system control and operation; and an extensive range of products. Through its Social Innovation Business, Hitachi is providing digital solutions to help resolve challenges faced by customers and society.

“Intel’s Loihi and Spiking Neural Networks (Loihi is the research chip in the Pohoiki Springs system; each chip includes 130,000 neurons optimized for spiking neural networks) have the potential to recognize and understand the time series data of many high-resolution cameras and sensors quickly,” said Norikatsu Takaura, chief researcher of the Research & Development Group at Hitachi Ltd. “Neuromorphic computing and its technology stack will improve the scalability and flexibility of edge computing systems.”

In order to gain insight into electrical circuits and biological processes, neuromorphic engineers require interdisciplinary knowledge of biology, physics, and math, which plays to the strength of Hitachi’s Social Innovation business. This is a fast-growing area. Analysts forecast the neuromorphic computing market could rise from $69 million in 2024 to $5 billion in 2029 – and $21.3 billion in 2034.

https://community.hitachivantara.co...morphic-computing-for-data-and-edge-computing

 
Reactions: 31 users
LLM at the edge AI in smartphones
Come on BRN
 
Reactions: 4 users

Diogenese

Top 20
Love it @Terroni2105!

NVIDIA's Jetson meets BrainChip's AKIDA 1500.

Can someone please send this to Jensen Huang ASAP?!





View attachment 69733
Hi Bravo,

It seems that EdgeX are using Akida 1500(?) as the low power, low precision wake-up and, I'm guessing, Jetson for the full (software) classification? The 1500 does need an external processor for configuration etc., ie, no integrated ARM Cortex.
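To sketch the kind of two-stage arrangement I'm guessing at, a cheap always-on detector that only wakes a heavier classifier when something interesting happens, here is a minimal, vendor-neutral example. The function names, thresholds, and toy scoring are all hypothetical; this is not BrainChip's or NVIDIA's API.

```python
from typing import Callable, Optional

def cascaded_inference(
    frame,
    wake_detector: Callable[[object], float],     # cheap, always-on stage
    full_classifier: Callable[[object], str],     # heavy stage, run rarely
    wake_threshold: float = 0.6,
) -> Optional[str]:
    """Run the cheap detector on every frame; only invoke the expensive
    classifier when the detector's confidence crosses the threshold."""
    score = wake_detector(frame)
    if score < wake_threshold:
        return None                # stay in low-power mode, nothing to report
    return full_classifier(frame)  # hand off to the heavyweight model

# Hypothetical stand-ins for the two stages.
def toy_wake_detector(frame) -> float:
    return float(sum(frame)) / len(frame)        # pretend "activity" score

def toy_full_classifier(frame) -> str:
    return "pedestrian" if max(frame) > 0.9 else "background"

print(cascaded_inference([0.1, 0.2, 0.1], toy_wake_detector, toy_full_classifier))   # None
print(cascaded_inference([0.8, 0.95, 0.7], toy_wake_detector, toy_full_classifier))  # "pedestrian"
```

The power argument is simply that the cheap stage runs on every frame while the expensive stage runs only on the small fraction that crosses the threshold.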

We know Nvidia has been working with Mercedes.

We know Mercedes is proposing to use software for sensor signal processing.

I reckon they will be using TeNNs/Akida 2 simulation software, presumably running on an Nvidia processor, so Nvidia should be familiar with Akida from their work with Mercedes. TeNNs' light processor load and quasi-real-time performance make it feasible to use in software form.

So, if Nvidia were to be interested in developing an Akida/Nvidia SoC, I think it would be at the edge to start with, as with the Mercedes ADAS processor, for example.

But I don't know whether that would involve Sensor/SNN fusion or SNN/processor fusion, or a hybrid. Presumably they have had the processor/software version working for some time, and this may be the chosen near-term model while the technology continues to develop.

Of course, Akida SoC is updatable to the extent that the models can be upgraded using federated learning, something which PvdM foresaw a decade ago.
 
Reactions: 50 users

Moneytalks

Member
Reactions: 14 users

itsol4605

Regular
"... they will be using TeNNs/Akida 2 simulation software, presumably running on an Nvidia processor..."

Using 'TeNNs/Akida 2 simulation software' free of charge??
 
Reactions: 7 users

Diogenese

Top 20
"... they will be using TeNNs/Akida 2 simulation software, presumably running on an Nvidia processor..."

Using 'TeNNs/Akida 2 simulation software' free of charge??
Just to put that in context, the quote was qualified by "I reckon", ie, it is an informed guess, not an established fact.

As to a software licence, Sean recently revealed there is a new algorithm product line to add to the IP. Again, I reckon commercial use of the simulation software would attract a licence fee for each copy of the software, just as any other commercial software, plus ongoing maintenance fees.
 
Reactions: 33 users
Wonder why they put us at the top of the list? Heheheh!



View attachment 69743






All of a sudden I am getting a very warm surge of blood pounding through my veins.
The more I think about it I get cold sweats

Is it about to blow jimmy????
 
Reactions: 14 users

IloveLamp

Top 20
 
Reactions: 14 users
Steve Brightfield's presso from the recent AI Hardware & Edge AI Summit.

Hadn't noticed if it was posted, but I haven't read every post as I've been a bit busy.

Link to Kiasco, who posted it, or I've scrolled it below for quick reading.


BRN Presso Sept 24.jpg
 
Reactions: 81 users
Reactions: 10 users

Diogenese

Top 20
Steve Brightfield's presso from the recent AI Hardware & Edge AI Summit.

Hadn't noticed if it was posted, but I haven't read every post as I've been a bit busy.

Link to Kiasco, who posted it, or I've scrolled it below for quick reading.


View attachment 69769
That is a very impressive presentation.

I particularly like the penultimate slide:

"New model algorithms in software".

This is a faster way to implementation.
 
Reactions: 41 users

Diogenese

Top 20
That is a very impressive presentation.

I particularly like the penultimate slide:

"New model algorithms in software".

This is a faster way to implementation.
... also "Akida Pico", surely this is for hearing aids.
 
Reactions: 31 users
... also "Akida Pico", surely this is for hearing aids.
Was trying to find anything related to it.

Is it hearing? I've got no idea, but as someone like yourself would know, pico is used in units of measurement to mean very small, though I also see it is an acronym used in evidence-based medicine... could something like this framework be used to guide the creation of models for Akida 2.0 / TeNNs?

Who knows where that came from in the presso?


Evidence Based Medicine: The PICO Framework​


Natural language questions typically do not contain the elements of a clearly formulated question and can lead to difficulty when trying to find an answer.[2,3] A modest investment of time to consider what you need to find out and construct a focused question will yield a more effective and efficient search for evidence, helping you to more quickly locate the best available evidence to inform your decision.

The PICO Framework​

The PICO framework is the most commonly used model for structuring clinical questions because it captures each key element required for a focused question. PICO stands for:
  • Patient or problem
  • Intervention or exposure
  • Comparison or control
  • Outcome(s)
 
Reactions: 20 users

Diogenese

Top 20
OOoo-oooH-ooh!

Obviously you think this means that there is a chance that they're incorporating OUR new model algorithms???

I think I was the first to give this a ❤️

Hopefully that means I snuck in first for a more in-depth description of why it COULD BE US??
I'd like to think that "THEY" are using our software, but who is "THEY"?

My list includes Valeo and Mercedes in their up-coming product releases. Prophesee is another likely user for their high definition DVS (Synsense being adopted for the el cheapo DVS). In fact, all the EAPs will already have been trying out Akida 2/TeNNs software.

What I'd like to know is, who took over the tape-out of Akida 2:

Intel with their 18A gate-all-around process,
Nvidia with their ADAS processor,
ARM, with whom we have a long-standing relationship,
Socionext,
TSMC,
TATA,
MegaChips ... ?

I think it has to be at that sort of level.
 
Reactions: 44 users

Diogenese

Top 20
Was trying to find anything related to it.

Is it hearing? I've got no idea, but as someone like yourself would know, pico is used in units of measurement to mean very small, though I also see it is an acronym used in evidence-based medicine... could something like this framework be used to guide the creation of models for Akida 2.0 / TeNNs?

Who knows where that came from in the presso?


Hi FMF,

Being an engineer, I'm a literal sort of person. I took it to mean "very small", ie, something which could fit in the ear, or on an MCU that fits in the ear. The noise cancelling would be very useful for that. It reduces the processing MACs to less than one tenth compared with known systems, so it would require less processor capacity.
 
Reactions: 34 users