BRN Discussion Ongoing

Boab

I wish I could paint like Vincent
You're not the only one who feels that way..

"Disney" is not the "Entertainment" business they once were and will continue down, if they follow their current "direction" in my opinion.

View attachment 69715

They have destroyed the "Star Wars" franchise they bought, with their "woke" BS..

Much of Hollywood is the same.
Spot on mate.
 
  • Like
  • Thinking
Reactions: 7 users

7für7

Top 20
You're not the only one who feels that way..

"Disney" is not the "Entertainment" business they once were and will continue down, if they follow their current "direction" in my opinion.

They have destroyed the "Star Wars" franchise they bought, with their "woke" BS..

Much of Hollywood is the same.
I agree 100%! From the moment they announced they had bought Star Wars, I imagined something like this! (It's AI generated, you can't find it on the internet.) And I still believe they will bring out something like that soon.
IMG_6445.jpeg
 
  • Like
  • Haha
Reactions: 3 users

manny100

Regular

"About Unigen Cupcake​

Cupcake V3
Cupcake Edge AI Server
Unigen’s Cupcake Edge AI Server delivers a reliable, high-performance, low-latency, low-power platform for Machine Learning and Inference AI in a compact and rugged enclosure. Cupcake integrates a flexible combination of I/O Interfaces and expansion capabilities to capture and process video and multiple types of signals through its Power-Over-Ethernet (POE) ports, and then delivers the processed data to the client either over a wired or wireless network. Neural Networks are supported by the leading ISV providers allowing for a highly customizable solution for multiple applications. Cupcake is a small form factor fanless design in a ruggedized case perfect for environments where Visual Security is important (e.g., secure buildings, transportation, warehouses, or public spaces). External interfaces included are Ethernet, POE, HDMI, USB 3.0, USB Type-C, CANbus, RS232, SDCard, antennas for WIFI, and internal interfaces for optional M.2 SATA III, M.2 NVMe and SO-DIMMs. The flexibility in IO renders the Cupcake Edge AI Server suitable for multiple applications and markets."
From the Unigen website (link below). Note the bolded text above mentions that Neural Networks are supported by ISV providers. Could this be BRN or Renesas, or a licensee? It's fanless - could this be AKIDA? Likely, but I am not sure whether AKIDA was an option or included in all Cupcakes.
 
  • Like
  • Fire
  • Thinking
Reactions: 15 users

7für7

Top 20

"About Unigen Cupcake​

Cupcake V3
Cupcake Edge AI Server
Unigen’s Cupcake Edge AI Server delivers a reliable, high-performance, low-latency, low-power platform for Machine Learning and Inference AI in a compact and rugged enclosure. Cupcake integrates a flexible combination of I/O Interfaces and expansion capabilities to capture and process video and multiple types of signals through its Power-Over-Ethernet (POE) ports, and then delivers the processed data to the client either over a wired or wireless network. Neural Networks are supported by the leading ISV providers allowing for a highly customizable solution for multiple applications. Cupcake is a small form factor fanless design in a ruggedized case perfect for environments where Visual Security is important (e.g., secure buildings, transportation, warehouses, or public spaces). External interfaces included are Ethernet, POE, HDMI, USB 3.0, USB Type-C, CANbus, RS232, SDCard, antennas for WIFI, and internal interfaces for optional M.2 SATA III, M.2 NVMe and SO-DIMMs. The flexibility in IO renders the Cupcake Edge AI Server suitable for multiple applications and markets."
From the Unigen website (link below). Note the bolded text above mentions that Neural Networks are supported by ISV providers. Could this be BRN or Renesas, or a licensee? It's fanless - could this be AKIDA? Likely, but I am not sure whether AKIDA was an option or included in all Cupcakes.
Holy... at first I thought I was seeing BrainChip logos on the device... but when I zoomed in, it turned out to be just Phillips head screws. 😵‍💫🫤
 
  • Haha
  • Like
Reactions: 11 users

Terroni2105

Founding Member
EDGX displaying their work with Akida at the recent SPAICE conference

1727091717566.jpeg

1727091796174.png





 
  • Like
  • Love
  • Fire
Reactions: 70 users

7für7

Top 20
Yes, but why no mention?
 
  • Like
Reactions: 2 users

KMuzza

Mad Scientist
EDGX displaying their work with Akida at the recent SPAICE conference

View attachment 69719
View attachment 69720




Yes, but why no mention?


Well, this is what makes me a LTH.

1727094317017.png

1727094508690.png



But - YES, ONE HAS TO BE HONEST - YOU CANNOT "AKIDA BALLISTA UBQTS" very much @ 16 cents.

Why no ASX announcements on this?
Our time will eventually arrive.
Yes - the technology is great, but the AI world moves so fast!!! (Remember when we were 5 years ahead of everyone?) 😎😎

AKIDA BALLISTA UBQTS
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 22 users

KMuzza

Mad Scientist
Well, this is what makes me a LTH.

View attachment 69723
View attachment 69724


But - YES, ONE HAS TO BE HONEST - YOU CANNOT "AKIDA BALLISTA UBQTS" very much @ 16 cents.

Why no ASX announcements on this?
Our time will eventually arrive.
Yes - the technology is great, but the AI world moves so fast!!! (Remember when we were 5 years ahead of everyone?) 😎😎

AKIDA BALLISTA UBQTS
1727095941803.png
 
  • Like
  • Fire
  • Love
Reactions: 14 users

7für7

Top 20
Well, this is what makes me a LTH.

View attachment 69723
View attachment 69724


But - YES, ONE HAS TO BE HONEST - YOU CANNOT "AKIDA BALLISTA UBQTS" very much @ 16 cents.

Why no ASX announcements on this?
Our time will eventually arrive.
Yes - the technology is great, but the AI world moves so fast!!! (Remember when we were 5 years ahead of everyone?) 😎😎

AKIDA BALLISTA UBQTS
I know there is a picture with an Akida in it. But why didn't he mention BrainChip or Akida in his long LinkedIn post? Why don't they call the child by its name?
 
  • Like
  • Haha
Reactions: 5 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
EDGX displaying their work with Akida at the recent SPAICE conference

View attachment 69719
View attachment 69720






Love it @Terroni2105!

NVIDIA's Jetson meets BrainChip's AKIDA 1500.

Can someone please send this to Jensen Huang ASAP?!





Screenshot 2024-09-24 at 9.37.24 am.png
 
  • Like
  • Love
  • Fire
Reactions: 38 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Building Brain-Inspired Networks for the Future

23 September 2024
Brain Inspired Networks


As artificial intelligence (AI) evolves, it is no surprise that it is drawing inspiration from one of the most sophisticated systems in existence: the human brain. Recent advances in brain-inspired networks are pushing the boundaries of how we think about computing and communication, and they could hold the key to more efficient, scalable, and adaptive systems. These networks mimic the way biological brains process information, enabling the development of machines that can learn, adapt, and perform complex tasks more effectively than traditional models.
At the core of brain-inspired networks is the concept of spiking neural networks (SNNs). Unlike traditional neural networks, which rely on continuous signals, SNNs use discrete, time-dependent spikes to transmit information, similar to how neurons in the brain communicate through electrical impulses. This method of communication is both energy-efficient and fast, making it an ideal model for developing low-power, high-speed computing systems. As explained in a recent study published in Nature Communications, SNNs operate by encoding information in the timing and frequency of spikes, which allows them to perform complex computations with a minimal energy footprint.
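To make the spiking idea above concrete, here is a minimal, hypothetical sketch of a leaky integrate-and-fire (LIF) neuron in Python: input current is integrated over time, the membrane potential slowly leaks away, and a discrete spike is emitted only when a threshold is crossed, so information ends up encoded in spike timing and rate. The parameter values are arbitrary placeholders chosen for readability, not taken from any particular chip or paper.

import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: turns an input current into a spike train."""
    v = 0.0
    spikes = np.zeros_like(input_current)
    for t, i_in in enumerate(input_current):
        v += (dt / tau) * (-v + i_in)   # leaky integration of the input
        if v >= v_thresh:               # threshold crossed -> emit a spike
            spikes[t] = 1.0
            v = v_reset                 # reset the membrane potential
    return spikes

# A sub-threshold input produces few or no spikes, while a stronger input fires
# regularly; spike count and timing carry the information, and silence is free.
rng = np.random.default_rng(0)
weak = lif_neuron(0.8 * np.ones(200) + 0.1 * rng.standard_normal(200))
strong = lif_neuron(1.5 * np.ones(200) + 0.1 * rng.standard_normal(200))
print(int(weak.sum()), "spikes vs", int(strong.sum()), "spikes")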
Additionally, researchers are exploring how to integrate synaptic plasticity—the brain's ability to strengthen or weaken synapses based on experience—into artificial networks. This concept is vital for creating systems that can adapt and improve over time.


Brain-Inspired Design for Sustainable AI
The environmental impact of AI is becoming a growing concern, as data centers and supercomputers consume massive amounts of energy. Neuromorphic computing offers a promising solution to this challenge by significantly reducing the energy consumption of AI systems. Microsoft's research on brain-inspired AI highlights the potential for neuromorphic architectures to deliver more sustainable and energy-efficient technologies.
At Microsoft Research Asia, in collaboration with Fudan University, Shanghai Jiao Tong University, and the Okinawa Institute of Science and Technology, three notable projects are underway. One introduces a neural network that simulates the way the brain learns and computes information (CircuitNet); another enhances the accuracy and efficiency of predictive models for future events (SNN Framework); and a third improves AI’s proficiency in language processing and pattern prediction (CPG-PE).
Current AI systems are incredibly resource-intensive. Training a large AI model can require hundreds of megawatt-hours of electricity, leading to substantial carbon emissions. Neuromorphic systems, by contrast, mimic the brain’s highly efficient processes, consuming only a fraction of the energy required by traditional AI models. This energy efficiency is critical not only for the sustainability of AI but also for expanding its applications in resource-constrained environments, such as mobile devices and embedded systems. These developments make brain-inspired networks a promising avenue for AI that is not only more capable but also more environmentally friendly.
One company at the forefront of neuromorphic computing is Intel, which has introduced the Loihi neuromorphic chip. Intel's Loihi chip mimics the way the human brain processes information, offering significant energy savings compared to traditional AI processors. Intel's Loihi platform focuses on advancing AI by reducing the power needed for real-time, continuous learning, which makes it an ideal solution for energy-efficient AI systems. The company is researching and developing neuromorphic systems that could drastically cut down the environmental footprint of AI technologies in fields like robotics, healthcare, and smart devices.


Applications of Brain-Inspired Networks
The potential applications of brain-inspired networks are vast, ranging from healthcare to autonomous vehicles and beyond. In healthcare, neuromorphic systems could be used to develop advanced diagnostic tools that mimic the decision-making capabilities of human doctors. By processing vast amounts of data from medical records, imaging studies, and genetic information, these systems could provide more accurate diagnoses and treatment recommendations.

Nature Machine Intelligence published a joint paper from researchers at Intel Labs and Cornell University demonstrating the ability of Intel's neuromorphic test chip, Loihi, to learn and recognize 10 hazardous chemicals, even in the presence of significant noise and occlusion. The system employs a neural network to process sensory data in real-time, much like how human olfaction works.
In the field of autonomous vehicles, brain-inspired networks could enable cars to process and respond to complex driving environments in real time, making them more reliable and safer than current models. Traditional AI models struggle with the unpredictability of real-world scenarios, but neuromorphic systems can adapt to these situations on-the-fly. This adaptability is essential for creating truly autonomous systems that can operate safely in dynamic environments.
Inspired by human vision, Prophesee’s technology uses a patented sensor design and AI algorithms that mimic the eye and brain to reveal what was once invisible using standard frame-based technology. Prophesee’s computer vision systems open new potential in areas such as autonomous vehicles, industrial automation, IoT, mobile and AR/VR. One early application was in medical devices that restore vision to the blind.
Moreover, SynSense has raised double-digit millions from two Chinese venture capital firms—Maxvision and RunWoo—in a strategic investment round. The new capital will be used to further develop the DYNAP-CNN2 chip. The chip is designed to provide low-power-consumption support for complex visual applications such as autonomous flying and obstacle avoidance.
Brain-inspired networks are also making strides in the area of robotics. By mimicking the way the human brain controls the body, neuromorphic systems can enable robots to perform complex tasks with greater precision and dexterity. This capability is particularly important in fields such as manufacturing, where robots are increasingly being used to perform delicate and intricate tasks that require fine motor control.
Thanks to the world’s first neuromorphic programmable robot, which SynSense unveiled together with the company, QunYu, at the 22nd China Shantou (Chenghai) International Toy Fair in April 2023, the possibilities for human-robot interaction are expanding. According to a statement, the robot can recognize, visually perceive, and imitate the human body. It is SynSense’s Speck chip that makes this possible. “By waving your arms, the robot can learn your movements and wave its arms in response,” explained Yannan Xing, Senior Algorithm Application Engineer at SynSense.

Innovations in Brain-Inspired Design
One of the most significant challenges in AI is balancing performance with energy efficiency. Brain-inspired systems promise to deliver the best of both worlds, drawing attention from tech giants like Microsoft, which has made strides in integrating neuromorphic architectures into AI. Microsoft's research into brain-inspired AI emphasizes that leveraging the brain's design can create more capable and sustainable technologies. These innovations focus on creating hardware and software that work together similarly to how neurons and synapses collaborate in the brain.
A critical area of innovation in brain-inspired design is the development of hardware architectures capable of supporting neuromorphic systems. While today's AI systems rely heavily on GPUs and traditional processing units, neuromorphic computing demands specialized hardware capable of mimicking the intricate behaviors of biological neurons. As a result, companies and research institutions are working on creating neuromorphic chips, such as IBM's TrueNorth chip, which contains one million neurons and 256 million synapses, and is designed to simulate brain-like operations.
TrueNorth operates through a network of spiking neurons, allowing it to process information in a highly parallel and energy-efficient manner, much like biological neural systems. This innovation represents a significant step forward in neuromorphic computing, offering vast potential for applications requiring real-time decision-making and low-power AI solutions.
The iCub humanoid robot, developed by the Italian Institute of Technology, uses neuromorphic principles to enhance its motor skills and dexterity. The robot is designed to learn and interact with humans in a manner similar to how children learn through exploration. Its neuromorphic architecture helps it perform complex tasks like grasping objects of varying sizes, walking on uneven surfaces, and even mimicking human emotions through facial expressions. The goal of iCub is to develop human-like learning and movement, allowing robots to assist in healthcare, caregiving, or industrial tasks that require delicate handling.
Researchers at Oak Ridge National Laboratory (ORNL) developed a neuromorphic robot for environmental monitoring and exploration. The robot's brain-inspired control system allows it to process data from multiple sensors in real-time, enabling it to autonomously navigate difficult terrains and perform complex tasks, such as sampling soil or collecting environmental data in hazardous areas. The neuromorphic system enables the robot to make quick adjustments based on sensory input, allowing it to perform these tasks with higher precision and minimal power consumption, which is critical for extended field operations.
The SpiNNaker (Spiking Neural Network Architecture) project, developed at the University of Manchester, is a supercomputer designed to mimic the human brain's neural network. Unlike traditional computing systems, SpiNNaker’s architecture allows it to simulate millions of neurons in real-time. The system is being used to model brain disorders like epilepsy, Parkinson’s disease, and Alzheimer’s, helping researchers understand the brain's functioning and simulate treatments with a focus on real-time, energy-efficient processing.
BrainChip, an AI company, developed the Akida neural processor, a neuromorphic chip designed for edge AI applications, enabling smart devices to process information locally without relying on cloud computing. Inspired by the brain’s spiking neural networks, Akida is used in devices that require real-time learning and ultra-low power consumption, such as drones, security cameras, and industrial sensors. Its ability to learn on-site, in real-time, allows it to perform complex tasks like image recognition and anomaly detection with high efficiency and minimal energy use, making it ideal for edge computing applications.
Fujitsu developed the Digital Annealer, a brain-inspired computing platform designed to solve complex optimization problems that traditional computers struggle with. Although it is not a neuromorphic system in the same sense as other examples, its brain-inspired design allows it to handle combinatorial optimization tasks, such as route planning for autonomous vehicles, financial portfolio optimization, and drug discovery.
Pohoiki Springs, built by Intel, is a neuromorphic system combining 768 Loihi chips to create a large-scale, brain-inspired computing platform. It is designed for advanced research in AI, robotics, and autonomous systems. The Pohoiki Springs system can process data more efficiently than conventional supercomputers while using significantly less energy. Researchers use it to develop AI systems that can solve optimization problems, learn autonomously, and adapt in real-time, making it applicable to areas such as robotics control systems, smart cities, and AI-powered healthcare.

 
  • Like
  • Fire
  • Love
Reactions: 41 users

Esq.111

Fascinatingly Intuitive.
Morning Chippers ,

One I prepared earlier , 😃.

Regards,
Esq.
 

Attachments

  • 20230824_063426.jpg
    20230824_063426.jpg
    5.3 MB · Views: 95
  • Haha
  • Like
  • Love
Reactions: 27 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Wonder why they put us at the top of the list? Heheheh!



Screenshot 2024-09-24 at 10.09.35 am.png





Neuromorphic Computing Market to Reach $20.4 Billion by 2031 By Top Research Firm

2024-09-23
3D Technology


Guest Post by Onkar Patil, Information Technology Markets
According to Persistence Market Research, the global neuromorphic computing market is projected to grow from USD 5.4 billion in 2024 to USD 20.4 billion by 2031, with a CAGR of 20.9%, fueled by advancements in hardware and applications in robotics, healthcare, and autonomous vehicles

Introduction: The Rise of Neuromorphic Computing
The global neuromorphic computing market is poised for significant growth, projected to expand from US$5.4 billion in 2024 to US$20.4 billion by 2031, achieving a robust CAGR of 20.9% during the forecast period. Key trends driving this growth include advancements in neuromorphic hardware and a shift beyond traditional AI applications into sectors like robotics, autonomous vehicles, and healthcare diagnostics.
The development of efficient neuromorphic algorithms for processing complex data patterns is also gaining momentum. Consumer electronics are expected to capture a substantial revenue share, with North America leading the market.
Historically, the market has grown at a CAGR of 16.7% from 2018 to 2023, underscoring its rapid evolution and expanding applications.
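As a quick sanity check on the figures quoted above (my own back-of-envelope arithmetic, not part of the report), compounding USD 5.4 billion at 20.9% per year over the seven years from 2024 to 2031 does indeed land at roughly USD 20.4 billion:

# Back-of-envelope check of the quoted market projection (illustrative only).
start_usd_bn = 5.4        # projected 2024 market size, USD billion
cagr = 0.209              # 20.9% compound annual growth rate
years = 2031 - 2024       # forecast horizon in years

end_usd_bn = start_usd_bn * (1 + cagr) ** years
print(f"Implied 2031 market size: ~{end_usd_bn:.1f} billion USD")   # ~20.4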
Understanding Neuromorphic Computing
Neuromorphic computing refers to the design of computer systems inspired by the structure and function of the human brain. Unlike traditional computing systems that rely on binary processing, neuromorphic systems use spiking neural networks to process data in a way that resembles human cognition.
This paradigm shift enables these systems to learn, adapt, and perform complex tasks with a high degree of efficiency.
The Components of Neuromorphic Systems
Neuromorphic systems typically consist of specialized hardware and software designed to emulate neural processes. Key components include:
  • Neurons and Synapses: Basic units of processing, mimicking the biological counterparts in the brain.
  • Spike-Timing Dependent Plasticity (STDP): A learning rule that adjusts the strength of connections based on the timing of neuron spikes (see the sketch after this list).
  • Event-Driven Architecture: Processing is triggered by changes in the environment, allowing for real-time data processing with minimal power consumption.
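To make the STDP item in the list above concrete, here is a minimal, hypothetical pair-based STDP update written in Python: the synapse is strengthened when the presynaptic spike arrives shortly before the postsynaptic spike, and weakened when the order is reversed, with the size of the change decaying exponentially as the spikes get further apart. The learning rates and time constant are illustrative placeholders, not values from any specific neuromorphic platform.

import math

def stdp_weight_change(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: weight change for one pre/post spike-time pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post -> potentiation (strengthen the synapse)
        return a_plus * math.exp(-dt / tau)
    if dt < 0:    # post fires before pre -> depression (weaken the synapse)
        return -a_minus * math.exp(dt / tau)
    return 0.0

# A pre-spike at 10 ms followed by a post-spike at 15 ms strengthens the
# connection; the reverse ordering weakens it.
print(stdp_weight_change(10.0, 15.0))   # small positive change
print(stdp_weight_change(15.0, 10.0))   # small negative change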

Factors Driving Market Growth
Several factors are driving the growth of the neuromorphic computing market, each contributing to the technology's increasing adoption across various sectors.
Demand for Energy-Efficient Computing
As data centers and computing systems become increasingly energy-intensive, the need for energy-efficient alternatives is paramount. Neuromorphic computing's ability to perform complex computations with significantly lower power consumption compared to traditional systems makes it an attractive option for organizations looking to reduce their carbon footprint and operational costs.
Advances in Artificial Intelligence and Machine Learning
The rapid advancements in artificial intelligence (AI) and machine learning (ML) are creating a fertile ground for neuromorphic computing. These technologies require sophisticated algorithms capable of processing large amounts of data quickly and accurately.
Neuromorphic systems, with their inherent ability to learn and adapt, are uniquely positioned to enhance AI and ML applications, leading to greater efficiency and effectiveness.
Increasing Investment in Research and Development
The neuromorphic computing sector is witnessing significant investments from both public and private sectors. Governments and organizations are allocating funds to research and development initiatives aimed at exploring the full potential of neuromorphic architectures.
This influx of capital is driving innovation and accelerating the deployment of neuromorphic technologies across various industries.
Key Applications of Neuromorphic Computing
The potential applications of neuromorphic computing are vast and varied, spanning multiple sectors. Here are some of the key areas where this technology is making significant strides:
Robotics and Autonomous Systems
Neuromorphic computing plays a crucial role in enhancing the capabilities of robots and autonomous systems. By enabling machines to process sensory information in real-time, neuromorphic architectures improve decision-making and adaptability, making them more effective in dynamic environments.
Healthcare and Medical Diagnostics
In healthcare, neuromorphic computing is being utilized to enhance medical diagnostics and patient monitoring systems. By processing vast amounts of data from medical devices and imaging systems, neuromorphic technologies can identify patterns and anomalies more quickly, leading to improved patient outcomes and more efficient care delivery.
Smart Devices and the Internet of Things (IoT)
As the IoT continues to expand, the need for intelligent processing solutions becomes increasingly critical. Neuromorphic computing offers a powerful solution for smart devices, allowing them to learn from user interactions and environmental changes.
This capability enhances functionality and provides a more personalized experience for users.
Regional Insights: Where is the Growth Happening?
The neuromorphic computing market is not limited to a specific geographical region; instead, it is experiencing growth across the globe. However, certain regions are emerging as key players in this space.
North America: A Leader in Innovation
North America is at the forefront of neuromorphic computing innovation, driven by significant investment in research and development from both private companies and government agencies. The presence of leading tech companies and research institutions is fostering collaboration and accelerating advancements in neuromorphic technologies.
Europe: A Growing Hub for Research
Europe is also emerging as a crucial player in the neuromorphic computing market. With initiatives such as the Human Brain Project, European researchers are pushing the boundaries of what is possible with neuromorphic systems.
The region's focus on AI and machine learning is further propelling growth in this sector.
Asia-Pacific: The Next Frontier
The Asia-Pacific region is expected to witness substantial growth in the neuromorphic computing market. Countries like China and Japan are investing heavily in AI research and development, positioning themselves as leaders in adopting neuromorphic technologies.
The growing demand for advanced computing solutions in industries such as robotics and healthcare is driving this growth.
Challenges and Considerations
Despite the promising outlook for the neuromorphic computing market, several challenges need to be addressed.
Technical Complexity
The technical complexity of designing and implementing neuromorphic systems presents a significant hurdle for widespread adoption. Organizations may face challenges in integrating these systems with existing infrastructure, requiring substantial investment in training and development.
Standardization and Compatibility
The lack of standardization in neuromorphic architectures can hinder interoperability between different systems. Establishing industry standards is essential to facilitate collaboration and ensure compatibility among various neuromorphic technologies.
Ethical Considerations
As with any advanced technology, neuromorphic computing raises ethical considerations regarding privacy, security, and potential misuse. Addressing these concerns will be critical in building public trust and ensuring responsible deployment of neuromorphic systems.
Key Players:
  • BrainChip Holdings Ltd.
  • Intel Corporation
  • Qualcomm
  • SynSense AG
  • Samsung Electronics Co. Ltd
  • IBM Corporation
  • SK Hynix Inc.
  • General Vision Inc.
  • GrAI Matter Labs
  • Innatera Nanosystems
The Future of Neuromorphic Computing
Looking ahead, the future of neuromorphic computing appears bright. With advancements in hardware and software, combined with increasing investment in research and development, the potential for neuromorphic systems is vast.
As organizations continue to seek more efficient and intelligent solutions, the demand for neuromorphic computing is expected to surge.
Collaboration Between Academia and Industry
To realize the full potential of neuromorphic computing, collaboration between academia and industry will be vital. Researchers can drive innovation while industry partners can facilitate the practical application of these technologies, creating a symbiotic relationship that benefits both sectors.
Continued Investment and Research
Ongoing investment in neuromorphic research will be crucial for addressing the current challenges and unlocking new applications. As organizations recognize the potential benefits of neuromorphic systems, we can expect to see a significant increase in funding and resources dedicated to this field.
Conclusion: A Transformative Force in Computing
The neuromorphic computing market is on the brink of explosive growth, with projections indicating a market size of $20.4 billion by 2031. As this technology continues to evolve, its applications across various sectors will expand, driving innovation and transforming the way we process information.
Embracing neuromorphic computing will not only enhance efficiency but also pave the way for a more intelligent and adaptive future.


 

Attachments

  • 1727136551013.png
    1727136551013.png
    71 bytes · Views: 29
  • Like
  • Love
  • Fire
Reactions: 44 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

AI Explained: Edge AI Computing Brings Tech to Devices

By PYMNTS | September 23, 2024
AI edge computing


Edge AI computing brings the brain of artificial intelligence (AI) directly to your devices, making them smarter, faster and more private.
From self-driving cars navigating city streets to smartphones instantly translating foreign languages, AI is increasingly moving out of centralized data centers and onto the devices we use daily. This shift toward “edge AI” represents a significant evolution in how AI is deployed and used, promising faster response times, improved privacy and the ability to operate in environments with limited connectivity.
Edge AI computing brings AI capabilities directly to devices and local networks rather than relying on distant cloud servers. This allows for faster processing, reduced latency and improved privacy since data doesn’t need to travel far from where it’s collected and used.
The impact on commerce could be particularly profound. Retailers are experimenting with AI-powered cameras and sensors to create cashierless stores, where customers can simply pick up items and walk out, with payment processed automatically. Online shopping could become more personalized, with AI-enabled devices offering real-time recommendations based on a user’s behavior and preferences. Smart shelves with embedded AI could dynamically adjust pricing based on demand and inventory levels in brick-and-mortar stores, potentially revolutionizing traditional retail strategies.

The Rise of AI at the Edge

Edge computing isn’t a new concept, but its marriage with AI is opening up possibilities that were once the realm of science fiction. By processing data locally on devices rather than sending it to the cloud, edge AI can reduce latency from seconds to milliseconds, improve privacy by keeping sensitive data on the device and operate in environments with limited or no internet connectivity.
One prominent application is in autonomous vehicles. Tesla’s Full Self-Driving computer, powered by a custom AI chip, can process 2,300 frames per second from the car’s cameras, making split-second decisions crucial for safe navigation. This local processing allows Tesla vehicles to operate even in areas with poor cellular coverage, a critical feature for the widespread adoption of self-driving technology.
In our pockets, smartphones can increasingly run complex AI models locally. This on-device processing speeds up these features and enhances user privacy by keeping personal data off the cloud.
Google’s latest Pixel phone showcases the power of on-device AI with features like Live Translate, which can translate speech in real time without an internet connection. The Pixel’s custom Tensor chip can process natural language at a rate of 600 words per minute, a capability that would have required a server farm just a few years ago.
The true potential of edge AI may lie in its ability to transform entire cities. In Singapore, a network of AI-enabled cameras and sensors is being deployed as part of a “Smart Nation” initiative. These devices can monitor everything from traffic flow to public safety, processing data locally to respond to incidents in real-time while minimizing the transmission of sensitive information.
Despite its potential, the rise of edge AI is challenging. Hardware limitations mean edge devices often can’t run the most advanced AI models. This has led to a race among chipmakers to develop more robust, energy-efficient AI processors. Nvidia’s Jetson line of AI computers can deliver up to 275 trillion operations per second while consuming as little as 5 watts of power, making them suitable for a wide range of edge devices.
The proliferation of AI-enabled devices raises questions about surveillance and data ownership. The growing number of decisions AI makes at the edge necessitates increased transparency and accountability in these systems.

The Future of AI at the Edge

The momentum behind edge AI shows no signs of slowing. In healthcare, companies like Medtronic are developing AI-enabled insulin pumps that can monitor blood glucose levels and adjust insulin delivery automatically, potentially revolutionizing diabetes management.
Nvidia’s Clara AGX AI computing platform enables AI-powered medical devices to process high-resolution medical imaging data locally, speeding up diagnoses and improving patient privacy.
In agriculture, John Deere’s See & Spray technology uses onboard AI to distinguish between crops and weeds, allowing for precise herbicide application and potentially reducing chemical use by up to 90%.
Edge AI will continue to evolve, and we can expect to see even more innovative applications emerge. The possibilities are vast, from smart homes that can predict and respond to our needs to industrial equipment that can self-diagnose and prevent failures before they occur.

 
  • Like
  • Fire
  • Love
Reactions: 16 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
How LLMs on the Edge Could Help Solve the AI Data Center Problem
Locally run AI systems, known as LLMs on the edge, could help ease the strain on data centers, but it may take some time before this approach goes mainstream.
Drew Robb
September 18, 2024
7 Min Read
LLMs on the edge: AI running natively on PCs and smartphones can reduce the strain on data centers. (Image: Data Center Knowledge / Alamy)

There has been plenty of coverage on the problem AI poses to data center power. One way to ease the strain is through the use of ‘LLMs on the edge’, which enables AI systems to run natively on PCs, tablets, laptops, and smartphones.
The obvious benefits of LLMs on the edge include lowering the cost of LLM training, reduced latency in querying the LLM, enhanced user privacy, and improved reliability.
If they’re able to ease the pressure on data centers by reducing processing power needs, LLMs on the edge could have the potential to eliminate the need for multi-gigawatt-scale AI data center factories. But is this approach really feasible?

With growing discussions around moving the LLMs that underpin generative AI to the edge, we take a closer look at whether this shift can truly reduce the data center strain.

Smartphones Lead the Way in Edge AI

Michael Azoff, chief analyst for cloud and data center research practice at Omdia, says the AI-on-the-edge use case that is moving the fastest is lightweight LLMs on smartphones.
Huawei has developed different sizes of its LLM Pangu 5.0 and the smallest version has been integrated with its smartphone operating system, HarmonyOS. Devices running this include the Huawei Mate 30 Pro 5G.
Samsung, meanwhile, has developed Gauss LLM that is used in Samsung Galaxy AI, which operates in its flagship Samsung S24 smartphone. Its AI features include live translation, converting voice to text and summarizing notes, circle to search, and photo and message assistance.

Samsung has also moved into mass production of its LPDDR5X DRAM semiconductors. These 12-nanometer chips process memory workloads directly on the device, enabling the phone’s operating system to work faster with storage devices to more efficiently handle AI workloads.
LLM-on-the-Edge-1.jpg

Smartphone manufacturers are experimenting with LLMs on the edge.
Overall, smartphone manufacturers are working hard to make LLMs smaller. Instead of the 175 billion parameters of a model like GPT-3, they are trying to reduce them to around two billion parameters.
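For a rough sense of why shrinking the parameter count matters on a phone, here is a hypothetical Python estimate (my own illustration, not from the article) of the memory needed just to hold the weights at different model sizes and numeric precisions: a roughly 2-billion-parameter model with 8-bit weights fits in about 2 GB, whereas a 175-billion-parameter model at 16-bit precision needs hundreds of gigabytes.

def weight_memory_gib(n_params, bytes_per_weight):
    """Approximate memory needed just to store the model weights, in GiB."""
    return n_params * bytes_per_weight / (1024 ** 3)

# Illustrative comparison (weights only; activations and the KV cache are extra).
for name, n_params, bytes_per_weight in [
    ("175B model, fp16", 175e9, 2),   # GPT-3-scale model at 16-bit precision
    ("2B model, fp16",   2e9,   2),   # phone-sized model at 16-bit precision
    ("2B model, int8",   2e9,   1),   # same model with 8-bit quantized weights
]:
    print(f"{name}: ~{weight_memory_gib(n_params, bytes_per_weight):.0f} GiB")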
Intel and AMD are involved in AI at the edge, too. AMD is working on notebook chips capable of running 30 billion-parameter LLMs locally at speed. Similarly, Intel has assembled a partner ecosystem that is hard at work developing the AI PC. These AI-enabled devices may be pricier than regular models. But the markup may not be as high as expected, and it is likely to come down sharply as adoption ramps up.
“The expensive part of AI at the edge is mostly on the training,” Azoff told Data Center Knowledge. “A trained model used in inference mode does

He believes early deployments are likely to be for scenarios where errors and ‘hallucinations’ don't matter so much, and where there is unlikely to be much risk of reputational damage.
Examples include enhanced recommendation engines, AI-powered internet searches, and creating illustrations or designs. Here, users are relied on to detect suspect responses or poorly represented images and designs.

Data Center Implications for LLMs on the Edge

With data centers preparing for a massive ramp-up in density and power needs to support the growth of AI, what might the LLMs on the edge trend mean for digital infrastructure facilities?
In the foreseeable future, models running on the edge will continue to be trained in the data center. Thus, the heavy traffic currently hitting data centers from AI is unlikely to wane in the short term. But the models being trained within data centers are already changing. Yes, the massive ones from the likes of OpenAI, Google, and Amazon will continue. But smaller, more focused LLMs are in their ascendency.
“By 2027, more than 50% of the GenAI models that enterprises use will be specific to either an industry or business function – up from approximately 1% in 2023,” Arun Chandrasekaran, an analyst at Gartner, told Data Center Knowledge. “Domain models can be smaller, less computationally intensive, and lower the hallucination risks associated with general-purpose models.”

The development work being done to reduce the size and processing intensity of GenAI will spill over into even more efficient edge LLMs that can run on a range of devices. Once edge LLMs gain momentum, they promise to reduce the amount of AI processing that needs to be done in a centralized data center. It is all a matter of scale.
For now, LLM training largely dominates GenAI as the models are still being created or refined. But imagine hundreds of millions of users using LLMs locally on smartphones and PCs, and the queries having to be processed through large data centers. At scale, that amount of traffic could overwhelm data centers. Thus, the value of LLMs on the edge may not be realized until they enter the mainstream.

LLMs on the Edge: Security and Privacy

Anyone interacting with an LLM in the cloud is potentially exposing the organization to privacy questions and the potential for a cybersecurity breach.
As more queries and prompts are being done outside the enterprise, there are going to be questions about who has access to that data. After all, users are asking AI systems all sorts of questions about their health, finances, and businesses.
To do so, these users often enter personally identifiable information (PII), sensitive healthcare data, customer information, or even corporate secrets.
The move toward smaller LLMs that can either be contained within the enterprise data center – and thus not running in the cloud – or that can run on local devices is a way to bypass many of the ongoing security and privacy concerns posed by broad usage of LLMs such as ChatGPT.

“Security and privacy on the edge are really important if you are using AI as your personal assistant, and you're going to be dealing with confidential information, sensitive information that you don't want to be made public,” said Azoff.

Timeline for Edge LLMs

LLMs on the edge won’t become apparent immediately – except for a few specialized use cases. But the edge trend appears unstoppable.
Forrester’s Infrastructure Hardware Survey revealed that 67% of infrastructure hardware decision-makers in organizations have adopted edge intelligence or were in the process of doing so. About one in three companies will also collect and perform AI analysis of edge environments to empower employees with higher- and faster-value insight.
“Enterprises want to collect relevant input from mobile, IoT, and other devices to provide customers with relevant use-case-driven insights when they request them or need greater value,” said Michele Goetz, a business insights analyst at Forrester Research.
“We should see edge LLMs running on smartphones and laptops in large numbers within two to three years.”
Pruning the models to reach a more manageable number of parameters is one obvious way to make them more feasible on the edge. Further, developers are shifting the GenAI model from the GPU to the CPU, reducing the processing footprint, and building standards for compiling.
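As a toy illustration of the pruning idea mentioned above, the hypothetical Python snippet below applies simple magnitude pruning to a single weight matrix: the smallest-magnitude weights are zeroed out so the layer becomes sparse while the largest connections are kept. Real edge deployments combine pruning with quantization, distillation, and hardware-aware compilation; this sketch only shows the basic mechanism.

import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights until `sparsity` of them are zero."""
    threshold = np.quantile(np.abs(weights), sparsity)   # magnitude cut-off
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

# Example: prune 80% of a random layer's weights and see what survives.
rng = np.random.default_rng(0)
layer = rng.standard_normal((512, 512))
pruned = magnitude_prune(layer, sparsity=0.8)
print(f"non-zero weights kept: {np.count_nonzero(pruned) / pruned.size:.0%}")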
As well as the smartphone applications noted above, the use cases that lead the way will be those that are achievable despite limited connectivity and bandwidth, according to Goetz.
Field engineering and operations in industries such as utilities, mining, and transportation maintenance are already personal device-oriented and ready for LLM augmentation. As there is business value in such edge LLM applications, paying more for an LLM-capable field device or phone is expected to be less of an issue.

Widespread consumer and business use of LLMs on the edge will have to wait until hardware prices come down as adoption ramps up. For example, Apple Vision Pro is mainly deployed in business solutions where the price tag can be justified.
Other use cases on the near horizon include telecom and network management, smart buildings, and factory automation. More advanced use cases for LLMs on the edge – such as immersive retail and autonomous vehicles – will have to wait five years or more, according to Goetz.
“Before we can see LLMs on personal devices flourish, there will be a growth in specialized LLMs for specific industries and business processes,” the analyst said.
“Once these are developed, it is easier to scale them out for adoption because you aren’t training and tuning a model, shrinking it, and deploying it all at the same time.”

 
  • Like
  • Love
Reactions: 20 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Might have to do some more digging here!!!! I think this could be very interesting!



NVIDIA and Hitachi Rail (parent company Hitachi) are collaborating to:
allow volumes of data to be processed at the ‘edge’ (on the trains or infrastructure) in real time, with only relevant information sent back to the operational control centers. This enables an unprecedented improvement in the speed that actionable insights reach transport operators, as previously it could take up to ten days for data to be processed in maintenance locations.

BTW, Hitachi Semiconductor and Mitsubishi Electric were the founding partners in Renesas in 2002. Hitachi sold its stake in Renesas earlier this year.



September 23, 2024 23:00 ET | Source: Hitachi Rail Limited
Screenshot 2024-09-24 at 1.39.17 pm.png









And here's a blog from 2020 that shows Hitachi was working with Intel on neuromorphic hardware.


Neuromorphic Computing For Data and Edge Computing

By Hubert Yoshida posted 10-28-2020 17:45






0EM2S000002XOkw.png

In a previous post, I have written about Data Centric Computing, the movement to offload data management functions from CPUs to smart NICs and FPGAs, or DPUs (Data Processing Units) as NVIDIA calls them, so that the CPUs could focus more of their power on application processing.

Another approach to Data Centric Computing is the use of computational storage, as explained in a post by Stacy Peterson in SearchStorage, where computation is moved closer to storage to reduce the amount of data that moves between storage and compute. This is being driven by the need to reduce latency in IoT and edge devices that are required to handle massive amounts of data. Steve Garbrecht explains how Lumada Edge brings DataOps to the Edge in his post.

Hitachi is also working with Intel in developing neuromorphic hardware to distribute processing across various infrastructure elements which could mean less reliance on centralized systems that require constant high (expensive) bandwidths. Neuromorphic hardware is an electronic device which mimics the natural biological structures of our nervous system. It is an attempt to replicate the cognitive abilities of our brains to process information faster and more efficiently than computers due to the architecture of our neural system.

This sounds a little far out, but in March of this year, Intel announced the Pohoiki Springs system, shown here, which comprises about 770 neuromorphic research chips, each with 130,000 neurons, inside a chassis the size of five standard servers. It has a computational capacity of about 100 million neurons, roughly similar to the brain of a mole-rat.

0EM2S000002XNGl.png


Unlike traditional CPUs, in the Pohoiki Springs system, the memory and computing elements are intertwined rather than separate. That minimizes the distance that data has to travel, because in traditional computing architectures, data has to flow back and forth between memory and computing.

With neuromorphic computing, it is possible to train machine-learning models using a fraction of the data it takes to train them on traditional computing hardware. That means the models learn similarly to the way human babies learn, by seeing an image or toy once and being able to recognize it forever. The models can also learn from the data, nearly instantaneously, ultimately making predictions that could be more accurate than those made by traditional machine-learning models.
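As a loose illustration of the "see it once, recognize it later" behaviour described above, here is a hypothetical nearest-prototype classifier in Python: each class is represented by a single stored example (its feature vector), and new inputs are assigned to whichever stored prototype they are closest to. This is a conventional-software stand-in for the idea, not how Loihi or any other neuromorphic chip actually implements one-shot learning.

import numpy as np

class OneShotPrototypeClassifier:
    """Stores one feature vector per class; classifies by nearest prototype."""

    def __init__(self):
        self.prototypes = {}           # label -> stored feature vector

    def learn_once(self, label, features):
        # A single example is enough to "know" the class from now on.
        self.prototypes[label] = np.asarray(features, dtype=float)

    def predict(self, features):
        x = np.asarray(features, dtype=float)
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(self.prototypes[lbl] - x))

# Example: learn "toy" and "cup" from one sample each, then recognize a noisy toy.
clf = OneShotPrototypeClassifier()
clf.learn_once("toy", [1.0, 0.2, 0.1])
clf.learn_once("cup", [0.1, 0.9, 0.8])
print(clf.predict([0.9, 0.25, 0.15]))  # -> "toy"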

Last year Hitachi joined the Intel Neuromorphic Research Community (INRC). Hitachi has joined forces with Accenture, Airbus, GE, Intel and other INRC members to create proof-of-concept applications that will bring the most value to their businesses. Intel will leverage the insights that come from this customer-centric research to inform the designs of future processors and systems. These engagements will ensure Intel remains strategically positioned at the forefront of neuromorphic technology commercialization.

Hitachi is unique in the way it combines information technologies (IT) including AI, big data analytics and other digital technologies; operational technologies (OT) for system control and operation; and an extensive range of products. Through its Social Innovation Business, Hitachi is providing digital solutions to help resolve challenges faced by customers and society.

“Intel’s Loihi and Spiking Neural Networks (Loihi is the research chip in the Pohoiki Springs System which includes 130,000 neurons optimized for spiking neural network) have the potential to recognize and understand the time series data of many high-resolution cameras and sensors quickly,” said Norikatsu Takaura, chief researcher of the Research & Development Group at Hitachi Ltd. “Neuromorphic computing and its technology stack will improve the scalability and flexibility of edge computing systems.”

In order to gain insight into electrical circuits and biological processes, neuromorphic engineers require interdisciplinary knowledge of biology, physics, math, which plays to the strength of Hitachi’s Social Innovation business. This is a fast growing area. Analysts forecast the neuromorphic computing market could rise from $69 million in 2024 to $5 billion in 2029 – and $21.3 billion in 2034.

https://community.hitachivantara.co...morphic-computing-for-data-and-edge-computing

Screenshot 2024-09-24 at 1.46.04 pm.png
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 29 users
LLMs at the edge, AI in smartphones.
Come on, BRN!
 
  • Like
  • Fire
Reactions: 4 users