BRN Discussion Ongoing

Tothemoon24

Top 20
July 18, 2024

Tata Technologies partners with Arm to drive innovation in software-defined vehicles (SDVs)​


Press release​

• Tata Technologies has signed a Memorandum of Understanding (MoU) with Arm to develop automotive software and systems solutions for software-defined vehicles (SDVs).
• Tata Technologies will work with Arm to enable its software solutions on the Arm Automotive Enhanced (AE) portfolio, including Arm Compute Subsystems (CSS) for Automotive, to accelerate the development timelines of high-performance vehicle computing systems.
• Both companies will work together to develop a wide range of solutions across areas, such as software platforms leveraging SOAFEE reference architecture virtual platforms and demonstrating a chip-to-cloud software stack for SDVs.
Pune, Mumbai, Bengaluru, Chennai, Delhi, Kolkata, Kochi, India, 18th July 2024: Tata Technologies, a leading global engineering and product development digital services company, has announced a strategic partnership with Arm aimed at driving innovation in software-defined vehicles (SDVs). Combining Tata Technologies’ rich automotive domain expertise and software capabilities with high-performance, power-efficient Arm® Automotive Enhanced (AE) technologies, this partnership strives to reduce the development time of SDVs for automotive OEMs.
The automotive industry is transforming towards SDVs, driven by the growing demand for connected, autonomous, and electric vehicles. The evolution of SDVs demands sophisticated software seamlessly integrating with hardware to enhance functionality, safety, and user experiences. As part of this strategic partnership, Tata Technologies will develop a SOAFEE reference architecture stack using the Arm AE portfolio and Arm Compute Subsystems (CSS) for Automotive, along with enabling a cloud-native development framework integrating a variety of DevSecOps and virtual platform solutions to shift-left the development of SDVs, accelerating the time to market for automakers.
This partnership builds on the momentum from CES 2024 and Mobile World Congress 2024, where Tata Technologies and Arm jointly demonstrated a cloud-native reference software architecture for SDVs on Arm SoCs. These solutions were presented at Embedded World 2024 on the newly launched Arm Cortex®-A720AE in a virtualised environment, realising a shift-left strategy for safety-critical vehicle software running on heterogeneous computing systems.
Speaking on the partnership, Warren Harris, CEO & Managing Director of Tata Technologies, said, “We are excited about this collaboration with Arm, which underscores Tata Technologies’ commitment to engineering a better world by enabling the automotive industry to realise connected, autonomous and sustainable products that deliver great customer experience. As a strategic partner of Arm, we are developing innovative solutions leveraging their advanced Arm AE technology, and we expect this collaboration to deliver significant time-to-market benefits for the whole automotive industry. We are optimistic about the future of our partnership and the transformative impact it will have on shaping the future of mobility.”
Dipti Vachani, senior vice president and general manager, Automotive Line of Business, Arm, commented on the collaboration: “Vehicle electronics are becoming increasingly complex with the need for more AI and software to improve user experiences and advance autonomy. This partnership combines the high-performance, power-efficient and functional safety leadership of the Arm AE technology platform and the time-to-market advantages of our CSS for Automotive with the automotive software expertise from Tata Technologies to empower our mutual customers to accelerate the development of AI-enabled vehicles.”
The collaboration signals a promising future in developing and deploying cloud-native solutions for future next-gen vehicles. With 25 years of expertise in product engineering and digital services, along with a proven track record in delivering engineering solutions to the automotive industry, Tata Technologies is well-positioned to meet the needs of SDVs. Moreover, it will enable rapid prototyping, testing, and deployment of SDV technologies, unlocking new opportunities for developers and accelerating the time to market for leading OEMs.
 
  • Like
  • Love
  • Thinking
Reactions: 53 users

IloveLamp

Top 20
Ultra-personalised, recognises individuals' voices, runs completely on the CPU... no GPU required


 
Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 19 users

Shadow59

Regular
I'm calling Green Friday!
Hope I haven't jinxed it!
 
  • Fire
  • Like
  • Wow
Reactions: 8 users

GDJR69

Regular
I'm calling Green Friday!
Hope I haven't jinxed it!
I don't think so, Nasdaq slumped 1% overnight, never a good sign for Australian tech stocks.
 
  • Like
Reactions: 1 users

MDhere

Regular
I'm calling Green Friday!
Hope I haven't jinxed it!
My crystal ball says at least .26, though it's been on the blink of late. Time to shine it up and give it a whirl.
 
  • Like
  • Haha
  • Fire
Reactions: 9 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers ,


Rotation trade takes small caps from dead money to Wall Street darlings​

By Lewis Krauskopf
July 18, 2024, 2:34 PM GMT+9:30 · Updated 18 hours ago



A Wall St. street sign is seen near the New York Stock Exchange in New York City, U.S., September 17, 2019. REUTERS/Brendan McDermid/File Photo
NEW YORK, July 18 (Reuters) - U.S. small-cap stocks are having a long-awaited moment, ignited by expectations of interest rate cuts and improving prospects for the election of Republican presidential candidate Donald Trump, a proponent of policies seen as benefiting smaller domestic companies.
The small-company-focused Russell 2000 (.RUT) surged more than 11.5% over five days, the index's biggest gain in such a stretch since April 2020.
At the same time, tech and growth stocks have wobbled, reinforcing the view that small caps have benefited from a rotation out of this year’s biggest winners into unloved areas of the market. The tech-heavy Nasdaq 100 (.NDX) is down 3% since last week, including its biggest one-day drop of the year on Wednesday. The S&P 500 (.SPX), generally considered the benchmark for large-cap U.S. stocks, is up 0.2%.
"I think the narrative has changed," said Eric Kuby, chief investment officer at North Star Investment Management Corp, which specializes in small-cap stocks. "I'm hoping ... this jump over the last week is really just the beginning of what could be a very long, multi-year period of time where small caps could make up a lot of ground."
For months, shares of smaller companies have languished while investors poured money into the massive tech stocks that have led indexes for most of 2024. The Russell 2000 is up only 10.5% this year despite the recent surge, while the S&P 500 has gained 17% and the Nasdaq 100 is up nearly 18%.
[Reuters Graphics: performance comparison over three, five and even ten years]
The outlook shifted last week, when a softer-than-expected inflation reading boosted expectations the Federal Reserve will cut rates in coming months, a potential boon to smaller companies suffering from elevated borrowing costs.
Higher rates have been a "headwind to small caps," said Jason Swiatek, head of small- and mid-cap equity at Jennison Associates. "On the flip side, as you switch to a rate-cutting cycle, that alleviates a bit of that pressure."

The rally accelerated after a failed assassination attempt over the weekend appeared to increase expectations of a victory by Trump, whose proposals to raise tariffs and lower taxes could benefit smaller companies.
Among the small-cap stocks that have surged since the inflation data last week are biotech firm Caribou Biosciences (CRBU.O), up 55% in that time, homebuilder Hovnanian Enterprises (HOV.N), up over 30%, and insurer Hippo Holdings (HIPO.N), up over 29%.

An extended rotation out of tech - whose run has sparked concerns over stretched valuations and drawn comparisons to the dotcom bubble two decades ago - could fuel further small-cap strength.
The Russell 2000 last had a total market value of $2.7 trillion, according to LSEG data. That's smaller than the individual market values of three stocks, Microsoft (MSFT.O), Apple (AAPL.O) and Nvidia (NVDA.O), with market caps each over $2.9 trillion.
As money flows "come out of the megacap stocks and they look for a new home, it doesn't take much to get the smaller stocks going," said Peter Tuz, president of Chase Investment Counsel.
History shows that a sharp rally by small caps bodes well for their near-term performance. The Russell 2000 gained at least 1% in five straight sessions over the past week, which has only happened four times before, according to Bespoke Investment Group. Following those prior streaks, the index posted an average gain of 5.9% over the next month, according to Bespoke.
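For anyone who wants to run a screen like Bespoke's themselves, here is a rough sketch of how counting "five straight sessions of at least +1%" could be implemented. The sample return series below is invented for illustration, not actual Russell 2000 data.

```python
# Hedged sketch: count non-overlapping runs of `length` consecutive trading
# days that each gained at least `min_gain` percent, the kind of streak
# screen described in the article.
def count_streak_starts(daily_returns_pct, min_gain=1.0, length=5):
    count, run = 0, 0
    for r in daily_returns_pct:
        run = run + 1 if r >= min_gain else 0  # extend or reset the run
        if run == length:
            count += 1
            run = 0  # non-overlapping: restart counting after a full streak
    return count

# Made-up daily percentage returns containing two qualifying streaks.
sample = [0.5, 1.2, 1.1, 1.4, 1.0, 1.6, -0.3, 2.0, 1.1, 1.2, 1.3, 1.5]
print(count_streak_starts(sample))  # → 2
```

Running the same function over a real history of index returns would reproduce the "only four prior occurrences" style of statistic quoted above.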
While the S&P 500 has notched record highs all year, the Russell 2000 remains some 8% below its 2021 peak, suggesting small caps may have room to climb.
Retail investors are buying as well. Analysts at Vanda Research said inflows into small caps sparked a “short squeeze,” when a rising price forces bearish investors to unwind bets against a stock, driving it even higher.
“We think there’s scope for retail to continue chasing this trade over the next 1-2 weeks,” they wrote.
Small-cap investors have been disappointed by periods of strength before. Excitement over the prospect of rate cuts sent the Russell 2000 up over 20% between late October and late December of 2023, only for the index to retreat earlier this year when rate cuts did not materialize.
The earnings season now getting underway could provide more justification for small caps, with Russell 2000 companies expected to post an 18% rise in second-quarter earnings, according to LSEG. Megacap growth companies will also have a chance to reclaim the narrative, with heavyweights Tesla (TSLA.O) and Alphabet (GOOGL.O) reporting next week.
Brokerage firm Edward Jones has a "neutral" outlook on small-caps as it waits to see if companies can show stronger profit growth, said Angelo Kourkafas, senior investment strategist at the firm.
To be more optimistic on the group longer-term, he said, "We would need to see more signs that either earnings are coming in much better than expected or that economic activity is starting to pick up.”
Reporting by Lewis Krauskopf; additional reporting by Suzanne McGee; Editing by Ira Iosebashvili and Leslie Adler

Regards ,
Esq.
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 17 users

HopalongPetrovski

I'm Spartacus!
Something tells me I can't wait that long
Another 60 years
Oh damn!
That's just too long to wait


No problemo.
Can't wait for the company to fire up when we'll all have to be dead for "tax purposes". 🤣
 
Last edited:
  • Haha
Reactions: 12 users

Hrdwk

Regular


Any chance we could be involved with Dyson?
 
  • Like
  • Thinking
  • Love
Reactions: 8 users

Shadow59

Regular
I don't think so, Nasdaq slumped 1% overnight, never a good sign for Australian tech stocks.
Brn has been very contrary to the Nasdaq recently. Yesterday it finally broke through resistance with a small increase in volume. Hopefully it is the reversal.
 
  • Like
  • Love
  • Fire
Reactions: 29 users

7für7

Top 20
In the TSE forum, sometimes it feels like being back in school... two hours of philosophy (with questions like "I wonder where we'll be in two years"), then mathematics with price calculations and chart pictures, later history with postings of articles from one to four years ago, then a long break where the ones in the back row are the loudest (bashers), followed by technical classes with endless postings on how technology like drones, headphones, etc. becomes more efficient with AI, and finally sports... when it's red we go swimming, and when it's green everyone jumps up and down... I love it! Makes me feel young again, and sometimes you get really well informed and you also have something to laugh about! 👍
 
  • Haha
  • Like
  • Fire
Reactions: 9 users

MrNick

Regular


The Audio Revolution at the Edge​



In today’s interconnected world, clear and noise-free audio has become more crucial than ever. From wireless earbuds to smart home devices and enterprise communication systems, the demand for high-quality audio processing continues to grow. At BrainChip, we’re excited to introduce our groundbreaking Audio Denoising solution, powered by our innovative Temporal Event Neural Network (TENNs), a platform of use-case-specific algorithms coupled with our Akida™ event-based semiconductor IP that improves performance and power efficiency. TENNs represents a significant leap forward in audio processing, offering unparalleled efficiency and performance for edge computing applications.



The Challenge of Audio Denoising​



Audio denoising, the process of removing unwanted noise from audio signals, has long been a complex challenge in signal processing. Traditional methods often struggle to balance noise reduction while preserving the original signal’s quality. Moreover, today’s audio pre-processing approaches are computationally and energy-inefficient.



Enter TENNs: A New Paradigm in Audio Processing​



BrainChip’s Audio Denoising solution leverages the power of TENNs, a revolutionary approach to neural network architecture that excels in processing sequential and continuous data streams. By combining the principles of state space models and generalized convolution kernels, TENNs offers a highly efficient alternative to traditional transformer models, making it ideal for edge computing audio applications.
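For readers wondering what "state space models" means concretely, here is a minimal sketch of the linear state-space recurrence that this family of architectures builds on. The matrices below are toy values chosen to behave like a simple low-pass filter; they are illustrative only and have nothing to do with BrainChip's actual TENN parameters.

```python
import numpy as np

def ssm_step(x, u, A, B):
    """Advance the hidden state one timestep: x' = A @ x + B * u."""
    return A @ x + B * u

def run_ssm(signal, A, B, C):
    """Stream a 1-D signal through the recurrence, reading out y_t = C @ x_t."""
    x = np.zeros(A.shape[0])
    outputs = []
    for u in signal:
        x = ssm_step(x, u, A, B)
        outputs.append(C @ x)
    return np.array(outputs)

# Toy 2-state system: a cascade of two leaky integrators (low-pass behaviour).
A = np.array([[0.9, 0.0],
              [0.1, 0.8]])
B = np.array([1.0, 0.0])
C = np.array([0.0, 1.0])

# Feed a constant input; the output rises smoothly toward its steady state.
y = run_ssm(np.ones(50), A, B, C)
```

The appeal for streaming audio is that each timestep costs a fixed, small amount of work and state, unlike a transformer's attention over a growing context window.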



Key Features and Benefits​



Scalable TENNs Models: Our medium-sized model achieves an impressive Perceptual Evaluation of Speech Quality (PESQ) score of 3.36 with just 590,000 parameters. This demonstrates TENN's ability to deliver excellent noise reduction while maintaining audio quality. The TENN architecture allows for easy scaling to smaller or larger models, adapting to specific customer needs without compromising performance.

Unmatched Efficiency: TENN models require fewer parameters and multiply-accumulate operations (MACs) per sample compared to equivalent CNN-based models. In fact, our Audio Denoising solution uses 12 times fewer MACs and nearly 3 times fewer parameters than state-of-the-art networks, while providing comparable performance. This translates to significantly lower power consumption and reduced area requirements when designing System-on-Chip (SoC) solutions.

Hardware IP Integration: Our product includes Hardware IP, enabling companies to seamlessly incorporate BrainChip AI acceleration into their SoC designs. This integration ensures optimal performance and efficiency for audio denoising tasks, making it ideal for a wide range of edge devices.

Versatile Applications: From in-ear wireless devices to fixed audio equipment and VoIP systems, our audio denoising solution caters to a broad spectrum of applications. It can be implemented as a standalone feature or integrated into a pipeline feeding speech recognition or keyword spotting systems, which BrainChip is also developing.

Customizable Solutions: With our TENNs license, customers can fine-tune models to their specific requirements, ensuring the best possible performance for their unique audio environments and use cases.



Real-World Applications and Performance​



Our Audio Denoising solution powered by TENNs has demonstrated impressive results across various applications:

Enhanced Speech Clarity: Improve the quality of voice calls and audio recordings by removing background noise and focusing on the speaker’s voice. This is particularly valuable for mobile devices and hearing aids.

Improved VoIP Communication: Elevate the experience of video conferencing and online meetings with clearer, noise-free audio, essential in today’s remote work environment.

Smart Home Devices: Enhance the accuracy of voice-controlled smart home systems by providing cleaner audio input, improving user experience and device functionality.

Industrial Monitoring: Improve the quality of audio data in industrial settings, enabling more accurate analysis and predictive maintenance.

Medical Devices: Vital sign estimation, allowing for more accurate and power-efficient monitoring of human health.

When compared to state-of-the-art networks for speech enhancement through denoising, our TENNs-based solution achieved:

– Comparable PESQ scores (3.36 for the medium model)
– Approximately 12 times fewer MACs (multiply-accumulate operations)
– Nearly 3 times fewer parameters

These results highlight the exceptional efficiency of our TENNs Audio Denoising solution, making it ideal for edge computing applications where power consumption and computational resources are limited. Also note that the ability to execute on raw data eliminates the need for expensive pre-processing.
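As a back-of-envelope sanity check on those ratios, the quoted figures can be combined as below. The "baseline" numbers are implied by the stated ratios, not taken from any published network.

```python
# Figures quoted in the post: 590k parameters for the medium TENN model,
# "nearly 3 times fewer parameters" and "12 times fewer MACs" than
# state-of-the-art networks. The baseline is derived from those ratios.
tenn_params = 590_000
param_ratio = 3    # "nearly 3x fewer parameters"
mac_ratio = 12     # "12x fewer MACs"

baseline_params = tenn_params * param_ratio
print(f"Implied baseline parameters: ~{baseline_params:,}")
print(f"Parameter savings: {1 - 1/param_ratio:.0%}")       # roughly two thirds
print(f"MAC savings per sample: {1 - 1/mac_ratio:.1%}")    # over nine tenths
```

In other words, at equal output quality the claimed comparison point would be a network of roughly 1.8 million parameters doing about twelve times the arithmetic per audio sample.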



Implementation and Future Directions​



The implementation of our Audio Denoising offering within BrainChip’s hardware, specifically in the Akida 2.0 IP, showcases a significant advancement in hardware-accelerated AI for audio processing. Akida 2.0’s architecture is designed to fully exploit TENN's capabilities, featuring a mesh network of nodes, each equipped with an event-based TENN processing unit.
Looking ahead, we plan to continue refining our Audio Denoising capabilities, focusing on:

Enhancing activation sparsity to further improve efficiency.

Exploring more of the polynomial space to increase model flexibility.

Developing integrated solutions that combine audio denoising with speech recognition and keyword spotting.



A New Era of Crystal-Clear Audio​



BrainChip’s TENNs-powered Audio Denoising solution marks a significant milestone in the evolution of audio processing technology. By addressing key challenges related to power consumption and computational efficiency, we’re paving the way for a new generation of edge devices capable of delivering crystal-clear audio in any environment. Our Audio Denoising product is just the beginning of what’s possible with TENNs technology, and we’re excited to see how our partners and customers will leverage this powerful algorithm platform to create the next generation of audio-enabled devices. To learn more about how BrainChip’s Audio Denoising solution can elevate your products, contact us for a demonstration or to discuss your specific audio processing needs.
Cochlear would be the first to contact with a view to collaborating, perhaps. Either way, cracking news.
 
Last edited:
  • Like
  • Fire
Reactions: 13 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

DARPA slaps down credit card for 3D military chiplets – $840M ought to be enough?​


UT-Austin lab gets the job, and five years to do it​

Matthew Connatser
Thu 18 Jul 2024 // 23:26 UTC

The Pentagon's boffinry nerve center DARPA has doled out $840 million to develop next-generation semiconductor microsystems for America's military.
The recipient of the cash is the Texas Institute for Electronics (TIE), an org founded in 2021, housed at the University of Texas-Austin, and operating as a consortium comprising state and local governments in the Lone Star State, chip firms, and academic institutions.
TIE’s research focuses on heterogeneous integration technology, better known as chiplets – individual silicon dies that are packaged together into complete chips. Processors from AMD and others famously use this approach: A modern AMD Ryzen or Epyc part, for instance, includes a collection of dies that each house clusters of CPU cores and IO circuitry.
Owing to TIE's experience in this area of semiconductor R&D, DARPA has selected the group to develop 3D heterogeneous integration (3DHI) tech, an approach that involves stacking layers of silicon dies on top of each other rather than side-by-side in a chip package. The funding is part of DARPA’s Next Generation Microelectronics Manufacturing (NGMM) program.

The project will take five years to complete, split evenly into two parts. The first phase involves the construction of a manufacturing center which will be used to create 3DHI microsystem prototypes for the Department of Defense (DoD). TIE's industry partners include AMD, Applied Materials, Global Foundries, Intel, Micron, and many others.

As we've alluded to, this isn't totally new and novel tech; chiplets and stacks of dies are being used in some shape or form in today's PC and server microprocessors and GPUs. Crucially, the goal of NGMM is to give the US Dept of Defense "higher performance, lower power, light weight and compact defense systems" for things like "radar, satellite imaging, [and] unmanned aerial vehicles."
I.e., kick this consumer- and business-grade technology up a gear for the military.
As such, the total budget for the project is about $1.4 billion, $840 million of which is from DARPA and $552 million from Texas itself.

 
  • Like
  • Fire
  • Love
Reactions: 18 users
Without any news or improvement in share price, the mood (and posts grasping at straws) on this forum...



or for those who prefer a more updated version;

 
Last edited:
  • Like
  • Love
Reactions: 2 users

GDJR69

Regular
Without any news or improvement in share price, the mood (and posts grasping at straws) on this forum...



or for those who prefer a more updated version;


But the moment you sell you know there will be a string of stellar announcements one after another.
 
  • Haha
  • Like
  • Fire
Reactions: 13 users


Kozikan

Regular

Customizable Solutions: With our TENNs LICENSE 👍👍👍👍👍 customers can fine-tune models to their specific requirements, ensuring the best possible performance for their unique audio environments and use cases.
 
  • Like
  • Fire
Reactions: 16 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
13 July 2024

Can Neuromorphic Intelligence Bring Robots to Life?​

The potential of brain-inspired computing to transform autonomous agents and their interactions with the environment​


In the fast-paced world of robotics and artificial intelligence, creating machines that can seamlessly interact with their environment is the holy grail. Imagine robots that not only navigate their surroundings but also learn and adapt in real-time, just as humans do. This dream is inching closer to reality thanks to the field of neuromorphic engineering, a fascinating discipline that is revolutionizing how we think about intelligent systems.


At the heart of this transformation is the concept of embodied neuromorphic intelligence. This approach leverages brain-inspired computing methods to develop robots capable of adaptive, low-power, and efficient interactions with the world. The idea is to mimic the way living organisms process information, enabling robots to perform complex tasks with minimal resources. This novel approach promises to reshape industries, from autonomous vehicles to healthcare and beyond.


Neuromorphic engineering combines principles from neuroscience, electrical engineering, and computer science to create systems that emulate the brain's structure and functionality. Unlike traditional computing, which relies on binary logic and clock-driven operations, neuromorphic systems use spiking neural networks (SNNs) that communicate through electrical pulses, much like neurons in the human brain. This allows for more efficient processing, especially for tasks involving perception, decision-making, and motor control.
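The spiking behaviour described here can be sketched with the textbook leaky integrate-and-fire (LIF) neuron, the standard building block of SNNs. All constants below are illustrative choices, not tied to any particular neuromorphic chip.

```python
import numpy as np

def lif_simulate(input_current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    The membrane potential leaks toward rest while integrating input;
    crossing the threshold emits a spike (1) and resets the potential.
    """
    v = v_reset
    spikes = []
    for i in input_current:
        v += dt * (-v / tau + i)   # leaky integration of the input current
        if v >= v_thresh:
            spikes.append(1)       # electrical pulse, like a biological neuron
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant drive produces a regular spike train; stronger drive fires faster.
weak = lif_simulate(np.full(100, 0.12))
strong = lif_simulate(np.full(100, 0.30))
```

Note that between spikes the neuron produces no output at all, which is exactly the event-driven sparsity that lets neuromorphic hardware skip work and save power compared to clock-driven designs.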


The journey towards neuromorphic intelligence has been fueled by significant advancements in both hardware and software. Researchers have developed specialized neuromorphic chips that can execute complex neural algorithms with remarkable efficiency. These chips, combined with sophisticated algorithms, allow robots to process sensory inputs and generate appropriate responses in real-time. For instance, a robot equipped with neuromorphic vision can detect and react to changes in its environment almost instantaneously, making it ideal for dynamic and unpredictable settings.


One of the key challenges in neuromorphic engineering is to integrate neuromorphic perception with motor control effectively. To achieve this, researchers have drawn inspiration from the human nervous system, where sensory inputs are continuously processed and used to guide actions. By mimicking this process, neuromorphic systems can generate more coordinated and adaptive behaviors. For example, a neuromorphic robot can use information from its visual sensors to adjust its movements, allowing it to navigate complex environments with ease.


A recent study published in Nature Communications highlights the potential of neuromorphic intelligence to transform robotics. The research, led by Chiara Bartolozzi and her team, explores how neuromorphic circuits and sensorimotor architectures can endow robots with the ability to learn, adapt, and make decisions autonomously. The study presents several proof-of-concept applications, demonstrating the feasibility of this approach in real-world scenarios.


One of the standout examples in the study is the development of a neuromorphic robotic arm. This arm, equipped with spiking neural networks, can perform complex tasks such as grasping objects, manipulating tools, and even playing musical instruments. The researchers achieved this by combining neuromorphic sensors, which emulate the human sense of touch, with advanced motor control algorithms. The result is a robotic arm that can adapt to different tasks and environments, showcasing the versatility of neuromorphic intelligence.


The study also delves into the intricacies of neuromorphic perception. Neuromorphic vision sensors, for instance, mimic the retina's ability to detect changes in light and motion. These sensors can capture visual information with high temporal resolution, allowing robots to perceive and respond to their surroundings more effectively. By integrating these sensors with neuromorphic computation, robots can perform tasks ranging from object recognition to navigation with unprecedented efficiency.
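To make the contrast with frame-based cameras concrete, here is a toy sketch of the event idea; the (x, y, t, polarity) tuple format is an assumption modelled loosely on common event-camera conventions, not the sensors used in the study:

```python
# Illustrative event-camera sketch: instead of shipping full frames,
# emit (x, y, t, polarity) events only where intensity changed.
def frames_to_events(prev, curr, t, threshold=0.1):
    """Emit an event for each pixel whose brightness moved > threshold."""
    events = []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            if abs(c - p) > threshold:
                polarity = 1 if c > p else -1  # brighter (+1) or darker (-1)
                events.append((x, y, t, polarity))
    return events

prev = [[0.0, 0.0], [0.5, 0.5]]
curr = [[0.0, 0.8], [0.5, 0.2]]
print(frames_to_events(prev, curr, t=0))  # -> [(1, 0, 0, 1), (1, 1, 0, -1)]
```

Static pixels produce no data at all, which is what gives event sensors their high temporal resolution and low bandwidth compared with cameras that re-transmit every pixel of every frame.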


One of the most exciting aspects of neuromorphic intelligence is its potential to revolutionize human-robot interaction. Traditional robots often struggle to interpret and respond to human cues, such as gestures and facial expressions. Neuromorphic systems, on the other hand, can process these complex signals in real time, enabling more natural and intuitive interactions. This has profound implications for fields like healthcare, where robots could assist patients with daily tasks and provide companionship for the elderly.


Beyond robotics, neuromorphic intelligence holds promise for various applications, including environmental monitoring, smart homes, and autonomous vehicles. For instance, drones equipped with neuromorphic vision can navigate through forests to monitor wildlife or assess the health of crops. In smart homes, neuromorphic sensors can detect and respond to environmental changes, enhancing energy efficiency and security. Autonomous vehicles, with their need for rapid decision-making in complex environments, stand to benefit immensely from neuromorphic computing, potentially leading to safer and more reliable transportation systems.


Despite its tremendous potential, the field of neuromorphic engineering faces several challenges. One of the primary obstacles is the lack of standardized tools and frameworks for developing and integrating neuromorphic systems. Unlike traditional computing, which has a well-established ecosystem of software and hardware tools, neuromorphic engineering is still in its nascent stages. Researchers are working to develop user-friendly platforms that can facilitate the design and deployment of neuromorphic systems, making them accessible to a broader community of engineers and developers.


The study acknowledges these challenges and calls for a collaborative effort to advance the field. It emphasizes the need for modular and reusable components, standard communication protocols, and open-source implementations. By fostering a collaborative ecosystem, the neuromorphic community can accelerate the development of intelligent systems that can seamlessly integrate with existing technologies.


Looking ahead, the future of neuromorphic intelligence is bright, with exciting possibilities on the horizon. Researchers are exploring new materials and technologies that could enhance the performance and scalability of neuromorphic systems. For instance, advancements in memristive devices, which can mimic the synaptic plasticity of the brain, hold promise for creating more efficient and compact neuromorphic circuits. Similarly, the integration of neuromorphic computing with emerging fields like quantum computing and bio-inspired robotics could unlock new frontiers in artificial intelligence.


The journey towards neuromorphic intelligence is an exciting one, filled with challenges and opportunities. As researchers continue to push the boundaries of what is possible, the impact of this field will be felt across various domains, from healthcare to environmental conservation. The dream of creating intelligent machines that can think and act like humans is no longer confined to the realm of science fiction; it is becoming a reality, one breakthrough at a time.


In the words of Chiara Bartolozzi, "The promise of neuromorphic intelligence lies in its ability to combine efficient computation with adaptive behavior, bringing us closer to the goal of creating truly intelligent systems." With ongoing research and collaboration, the future of neuromorphic engineering looks promising, and its potential to transform our world is limitless.






I thought this was also pretty cool! The authors of this research paper thank Dr Chiara Bartolozzi (who is referred to in the above article) for her insightful discussions. This research paper also mentions BrainChip's Akida! 🥳🥳🥳
















 

Bravo

If ARM was an arm, BRN would be its biceps💪!
AI & CLOUD DEVICES QUALCOMM JULY 18, 2024

Feature: Qualcomm makes the case for on-device AI​


BY MICHAEL CARROLL

Durga Malladi, SVP and GM for technology planning and edge solutions with Qualcomm Technologies (pictured), inevitably made the case for AI being placed on devices rather than the cloud during the company’s analyst and media workshop in June, as he argued the current focus on the technology is not another false dawn.
As a chip company, Qualcomm’s argument for AI to be on device is not unexpected: Malladi explained the company is on a “relentless mission” to shift processing from the cloud “towards the edge running directly onto devices”.
He noted the huge leap in computational processing capabilities on today's devices, along with advances in connectivity technology, makes such a shift entirely possible, although he conceded the same high-speed networks which enable on-device AI could equally provide decent cloud-based service, particularly as the number of parameters in the models used begins to range in the billions.

Malladi noted questions of scaling AI on device remain common, despite the “computing horsepower” available.
Running AI on device may be challenging, but the Qualcomm executive argued the pros outweigh the cons of relying on the cloud, citing cost and the growing complexity of tasks the technology is being asked to handle.
Malladi explained the cost of inference “scales exponentially if you run everything solely on the cloud”, stating this would prove problematic in future.

This is no longer a chatbot
DURGA MALLADI – SVP AND GM FOR TECHNOLOGY PLANNING AND EDGE SOLUTIONS, QUALCOMM TECHNOLOGIES
He elaborated by referring to research published by Reuters in 2023 into the level of generative AI processing needed to run a proportion of Google Search queries through the cloud, which showed the cost is "mind boggling". It would offset any gain made by reducing the price of the hardware involved, Malladi said.
“The second thing is that the kinds of applications are getting very rich now. This is no longer a chatbot.”
Services involve more multi-modality, Malladi said, pointing to images, voice and other additions he explained make it “tougher” and harder to scale. Throw in the sheer number of actual users and the numbers involved in “token generation or image processing” become even more overwhelming.
Malladi highlighted environmental concerns associated with the growing demand for cloud computing, citing predictions the amount of power AI will require could amount to 3.5 per cent of the world’s total energy consumption by 2030.
A new dawn
Malladi referred to the current hype around AI as the third spring for a technology he explained had existed since at least the middle of the last century.
He noted the development in the 1950s of the Turing Test, which Encyclopaedia Britannica states is an assessment of a computer's ability to reason in a way people would, as one of the early moves in what he called the first "spring" of AI.
This spring was characterised by a lot of original concepts, including the development of ELIZA, referred to by the New Jersey Institute of Technology as a natural language processing programme written in the mid-1960s by Massachusetts Institute of Technology Professor Joseph Weizenbaum.
An interesting aside is ELIZA was first called a chatterbot, a term now slightly abbreviated to chatbot.
Malladi said this initial spring quickly turned to winter, as research later in the 1960s proved the amount these chatterbots could learn was nowhere near the “lofty goals” expected.
It took until the early 1980s for the second spring of AI to begin, with expert systems, deep convolutional networks and parallel distributed processing capabilities paving the way. Malladi explained factors including human expertise and the start of PCs becoming mainstream led to the collapse of this round of interest in the technology by the early 1990s, the second winter.
Despite this second breakdown, Malladi noted progress in concepts around handwriting and number recognition, citing the potential for ATMs to recognise numbers on cheques being deposited.
Ironically, it was developments later in the 1990s which gave Malladi the confidence that the current AI spring will not peter out again.
He pointed to the birth of the consumer internet, which brought access to a vast amount of data, the lack of which had been a hindrance in the preceding two decades. The second factor was a dramatic increase in the amount of computing power available. Malladi noted desktops and laptops gained more processing capabilities, changing the foundations of AI.
“So we are in this third spring of AI and our prediction is there’s no going back now”, Malladi said, explaining the processing power of devices and amount of data available from public and enterprise sources mean there is “tonnes of automation that can be done already” concerning consumer and productivity use cases.
Security
Malladi brought this back to the case for on-device AI by looking at the type of data involved today.
The executive noted a growing demand for more personalised responses from AI-powered consumer services, but also higher levels of security. Using medical records as an example, Malladi explained an AI voice assistant must offer personalised information rather than rely on details sourced from the public domain, arguing this presents a risk when cloud processing is involved.
“Do you want access to that data and then send it to the cloud so that it can run inference over there and come back? Why would I want to do that if I can run it directly on device?”
Another potential use was demonstrated during Qualcomm’s Snapdragon Summit in 2023, when a person sought information on what they were looking at by pointing their phone at it. Malladi explained context is required to generate a response, including deriving the user’s position from various sensors, a task involving a “lot of information” which is “very local and contextual”.
Malladi argued these examples of the need for data privacy are the reason why on device "is the way to go".
For enterprise scenarios, he explained there may be a need to access data off-site, noting access to corporate servers or cloud services may vary depending on where the employee is.
“But regardless of connectivity, you want to have a uniform AI experience”, he explained, noting if you can run the technology directly on a device “you actually have that capability to get the responses with absolutely no bearing on how the connectivity is”.
Common goals
As with many recent high-level discussions about AI, Malladi noted the importance of partnerships and ethics.
He highlighted Qualcomm does not create genAI models, meaning the development of standardised approaches to assessing these is increasingly important because developers tend to employ their own rules regarding what is fair or safe.
Qualcomm is contributing towards developing those standards, with Malladi referencing work on the AI safety working group of ML Commons, an engineering consortium focused on the technology.

This has been a really good initiative which is recognised, at least within the US, as a starting point
DURGA MALLADI – SVP AND GM FOR TECHNOLOGY PLANNING AND EDGE SOLUTIONS, QUALCOMM TECHNOLOGIES
The company’s partnerships play into its role in developing ethics and principles: Malladi said alongside device OEMs, Qualcomm works with governments and regulators, in part to explain what AI is “and what it is not”, while also engaging with developers, work which includes offering access to testing through a hub centred around the company’s various compatible silicon.
“Our job is not to explain to them the intricacies of our NPU and our CPU, but to make it much more easy for them to be able to access” the chips “without knowing all of the details”.
Malladi argued keeping data local rather than employing the cloud could also play a key role in AI ethics, though acknowledged security remained an important consideration even when information is stored on device. “This has nothing to do with AI per se, but I think in the context of AI it becomes even more important”.
The executive noted increasing concerns among regulators about deepfakes, explaining a big part of the issue is what actually constitutes fakery. He asked if performing some simple edits to a picture counts as falsification, adding Qualcomm considers this as an original element, augmentation as another and totally synthetic images a third.
He said Qualcomm is working with secure content transparency tools provider Truepic to verify metadata covering all three elements, providing a “certificate of authenticity” to offer some degree of transparency.
Along with the fact many flagship smartphones are incorporating AI directly, Malladi noted the pace of development in language models is also playing to Qualcomm’s mission, because companies are doing more with fewer parameter options.
He pointed to Meta Platforms' Llama 3, which comes with 8 billion and 70 billion parameter options compared with the 7 billion, 13 billion and 70 billion of its predecessor, as an example.
“Bottom line, what we call smaller models are way more superior than yesterday’s larger models,” in turn enabling richer use cases on mainstream devices.
While Malladi’s presentation was of course oriented towards Qualcomm’s core competencies and its pro-device push, his views carry weight due to his background as a technologist who studied neural networks, among other fields, at university.
His presentation adds to a growing consensus of the core challenges around implementing AI, along with an emerging understanding of the need for collaboration, education and, of course, data.

 

Esq.111

Fascinatingly Intuitive.
Afternoon Chippers.

Pre checks of spring completed, LOAD TESTING about to commence.



Regards,
Esq.
 