BRN Discussion Ongoing

Luppo71

Founding Member
 
  • Like
  • Thinking
  • Wow
Reactions: 7 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
13 July 2024

Can Neuromorphic Intelligence Bring Robots to Life?

The potential of brain-inspired computing to transform autonomous agents and their interactions with the environment


In the fast-paced world of robotics and artificial intelligence, creating machines that can seamlessly interact with their environment is the holy grail. Imagine robots that not only navigate their surroundings but also learn and adapt in real-time, just as humans do. This dream is inching closer to reality thanks to the field of neuromorphic engineering, a fascinating discipline that is revolutionizing how we think about intelligent systems.


At the heart of this transformation is the concept of embodied neuromorphic intelligence. This approach leverages brain-inspired computing methods to develop robots capable of adaptive, low-power, and efficient interactions with the world. The idea is to mimic the way living organisms process information, enabling robots to perform complex tasks with minimal resources. This novel approach promises to reshape industries, from autonomous vehicles to healthcare and beyond.


Neuromorphic engineering combines principles from neuroscience, electrical engineering, and computer science to create systems that emulate the brain's structure and functionality. Unlike traditional computing, which relies on binary logic and clock-driven operations, neuromorphic systems use spiking neural networks (SNNs) that communicate through electrical pulses, much like neurons in the human brain. This allows for more efficient processing, especially for tasks involving perception, decision-making, and motor control.
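The pulse-based communication described above can be sketched with a minimal leaky integrate-and-fire (LIF) neuron, the basic building block of most spiking neural networks. This is an illustrative toy model, not any particular chip's implementation; the time constant, threshold, and input values are arbitrary:

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    The membrane potential leaks toward rest while integrating input;
    when it crosses v_thresh the neuron emits a spike and resets.
    Returns the list of spike times (in simulation steps).
    """
    v = v_reset
    spikes = []
    for t, i in enumerate(input_current):
        # Leaky integration: dv/dt = (-v + i) / tau
        v += dt * (-v + i) / tau
        if v >= v_thresh:
            spikes.append(t)  # the "electrical pulse" the text describes
            v = v_reset       # reset after spiking
    return spikes

# A constant drive above threshold produces a regular spike train;
# information is carried by spike timing, not by clocked frames.
spike_times = lif_neuron(np.full(200, 1.5))
```

Because a neuron that receives no input emits no spikes, computation and communication happen only where something changes, which is where the efficiency gains come from.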


The journey towards neuromorphic intelligence has been fueled by significant advancements in both hardware and software. Researchers have developed specialized neuromorphic chips that can execute complex neural algorithms with remarkable efficiency. These chips, combined with sophisticated algorithms, allow robots to process sensory inputs and generate appropriate responses in real-time. For instance, a robot equipped with neuromorphic vision can detect and react to changes in its environment almost instantaneously, making it ideal for dynamic and unpredictable settings.


One of the key challenges in neuromorphic engineering is to integrate neuromorphic perception with motor control effectively. To achieve this, researchers have drawn inspiration from the human nervous system, where sensory inputs are continuously processed and used to guide actions. By mimicking this process, neuromorphic systems can generate more coordinated and adaptive behaviors. For example, a neuromorphic robot can use information from its visual sensors to adjust its movements, allowing it to navigate complex environments with ease.


A recent study published in Nature Communications highlights the potential of neuromorphic intelligence to transform robotics. The research, led by Chiara Bartolozzi and her team, explores how neuromorphic circuits and sensorimotor architectures can endow robots with the ability to learn, adapt, and make decisions autonomously. The study presents several proof-of-concept applications, demonstrating the feasibility of this approach in real-world scenarios.


One of the standout examples in the study is the development of a neuromorphic robotic arm. This arm, equipped with spiking neural networks, can perform complex tasks such as grasping objects, manipulating tools, and even playing musical instruments. The researchers achieved this by combining neuromorphic sensors, which emulate the human sense of touch, with advanced motor control algorithms. The result is a robotic arm that can adapt to different tasks and environments, showcasing the versatility of neuromorphic intelligence.


The study also delves into the intricacies of neuromorphic perception. Neuromorphic vision sensors, for instance, mimic the retina's ability to detect changes in light and motion. These sensors can capture visual information with high temporal resolution, allowing robots to perceive and respond to their surroundings more effectively. By integrating these sensors with neuromorphic computation, robots can perform tasks ranging from object recognition to navigation with unprecedented efficiency.


One of the most exciting aspects of neuromorphic intelligence is its potential to revolutionize human-robot interaction. Traditional robots often struggle to interpret and respond to human cues, such as gestures and facial expressions. Neuromorphic systems, on the other hand, can process these complex signals in real-time, enabling more natural and intuitive interactions. This has profound implications for fields like healthcare, where robots could assist patients with daily tasks and provide companionship for the elderly.


Beyond robotics, neuromorphic intelligence holds promise for various applications, including environmental monitoring, smart homes, and autonomous vehicles. For instance, drones equipped with neuromorphic vision can navigate through forests to monitor wildlife or assess the health of crops. In smart homes, neuromorphic sensors can detect and respond to environmental changes, enhancing energy efficiency and security. Autonomous vehicles, with their need for rapid decision-making in complex environments, stand to benefit immensely from neuromorphic computing, potentially leading to safer and more reliable transportation systems.


Despite its tremendous potential, the field of neuromorphic engineering faces several challenges. One of the primary obstacles is the lack of standardized tools and frameworks for developing and integrating neuromorphic systems. Unlike traditional computing, which has a well-established ecosystem of software and hardware tools, neuromorphic engineering is still in its nascent stages. Researchers are working to develop user-friendly platforms that can facilitate the design and deployment of neuromorphic systems, making them accessible to a broader community of engineers and developers.


The study acknowledges these challenges and calls for a collaborative effort to advance the field. It emphasizes the need for modular and reusable components, standard communication protocols, and open-source implementations. By fostering a collaborative ecosystem, the neuromorphic community can accelerate the development of intelligent systems that can seamlessly integrate with existing technologies.


Looking ahead, the future of neuromorphic intelligence is bright, with exciting possibilities on the horizon. Researchers are exploring new materials and technologies that could enhance the performance and scalability of neuromorphic systems. For instance, advancements in memristive devices, which can mimic the synaptic plasticity of the brain, hold promise for creating more efficient and compact neuromorphic circuits. Similarly, the integration of neuromorphic computing with emerging fields like quantum computing and bio-inspired robotics could unlock new frontiers in artificial intelligence.


The journey towards neuromorphic intelligence is an exciting one, filled with challenges and opportunities. As researchers continue to push the boundaries of what is possible, the impact of this field will be felt across various domains, from healthcare to environmental conservation. The dream of creating intelligent machines that can think and act like humans is no longer confined to the realm of science fiction; it is becoming a reality, one breakthrough at a time.


In the words of Chiara Bartolozzi, "The promise of neuromorphic intelligence lies in its ability to combine efficient computation with adaptive behavior, bringing us closer to the goal of creating truly intelligent systems." With ongoing research and collaboration, the future of neuromorphic engineering looks promising, and its potential to transform our world is limitless.



View attachment 66779


I thought this was also pretty cool! The authors of this research paper thank Dr Chiara Bartolozzi (who is referred to in the above article) for her insightful discussions. This research paper also mentions BrainChip's Akida! 🥳🥳🥳





View attachment 66778




View attachment 66774





View attachment 66776



Here's a very short interview recorded in May 2024 with Chiara Bartolozzi (Researcher, Fondazione Istituto Italiano di Tecnologia) on neuromorphic intelligence in robotics.

Chiara was mentioned in a research paper (see above) in which the authors refer to BrainChip's Akida in terms of how the technology might be incorporated into neuroprosthetic devices.

 
  • Like
  • Love
  • Fire
Reactions: 22 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Autonomous vehicles may soon benefit from 100 times faster neuromorphic cameras

1198_neuro.png

Tuesday 27th of August 2024



• With the capacity to capture 5,000 images per second while consuming up to 100 times less energy, event cameras, which offer ultra-fast data transmission, far surpass their traditional counterparts.
• A research team at the University of Zurich has been working on the integration of these new devices in driver assistance systems, which should pave the way for faster obstacle detection in autonomous vehicles.
• Event cameras, which continually capture changes in brightness on the level of individual pixels, benefit from vastly reduced data flows and storage requirements. In China, a research team has recently announced the development of a vision chip that can capture up to 10,000 images per second.

Imagine a new generation of cameras that consume up to 100 times less energy while transmitting image data at 100 times the rate achieved by current devices. These are just two of the game-changing properties of bio-inspired, neuromorphic or event cameras, which could soon have a major impact in a host of applications. Instead of recording a fixed number of frames per second, the new devices asynchronously measure brightness changes for individual pixels while transmitting no data for others that remain unchanged, which leads to a huge reduction in bandwidth. “Elements in the data stream are referred to as ‘events’ because only fractions of the signal are measured by specific electronic chips,” explains researcher Daniel Gehrig of University of Pennsylvania’s General Robotics, Automation, Sensing and Perception (GRASP) Lab.

The research team set itself the goal of combining a neuromorphic camera with a decision-making algorithm without incurring any loss in performance

“In cameras of this type, like those developed by the French company Prophesee, the pixels are continuously exposed but only measure changes in luminance, which effectively allows for continuous signal monitoring.” Put simply, no movement can escape detection by the sensor. “The speed of the camera is equivalent to 5,000 images per second. Any changes will be registered within 0.2 milliseconds, which makes it 100 times faster than a traditional camera.”
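The per-pixel thresholding Gehrig describes can be sketched in a few lines. This toy model compares two frames and emits an event only where log-intensity changed by more than a contrast threshold; a real event camera does this asynchronously in analog circuitry at each pixel, with no frames at all. The function name and threshold value here are illustrative:

```python
import numpy as np

def frame_to_events(prev, curr, t, contrast=0.2):
    """Emit (x, y, t, polarity) events for pixels whose log-intensity
    changed by more than `contrast` since the last reading.

    Toy frame-differencing sketch of the event-camera principle:
    unchanged pixels produce no data at all.
    """
    # Event-camera pixels respond to relative (log-domain) changes
    d = np.log1p(curr.astype(float)) - np.log1p(prev.astype(float))
    ys, xs = np.nonzero(np.abs(d) > contrast)
    return [(int(x), int(y), t, 1 if d[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]

# Static background with one brightening pixel: only that pixel fires
a = np.full((4, 4), 50, dtype=np.uint8)
b = a.copy()
b[2, 3] = 120
events = frame_to_events(a, b, t=0.0002)
print(events)  # single positive-polarity event at (3, 2)
```

A static scene produces an empty event list, which is why bandwidth and storage drop so dramatically compared with streaming full frames.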

Reducing driver assistance reaction times

A few weeks ago, when he was still working in the computer science department of the University of Zurich, Daniel Gehrig published an article in the journal Nature outlining how event cameras could be used to enable vehicles to detect obstacles like pedestrians and cyclists more rapidly. Vehicles equipped with advanced driver assistance systems that rely on traditional cameras currently collect around ten terabytes of data per hour, and those systems still need to be made faster and more reliable.
The Zurich research team set itself the goal of combining a neuromorphic camera with a decision-making algorithm without incurring any loss in performance. “Conventional algorithms analyse images as a whole, unlike the algorithm we developed to process event stream data, which is 5,000 times more efficient in terms of the time required to produce results.” However, to ensure the overall accuracy of the system the researchers also added a second conventional camera at a mere 20 frames per second: “Neuromorphic cameras capture movement, but not the whole scene. Adding a conventional camera gives us context on the vehicle’s environment.”

A Chinese chip that captures 10,000 images per second

For the research team, the next step in this project will be to link their system to a LiDAR. “As it stands, cameras can capture changes in a scene very quickly, but are unable to apprehend distances between objects. The LiDAR will give the vehicle more information and enable it to know how much time remains before it must make a decision.” Ideally, the team would also like to integrate the new algorithm directly into neuromorphic sensors for the automobile industry. However, as Daniel Gehrig points out, “To do this, the algorithm will need to be simplified.”
The Swiss researchers are not alone in developing bio-inspired cameras for intelligent and autonomous vehicles. In China, researchers at Tsinghua University’s Center for Brain Inspired Computing Research (CBICR) have published details of a vision chip called Tianmouc, capable of capturing 10,000 images per second while reducing bandwidth by 90%. Their goal is to avoid data bottlenecks and enable autonomous systems to handle various extreme events with hardware technology that can match the rapid progress of artificial intelligence.
 
  • Like
  • Fire
  • Love
Reactions: 37 users

Bravo

If ARM was an arm, BRN would be its biceps💪!



The above article leads me to suspect they might be referring to a need for TENNs-PLEIADES when they talk about the difficulty in apprehending distances between objects and how much time remains to make a decision. After all we know TENNs can achieve excellent performance on tasks that use temporal and spatial information.

It also says for it to work for the automobile industry “To do this, the algorithm will need to be simplified.”

This may be one for @Diogenese to help answer.

Needless to say, it would be 10 out of 10 if TENNs were to be the solution to the issues that they raise. 😝

 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 20 users

Diogenese

Top 20
The above article leads me to suspect they might be referring to a need for TENNs-PLEIADES when they talk about the difficulty in apprehending distances between objects and how much time remains to make a decision. After all we know TENNs can achieve excellent performance on tasks that use temporal and spatial information.

It also says for it to work for the automobile industry “To do this, the algorithm will need to be simplified.”

This may be one for @Diogenese to help answer.

Needless to say, it would be 10 out of 10 if TENNs were to be the solution to the issues that they raise. 😝

For the research team, the next step in this project will be to link their system to a LiDAR. “As it stands, cameras can capture changes in a scene very quickly, but are unable to apprehend distances between objects. The LiDAR will give the vehicle more information and enable it to know how much time remains before it must make a decision.” Ideally, the team would also like to integrate the new algorithm directly into neuromorphic sensors for the automobile industry. However, as Daniel Gehrig points out, “To do this, the algorithm will need to be simplified.”
Hi Bravo,

Measuring the distance is a speciality of lidar or radar.

Lidar measures the time of flight - the time between sending a laser pulse and receiving the reflection - divide by 2 and multiply by C and you know the distance.
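That time-of-flight arithmetic (halve the round-trip time, multiply by the speed of light) is simple enough to sketch directly; the function name is illustrative:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_distance(round_trip_s):
    """Distance = (time of flight / 2) * c.

    The pulse travels out to the target and back, so the one-way
    distance corresponds to half the measured round-trip time.
    """
    return round_trip_s / 2 * C

# A reflection received 200 ns after emission puts the target ~30 m away
print(round(lidar_distance(200e-9), 2))  # 29.98
```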

TeNNs can measure the object's movement/speed and direction, so time to spatial coincidence can be calculated. Of course, the calculation needs to take into account the speed/direction of the vehicle as well as the object.
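The time-to-spatial-coincidence idea reduces, in the simplest case, to a closing-speed calculation. This is a hypothetical 1-D sketch, not how TENNs or any production system computes it; real systems must handle full 2-D/3-D trajectories for both vehicle and object:

```python
def time_to_coincidence(gap_m, vehicle_speed, object_speed):
    """Estimate seconds until vehicle and object paths coincide.

    1-D simplification: speeds are in m/s along the same axis,
    positive meaning motion toward each other. Returns None when
    the gap is opening, i.e. the paths never coincide.
    """
    closing_speed = vehicle_speed + object_speed
    if closing_speed <= 0:
        return None
    return gap_m / closing_speed

# Car at 20 m/s, pedestrian stepping toward the lane at 1 m/s, 42 m apart
print(time_to_coincidence(42.0, 20.0, 1.0))  # 2.0 seconds to react
```

Note how the result depends on both speeds, which is Diogenese's point: the camera-side algorithm can supply the object's motion, but the vehicle's own motion must be folded in as well.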
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 22 users

7für7

Regular
Tell me you are a BrainChip holder without telling me you are a BrainChip holder

1724980179366.gif
 
  • Haha
  • Thinking
Reactions: 9 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hi Bravo,

Measuring the distance is a speciality of lidar or radar.

Lidar measures the time of flight - the time between sending a laser pulse and receiving the reflection - divide by 2 and multiply by C and you know the distance.

TeNNs can measure the object's movement/speed and direction, so time to spatial coincidence can be calculated. Of course, the calculation needs to take into account the speed/direction of the vehicle as well as the object.


i-like-it-a-lot-jim-carrey.gif
 
  • Haha
  • Love
  • Fire
Reactions: 8 users

Luppo71

Founding Member
  • Haha
  • Like
Reactions: 3 users

7für7

Regular
Sorry all, didn't realise there was 9000 posts about this already.
Need to read more posts .
Seems like some people have put each other on their ignore lists… not your fault 😂
 

Luppo71

Founding Member
Seems like some people have put each other on their ignore lists… not your fault 😂
Been here since day one and don't have anyone on ignore.
Although you've come close a couple of times.
 
  • Haha
  • Like
Reactions: 14 users

IloveLamp

Top 20
  • Haha
  • Like
Reactions: 8 users

7für7

Regular
Been here since day one and don't have anyone on ignore.
Although you've come close a couple of times.
Sometimes I don't understand why people always react so aggressively... I sometimes think that many people just want to vent their frustration and do so on the first person they find... In my case, I didn't even write anything against you but simply shared a general observation. I even wrote that it wasn't your fault that others before you posted the same thing several times. Anyway... I really don't care how long you've been here, whether you're the king of a tribe in Papua New Guinea, or Elon Musk in disguise. If you feel the need to put me on ignore for trivial matters, then just do it... or do you think I'm now scared and thinking 'Oh no... he's considering blocking me'? Ridiculous.
 
  • Thinking
  • Haha
  • Like
Reactions: 6 users

Plebby

Member
Sometimes I don't understand why people always react so aggressively... I sometimes think that many people just want to vent their frustration and do so on the first person they find... In my case, I didn't even write anything against you but simply shared a general observation. I even wrote that it wasn't your fault that others before you posted the same thing several times. Anyway... I really don't care how long you've been here, whether you're the king of a tribe in Papua New Guinea, or Elon Musk in disguise. If you feel the need to put me on ignore for trivial matters, then just do it... or do you think I'm now scared and thinking 'Oh no... he's considering blocking me'? Ridiculous.
You are painful.
 
  • Like
  • Haha
Reactions: 5 users
Sometimes I don't understand why people always react so aggressively... I sometimes think that many people just want to vent their frustration and do so on the first person they find... In my case, I didn't even write anything against you but simply shared a general observation. I even wrote that it wasn't your fault that others before you posted the same thing several times. Anyway... I really don't care how long you've been here, whether you're the king of a tribe in Papua New Guinea, or Elon Musk in disguise. If you feel the need to put me on ignore for trivial matters, then just do it... or do you think I'm now scared and thinking 'Oh no... he's considering blocking me'? Ridiculous.
Come on 7 of 9. I took it as humour although sometimes it is hard to get the correct context of someone's post. The ironic thing is if it were just in jest, you are doing what you just accused him of.

SC
 
  • Haha
  • Like
Reactions: 7 users

7für7

Regular
Come on 7 of 9. I took it as humour although sometimes it is hard to get the correct context of someone's post. The ironic thing is if it were just in jest, you are doing what you just accused him of.

SC
Alright then, if he meant it as a joke, he can just tell me... then I'll apologize 🤷🏻‍♂️ it's no problem... until then, what I wrote stands. That's clear. Anyway... I think this topic is quite overrated and most of the people here are not interested in personal fights…. As I said, I am always kind, yes sometimes I post also „unfunny funny stuff“. But just because I am waiting here for really huge news. Until then, everyone is posting his stuff…. Akida inside even where there is no connection to us… articles from 250 years ago about how people imagined the future etc… sometimes some childish posts like how ugly our CEO looks on pics or why he is using simple toilet paper instead of silk based one… 🤷🏻‍♂️
 
Last edited:
  • Like
  • Haha
Reactions: 4 users
Alright then, if he meant it as a joke, he can just tell me... then I'll apologize 🤷🏻‍♂️ it's no problem... until then, what I wrote stands. That's clear. Anyway... I think this topic is quite overrated and most of the people here are not interested in personal fights…. As I said, I am always kind, yes sometimes I post also „unfunny funny stuff“. But just because I am waiting here for really huge news. Until then, everyone is posting his stuff…. Akida inside even where there is no connection to us… articles from 250 years ago about how people imagined the future etc… sometimes some childish posts like how ugly our CEO looks on pics or why he is using simple toilet paper instead of silk based one… 🤷🏻‍♂️
We're all a little frustrated. All good. Just pointing out it is hard to understand what someone means in written word on a forum. Unless of course someone fires both barrels at someone and leaves no doubt. 🤣

SC
 
  • Haha
  • Like
  • Love
Reactions: 5 users

Tezza

Regular
I am also frustrated, and I am getting closer to putting my money elsewhere. I do believe in the product, but bank interest over the last 4 years would have been better. Not sure I can listen to "this is our year" again in 2025.
Thinking of switching shares and maybe jumping back in on the way up after a concrete IP sign-up or an announcement that we are in this or that product.
Now to enjoy the weekend, off to Broadbeach for a week.
 
  • Like
  • Love
Reactions: 10 users

CHIPS

Regular
Last edited:
  • Like
  • Fire
  • Love
Reactions: 17 users