BRN Discussion Ongoing

rgupta

Regular

JB49

Regular
@Diogenese here's another to add to your collection where they still mention Akida.

Good to see they appear to believe we are a good fit for them.

Published 27/8.




[Attached screenshot: IMG_20240829_172521.jpg]

[Attached screenshot: IMG_20240829_172329.jpg]
 

goodvibes

Regular
Looks promising to me… MYWAI's AIoT Management System has been selected for the #9 edition of the Bosch Startup Harbour acceleration program


Excited to share that MYWAI's AIoT Management System has been selected for the #9 edition of the Bosch Startup Harbour acceleration program! Our unique ability to bring Edge, Generative, Multimodal, and Agentic AI to existing equipment, appliances, and machinery within Smart Factory, Energy, Harbour, and Hospital setups made this possible. Grateful for the opportunity and eager to collaborate with Bosch and the talented cohort. A big thank you to the selection committee, and congratulations to all the startups in this edition! Let’s innovate together!
 

manny100

Top 20
Good find FMF, all the pieces of the puzzle adding up. No doubt it will be posted on the hole known as the crapper.
 
Maybe.

I hadn't decided if I was putting it there as yet. Sometimes I do, as something positive for actual holders amongst the onslaught of bagging of the company, the tech and the BOD (who, yes, do need to get some traction happening) by non-holders, holders, and supposed holders who state openly they are short.

Never understood that disclosure... if they are short, they borrowed the shares to sell short and technically don't hold them, as they've sold them, unless they are hedging and also hold another lot, I guess.

Each to their own.

It's just sad that forums appear to bring out the worst in some people's characters.

By all means state the case and the contrarian view and why, then move on. Why hang out on a thread doing the same thing day after day? Just malicious and belligerent attitudes imo, but they'll always hide behind "it's a forum and they can if they want"... true, they can, but they can choose not to as well.

Anyway... that's the world we live in, unfortunately.
 
Then they say, "well, nothing I post is going to influence the share price" 🙄..

Sentiment is a powerful thing.

Doji candles in charting were "basically" created to measure it, is my understanding..

In its purest form, at least..
 

7für7

Top 20
Good to see we are at least increasing salespeople, if not the sales!!! 🤣🤣🤣
Hmmm… you must be British with your sense of humour… not!
 
Agree that sentiment is powerful.

See, you do know charting :)

Correct... doji have several different variations, but all are quite revealing.

I snipped a quick chart again to look at the last couple of days' bars, for example.

Yesterday's was what some call a morning star, a little doji cross. It's easier to see when using OHLC bars: the price opens, gets pushed up or down, then the opposite way, then closes where it opened. It indicates the tussle between bears and bulls.

Today's bar is a dragonfly doji, of sorts. It has a very little tail, but it's still an indecision-style bar.

A strong dragonfly has a long tail: again, price opens, gets pushed down immediately by the bears, then the bulls push back up to close where it opened. A longer tail indicates the bulls had some strength to push back up.

These form near bottoms and in downtrends, but also in ranging or congestion areas etc. You can see some others on the left from when it was in that short ranging period.

When they form at highs or in uptrends you get evening stars or gravestone doji (an upside-down dragonfly).

Anyway, they overall indicate indecision, congestion, or battles between bulls and bears, and can be a precursor to reversals at times. Be nice to see the bulls rise to the occasion though.
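Out of interest, the doji logic above is mechanical enough to code up. Here's a rough sketch in Python; the body/tail thresholds are my own illustrative assumptions, not any standard definition:

```python
def classify_candle(o, h, l, c, body_frac=0.1):
    """Rough doji classifier for a single OHLC bar.

    A bar is treated as a doji when its body (|close - open|) is small
    relative to the full range; the tail/wick split then distinguishes
    dragonfly from gravestone. Thresholds are illustrative, not standard.
    """
    rng = h - l
    if rng == 0:
        return "flat"
    body = abs(c - o)
    if body > body_frac * rng:
        return "not a doji"
    upper = h - max(o, c)   # upper wick
    lower = min(o, c) - l   # lower tail
    if lower > 2 * upper:
        return "dragonfly doji"   # long tail: bears pushed down, bulls recovered
    if upper > 2 * lower:
        return "gravestone doji"  # long wick: bulls pushed up, bears recovered
    return "doji"

# e.g. opens at 0.20, sold down to 0.188, closes back where it opened:
print(classify_candle(0.20, 0.205, 0.188, 0.20))  # → dragonfly doji
```

Just a toy, obviously. Real charting packages weight these patterns by where they sit in the trend, which a single-bar function can't see.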


[Attached chart: IMG_20240829_202453.jpg]
 
Grapheme synesthesia solved, @Frangipani 😉

At least according to my hypothesis, which I'm calling..
Conditioned Associative Learned Memory Response or CALMeR for short.

I had a brief conversation with someone who displayed vivid memories of events from 42 years ago. I brought up "S's" mind-boggling memory, synesthesia and your colour associations, and she basically said "me too".
I asked what colour was the letter A?
Bang "red" she replied.


It's been going over in my head why the letter A would be associated with the colour red (which is the most commonly reported associative colour).

And "this" popped into my head.
I Googled and came up with..
(it's a selective screenshot of baby learning cards).


[Attached screenshot: 20240829_231647.jpg]



From
"It has been suggested that synesthesia develops during childhood when children are intensively engaged with abstract concepts for the first time"

My hypothesis just expands on that, and I do not think grapheme-color synesthesia is true synesthesia involving the senses; rather, it is purely memory-related.

I'm currently writing a paper, which I will put forward for peer review and subsequent inclusion in the next edition of the Neuromorphic Times.




I'm kidding 😛
 
Elon Musk, on Technology Advancement.

 

Luppo71

Founding Member

Bravo

If ARM was an arm, BRN would be its biceps💪!
13 July 2024

Can Neuromorphic Intelligence Bring Robots to Life?

The potential of brain-inspired computing to transform autonomous agents and their interactions with the environment


In the fast-paced world of robotics and artificial intelligence, creating machines that can seamlessly interact with their environment is the holy grail. Imagine robots that not only navigate their surroundings but also learn and adapt in real-time, just as humans do. This dream is inching closer to reality thanks to the field of neuromorphic engineering, a fascinating discipline that is revolutionizing how we think about intelligent systems.


At the heart of this transformation is the concept of embodied neuromorphic intelligence. This approach leverages brain-inspired computing methods to develop robots capable of adaptive, low-power, and efficient interactions with the world. The idea is to mimic the way living organisms process information, enabling robots to perform complex tasks with minimal resources. This novel approach promises to reshape industries, from autonomous vehicles to healthcare and beyond.


Neuromorphic engineering combines principles from neuroscience, electrical engineering, and computer science to create systems that emulate the brain's structure and functionality. Unlike traditional computing, which relies on binary logic and clock-driven operations, neuromorphic systems use spiking neural networks (SNNs) that communicate through electrical pulses, much like neurons in the human brain. This allows for more efficient processing, especially for tasks involving perception, decision-making, and motor control.
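To make the spiking idea above concrete, here is a toy leaky integrate-and-fire neuron in Python. The constants are illustrative assumptions on my part, not parameters of Akida or any real neuromorphic chip:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron.

    The membrane potential decays by `leak` each step, accumulates the
    input, and emits a spike (1) when it crosses `threshold`, then resets.
    On steps with no spike the neuron transmits nothing of interest,
    which is the event-driven property behind spiking networks' efficiency.
    """
    v, spikes = 0.0, []
    for i in input_current:
        v = v * leak + i
        if v >= threshold:
            spikes.append(1)
            v = 0.0           # reset after firing
        else:
            spikes.append(0)
    return spikes

# Weak sustained input integrates to a spike; a pause lets the potential leak away.
print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # → [0, 0, 0, 1, 0, 0, 1]
```

Hardware SNNs differ in many ways (analog dynamics, learned weights, asynchronous routing), but the threshold-and-spike behaviour is the common core.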


The journey towards neuromorphic intelligence has been fueled by significant advancements in both hardware and software. Researchers have developed specialized neuromorphic chips that can execute complex neural algorithms with remarkable efficiency. These chips, combined with sophisticated algorithms, allow robots to process sensory inputs and generate appropriate responses in real-time. For instance, a robot equipped with neuromorphic vision can detect and react to changes in its environment almost instantaneously, making it ideal for dynamic and unpredictable settings.


One of the key challenges in neuromorphic engineering is to integrate neuromorphic perception with motor control effectively. To achieve this, researchers have drawn inspiration from the human nervous system, where sensory inputs are continuously processed and used to guide actions. By mimicking this process, neuromorphic systems can generate more coordinated and adaptive behaviors. For example, a neuromorphic robot can use information from its visual sensors to adjust its movements, allowing it to navigate complex environments with ease.


A recent study published in Nature Communications highlights the potential of neuromorphic intelligence to transform robotics. The research, led by Chiara Bartolozzi and her team, explores how neuromorphic circuits and sensorimotor architectures can endow robots with the ability to learn, adapt, and make decisions autonomously. The study presents several proof-of-concept applications, demonstrating the feasibility of this approach in real-world scenarios.


One of the standout examples in the study is the development of a neuromorphic robotic arm. This arm, equipped with spiking neural networks, can perform complex tasks such as grasping objects, manipulating tools, and even playing musical instruments. The researchers achieved this by combining neuromorphic sensors, which emulate the human sense of touch, with advanced motor control algorithms. The result is a robotic arm that can adapt to different tasks and environments, showcasing the versatility of neuromorphic intelligence.


The study also delves into the intricacies of neuromorphic perception. Neuromorphic vision sensors, for instance, mimic the retina's ability to detect changes in light and motion. These sensors can capture visual information with high temporal resolution, allowing robots to perceive and respond to their surroundings more effectively. By integrating these sensors with neuromorphic computation, robots can perform tasks ranging from object recognition to navigation with unprecedented efficiency.


One of the most exciting aspects of neuromorphic intelligence is its potential to revolutionize human-robot interaction. Traditional robots often struggle to interpret and respond to human cues, such as gestures and facial expressions. Neuromorphic systems, on the other hand, can process these complex signals in real-time, enabling more natural and intuitive interactions. This has profound implications for fields like healthcare, where robots could assist patients with daily tasks and provide companionship for the elderly.


Beyond robotics, neuromorphic intelligence holds promise for various applications, including environmental monitoring, smart homes, and autonomous vehicles. For instance, drones equipped with neuromorphic vision can navigate through forests to monitor wildlife or assess the health of crops. In smart homes, neuromorphic sensors can detect and respond to environmental changes, enhancing energy efficiency and security. Autonomous vehicles, with their need for rapid decision-making in complex environments, stand to benefit immensely from neuromorphic computing, potentially leading to safer and more reliable transportation systems.


Despite its tremendous potential, the field of neuromorphic engineering faces several challenges. One of the primary obstacles is the lack of standardized tools and frameworks for developing and integrating neuromorphic systems. Unlike traditional computing, which has a well-established ecosystem of software and hardware tools, neuromorphic engineering is still in its nascent stages. Researchers are working to develop user-friendly platforms that can facilitate the design and deployment of neuromorphic systems, making them accessible to a broader community of engineers and developers.


The study acknowledges these challenges and calls for a collaborative effort to advance the field. It emphasizes the need for modular and reusable components, standard communication protocols, and open-source implementations. By fostering a collaborative ecosystem, the neuromorphic community can accelerate the development of intelligent systems that can seamlessly integrate with existing technologies.


Looking ahead, the future of neuromorphic intelligence is bright, with exciting possibilities on the horizon. Researchers are exploring new materials and technologies that could enhance the performance and scalability of neuromorphic systems. For instance, advancements in memristive devices, which can mimic the synaptic plasticity of the brain, hold promise for creating more efficient and compact neuromorphic circuits. Similarly, the integration of neuromorphic computing with emerging fields like quantum computing and bio-inspired robotics could unlock new frontiers in artificial intelligence.


The journey towards neuromorphic intelligence is an exciting one, filled with challenges and opportunities. As researchers continue to push the boundaries of what is possible, the impact of this field will be felt across various domains, from healthcare to environmental conservation. The dream of creating intelligent machines that can think and act like humans is no longer confined to the realm of science fiction; it is becoming a reality, one breakthrough at a time.


In the words of Chiara Bartolozzi, "The promise of neuromorphic intelligence lies in its ability to combine efficient computation with adaptive behavior, bringing us closer to the goal of creating truly intelligent systems." With ongoing research and collaboration, the future of neuromorphic engineering looks promising, and its potential to transform our world is limitless.



View attachment 66779


I thought this was also pretty cool! The authors of this research paper thank Dr Chiara Bartolozzi (who is referred to in the above article) for her insightful discussions. This research paper also mentions BrainChip's Akida! 🥳🥳🥳





View attachment 66778




View attachment 66774





View attachment 66776



Here's a very short interview recorded in May 2024 with Chiara Bartolozzi (Researcher, Fondazione Istituto Italiano di Tecnologia) on neuromorphic intelligence in robotics.

Chiara was mentioned in a research paper (see above) in which the authors refer to BrainChip's Akida in terms of how the technology might be incorporated into neuroprosthetic devices.

 

Bravo

If ARM was an arm, BRN would be its biceps💪!

Autonomous vehicles may soon benefit from 100 times faster neuromorphic cameras

[Image: 1198_neuro.png]

Tuesday 27th of August 2024



• With the capacity to capture 5,000 images per second while consuming up to 100 times less energy, event cameras, which offer ultra-fast data transmission, far surpass their traditional counterparts.
• A research team at the University of Zurich has been working on the integration of these new devices in driver assistance systems, which should pave the way for faster obstacle detection in autonomous vehicles.
• Event cameras, which continually capture changes in brightness on the level of individual pixels, benefit from vastly reduced data flows and storage requirements. In China, a research team has recently announced the development of a vision chip that can capture up to 10,000 images per second.

Imagine a new generation of cameras that consume up to 100 times less energy while transmitting image data at 100 times the rate achieved by current devices. These are just two of the game-changing properties of bio-inspired, neuromorphic or event cameras, which could soon have major impact in a host of applications. Instead of recording a fixed number of frames per second, the new devices asynchronously measure brightness changes for individual pixels while transmitting no data for others that remain unchanged, which leads to a huge reduction in bandwidth. “Elements in the data stream are referred to as ‘events’ because only fractions of the signal are measured by specific electronic chips,” explains researcher Daniel Gehrig of the University of Pennsylvania’s General Robotics, Automation, Sensing and Perception (GRASP) Lab.
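The per-pixel event principle described above can be sketched in a few lines. This is a frame-differencing toy, not how an event camera's analog pixels actually work, and the log-change threshold is an illustrative assumption:

```python
import math

def frame_to_events(prev, curr, t, threshold=0.2):
    """Toy event generator: compare two brightness frames and emit an
    (x, y, t, polarity) event only for pixels whose log-brightness change
    exceeds `threshold`. Unchanged pixels produce no data at all, which is
    where the bandwidth saving of an event camera comes from.
    """
    events = []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            delta = math.log(c + 1e-6) - math.log(p + 1e-6)
            if delta > threshold:
                events.append((x, y, t, 1))    # brightness increased
            elif delta < -threshold:
                events.append((x, y, t, -1))   # brightness decreased
    return events

# One unchanged pixel, one brightening, one dimming: only two events emitted.
prev = [[1.0, 1.0, 1.0]]
curr = [[1.0, 2.0, 0.4]]
print(frame_to_events(prev, curr, t=0))  # → [(1, 0, 0, 1), (2, 0, 0, -1)]
```

A real sensor does this comparison continuously and asynchronously per pixel, with microsecond timestamps, rather than between discrete frames.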

The research team set itself the goal of combining a neuromorphic camera with a decision-making algorithm without incurring any loss in performance

“In cameras of this type, like those developed by the French company Prophesee, the pixels are continuously exposed but only measure changes in luminance, which effectively allows for continuous signal monitoring.” Put simply, no movement can escape detection by the sensor. “The speed of the camera is equivalent to 5,000 images per second. Any changes will be registered within 0.2 milliseconds, which makes it 100 times faster than a traditional camera.”

Reducing driver assistance reaction times

A few weeks ago, when he was still working in the computer science department of the University of Zurich, Daniel Gehrig published an article in the journal Nature outlining how event cameras could be used to enable vehicles to detect obstacles like pedestrians and cyclists more rapidly. Vehicles equipped with advanced driver assistance detection systems that make use of traditional cameras, which still need to be made faster and more reliable, currently collect around ten terabytes of data per hour.
The Zurich research team set itself the goal of combining a neuromorphic camera with a decision-making algorithm without incurring any loss in performance. “Conventional algorithms analyse images as a whole, unlike the algorithm we developed to process event stream data, which is 5,000 times more efficient in terms of the time required to produce results.” However, to ensure the overall accuracy of the system the researchers also added a second conventional camera at a mere 20 frames per second: “Neuromorphic cameras capture movement, but not the whole scene. Adding a conventional camera gives us context on the vehicle’s environment.”

A Chinese chip that captures 10,000 images per second

For the research team, the next step in this project will be to link their system to a LiDAR. “As it stands, cameras can capture changes in a scene very quickly, but are unable to apprehend distances between objects. The LiDAR will give the vehicle more information and enable it to know how much time remains before it must make a decision.” Ideally, the team would also like to integrate the new algorithm directly into neuromorphic sensors for the automobile industry. However, as Daniel Gehrig points out, “To do this, the algorithm will need to be simplified.”
The Swiss researchers are not alone in developing bio-inspired cameras for intelligent and autonomous vehicles. In China, researchers at Tsinghua University’s Center for Brain Inspired Computing Research (CBICR) have published details of a vision chip called Tianmouc, capable of capturing 10,000 images per second while reducing bandwidth by 90%. Their goal is to avoid data bottlenecks and enable autonomous systems to handle various extreme events with hardware technology that can match the rapid progress of artificial intelligence.
 

Bravo

If ARM was an arm, BRN would be its biceps💪!

The above article leads me to suspect they might be referring to a need for TENNs-PLEIADES when they talk about the difficulty in apprehending distances between objects and how much time remains to make a decision. After all we know TENNs can achieve excellent performance on tasks that use temporal and spatial information.

It also says that for it to work for the automobile industry, “the algorithm will need to be simplified.”

This may be one for @Diogenese to help answer.

Needless to say, it would be 10 out of 10 if TENNs were to be the solution to the issues that they raise. 😝

 

Diogenese

Top 20
Hi Bravo,

Measuring the distance is a speciality of lidar or radar.

Lidar measures the time of flight - the time between sending a laser pulse and receiving the reflection - divide by 2, multiply by c, and you know the distance.

TENNs can measure the object's movement/speed and direction, so the time to spatial coincidence can be calculated. Of course, the calculation needs to take into account the speed/direction of the vehicle as well as the object's.
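Back-of-envelope, the two calculations read like this (the numbers are illustrative, and the time-to-coincidence version assumes both speeds lie along the same line):

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_distance(time_of_flight_s):
    """Round-trip time of flight: halve it, multiply by c."""
    return time_of_flight_s / 2 * C

def time_to_coincidence(gap_m, vehicle_speed_ms, object_speed_ms):
    """Seconds until the vehicle closes the gap, given both speeds along
    the same line (an object moving away has positive speed too).
    Returns None if the gap is not closing.
    """
    closing = vehicle_speed_ms - object_speed_ms
    return gap_m / closing if closing > 0 else None

# A reflection arriving 400 ns after the pulse left:
print(lidar_distance(400e-9))                # ~59.96 m
# 60 m gap, car at 25 m/s, cyclist ahead at 5 m/s:
print(time_to_coincidence(60.0, 25.0, 5.0))  # 3.0 s
```

The real in-vehicle problem is of course 2D/3D with changing headings, which is why fusing lidar range with event-based motion estimates is attractive.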
 

7für7

Top 20
Tell me you're a BrainChip holder without telling me you're a BrainChip holder

 

Bravo

If ARM was an arm, BRN would be its biceps💪!
[Reaction GIF: i-like-it-a-lot-jim-carrey.gif]
 

Luppo71

Founding Member