BRN Discussion Ongoing

Arayat

Emerged
7, I don't have anything against you; my "come on" comment was supposed to be a playful jab, with the image of 100% AI below it.
I think you might be reading into this one a little too deeply, but I'm happy to get specific.
Using ChatGPT to tell us all why BrainChip is a great company is bad practice, though. AI bots are people-pleasers that will create a narrative that sounds close to correct but may not be.

For instance:
"BrainChip has regularly secured funding rounds and received investments, reflecting investor confidence in the company and its technology"

This is just not accurate, is it?
They have a Put agreement with LDA Capital, who can sell on market and return an agreed-upon value based on the average price over the term period.
I'm not saying I personally take issue with LDA, but many do! This flies in the face of the statement that it "reflects investor confidence".
ChatGPT doesn't understand that the funding is not from new/existing investors; it just assumes and spins a yarn.

AI writing is flat, formulaic and boring. Like a metronome. Or if you tell it to sound excited, the excitement sounds artificial. And it lacks insight. So to be persuasive and really get to the point, a human is far better.

I write for a living and still get work. Not sure if I will in a couple of years though, or not the same kind of work, as AI improves. Let's hope Akida is behind some of the improvement.
 

Diogenese

Top 20
Sprung!

All right, you got me - I'm a silicon lifeform that uses sparse N-of-M coding and a model library of digital, forward-fed, convoluted, bowdlerized, plagiarized vocabs and anodyne syntax.

Top of the world, Ma!
 
Well, you had me completely fooled Dio. 🙃
 

FJ-215

Regular
" ..AI writing is flat, formulaic and boring. Like a metronome. Or if you tell it to sound excited, the excitement sounds artificial. And it lacks insight."

That sounds like some of my report cards from high school... :sick:
 

Frangipani

Regular
New preprint mentioning Akida (mainly Gen 1) by three researchers from Edith Cowan University, including Alexander Rassau, who co-authored several papers with Anup Vanarse, Adam Osseiran and Peter van der Made between 2016 and 2020…






5. Conclusion

SLAM based on event cameras and neuromorphic computing represents an innovative approach to spatial perception and mapping in dynamic environments.
Event cameras capture visual information asynchronously, responding immediately to changes in the scene with high temporal resolution. Neuromorphic computing, inspired by the brain's processing principles, has the capacity to efficiently handle this event-based data, enabling real-time, low-power computation. By combining event cameras and neuromorphic processing, SLAM systems could achieve several advantages, including low latency, low power consumption, robustness to changing lighting conditions, and adaptability to dynamic environments. This integrated approach offers efficient, scalable, and robust solutions for applications such as robotics, augmented reality, and autonomous vehicles, with the potential to transform spatial perception and navigation capabilities in various domains.


5.1. Summary of Key Findings


VSLAM systems based on traditional image sensors such as monocular, stereo or RGB-D cameras have gained significant development attention in recent decades. These sensors can gather detailed data about the scene and are available at affordable prices. They also have relatively low power requirements, making them feasible for autonomous systems such as self-driving cars, unmanned aerial vehicles and other mobile robots. VSLAM systems employing these sensors have achieved reasonable performance and accuracy but have often struggled in real-world contexts due to high computational demands, limited adaptability to dynamic environments, and susceptibility to motion blur and lighting changes. Moreover, they face difficulties in real-time processing, especially in resource-constrained settings like autonomous drones or mobile robots.

To overcome the drawbacks of these conventional sensors, event cameras have begun to be explored. They have been inspired by the working of biological retinas and attempt to mimic the characteristics of human eyes. This biological design influence for event cameras means they consume minimal power and operate with lower bandwidth in addition to other notable features such as very low latency, high temporal resolution, and wide dynamic range. These attractive features make event cameras highly suitable for robotics, autonomous vehicles, drone navigation, and high-speed tracking applications. However, they operate on a fundamentally different principle compared to traditional cameras; event cameras respond to the brightness changes of the scene and generate events rather than capturing the full frame at a time. This poses challenges as algorithms and approaches employed in conventional image processing and SLAM systems cannot be directly applied and novel methods are required to realize their potential.
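
As a minimal sketch of that difference (not from the preprint): an event camera emits a stream of (timestamp, x, y, polarity) tuples, and the simplest bridge to conventional frame-based algorithms is to accumulate a short time window of events into an image-like histogram. The sensor resolution, the 10 ms window and the sample events below are made-up values for illustration only.

```python
import numpy as np

# Hypothetical event stream: each row is (timestamp_us, x, y, polarity),
# the typical output of a DVS-style event camera.
events = np.array([
    [1000, 12, 40, 1],
    [1800, 13, 40, 1],
    [2500, 13, 41, 0],
    [9000, 90, 10, 1],
], dtype=np.int64)

HEIGHT, WIDTH = 64, 128      # illustrative sensor resolution
WINDOW_US = 10_000           # accumulate 10 ms of events per pseudo-frame

def events_to_frame(evts, t_start):
    """Accumulate one time window of events into a signed 2-D histogram.

    This is the crudest way to feed asynchronous events to algorithms that
    expect frames; it discards most of the temporal resolution that makes
    event cameras attractive in the first place.
    """
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.int32)
    in_window = (evts[:, 0] >= t_start) & (evts[:, 0] < t_start + WINDOW_US)
    for t, x, y, p in evts[in_window]:
        frame[y, x] += 1 if p == 1 else -1   # ON events add, OFF events subtract
    return frame

frame = events_to_frame(events, t_start=0)
print("non-zero pixels in 10 ms window:", np.count_nonzero(frame))
```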

For SLAM systems based on event cameras, relevant methods can be selected based on the event representations and the hardware platform being used. Commonly employed methods for event-based SLAM are feature-based, direct, motion-compensated and deep-learning. Feature-based methods can be computationally efficient as they only deal with the small numbers of events produced by the fast-moving cameras for processing. However, their efficacy diminishes when dealing with a texture-less environment. On the other hand, the direct method can achieve robustness in a texture-less environment, but it can only be employed for moderate camera motions. Motion-compensated methods can offer robustness in high-speed motion as well as in large-scale settings, but they can only be employed for rotational camera motions. Deep learning methods can be effectively used to acquire the required attributes of the event data and generate the map while being robust to noise and outliers. However, this requires large amounts of training data, and performance cannot be guaranteed for different environment settings. SNNs have emerged in recent years as alternatives to CNNs and are considered well-suited for data generated by event cameras. The development of practical SNN-based systems is, however, still in the early stages and relevant methods and techniques need considerable further development before they can be implemented in an event camera-based SLAM system.
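
As a rough sketch of the motion-compensated family of methods described above (again, not code from the preprint): events are warped back to a reference time under a candidate motion, and the candidate that yields the sharpest, highest-contrast event image is kept. The constant pixel-velocity flow model and brute-force grid search below are simplifying assumptions; published contrast-maximisation methods typically use rotational motion models and gradient-based optimisation.

```python
import numpy as np

def warp_and_score(events, velocity, height=64, width=128):
    """Warp events to the first event's timestamp under a constant
    pixel-velocity model and score the candidate by the contrast
    (variance) of the resulting event image: sharper image, better fit."""
    t, x, y = events[:, 0], events[:, 1], events[:, 2]
    dt = (t - t[0]) * 1e-6                                    # microseconds -> seconds
    xw = np.clip(np.round(x - velocity[0] * dt), 0, width - 1).astype(int)
    yw = np.clip(np.round(y - velocity[1] * dt), 0, height - 1).astype(int)
    img = np.zeros((height, width))
    np.add.at(img, (yw, xw), 1.0)                             # accumulate warped events
    return img.var()

def estimate_velocity(events, candidates):
    """Brute-force search over candidate velocities (px/s)."""
    scores = [warp_and_score(events, v) for v in candidates]
    return candidates[int(np.argmax(scores))]

# Toy usage: an edge moving at ~1000 px/s in x generates one event per ms.
evts = np.array([[i * 1000, 10 + i, 32, 1] for i in range(20)])
print(estimate_velocity(evts, [(0.0, 0.0), (500.0, 0.0), (1000.0, 0.0)]))
```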

For conventional SLAM systems, traditional computing platforms usually require additional hardware such as GPU co-processors to perform the heavy computational loads, particularly when deep learning methods are employed. This high computational requirement means power requirements are also high, making them impractical for deployment in mobile autonomous systems. However, neuromorphic event-driven processors utilizing SNNs to model cognitive and interaction capabilities show promise in providing a solution. The research on implementing and integrating these emerging technologies is still in the early stages, however, and additional research effort will be required to realize this potential.

This review has identified that a system based on event cameras and neuromorphic processing presents a promising pathway for enhancing state-of-the-art solutions in SLAM. The unique features of event cameras, such as adaptability to changing lighting conditions, support for high dynamic range and lower power consumption due to the asynchronous nature of event data generation are the driving factors that can help to enhance the performance of the SLAM system. In addition, neuromorphic processors, which are designed to efficiently process and support parallel incoming event streams, can help to minimize the computational cost and increase the efficiency of the system. Such a neuromorphic SLAM system has the possibility of overcoming significant obstacles in autonomous navigation, such as the need for quick and precise perception, while simultaneously reducing problems relating to real-time processing requirements and energy usage. Moreover, if appropriate algorithms and methods can be developed, this technology has the potential to transform the realm of mobile autonomous systems by enhancing their agility, energy efficiency, and ability to function in a variety of complex and unpredictable situations.


5.2. Current State-of-the-Art and Future Scope


During the last few decades, much research has focused on implementing SLAM based on frame-based cameras and laser scanners. Nonetheless, a fully reliable and adaptable solution has yet to be discovered due to the computational complexities and sensor limitations, leading to systems requiring high power consumption and having difficulty adapting to changes in the environment, rendering them impractical for many use cases, particularly for mobile autonomous systems. For this reason, researchers have begun to shift focus to finding alternative or new solutions to address these problems. One promising direction for further exploration was found to be the combination of an event camera and neuromorphic computing technology due to the unique benefits that these complementary approaches can bring to the SLAM problem.

The research to incorporate event cameras and neuromorphic computing technology into a functional SLAM system is, however, currently in the early stages. Given that the algorithms and techniques employed in conventional SLAM approaches are not directly applicable to these emerging technologies, the necessity of finding new algorithms and methods within the neuromorphic computing paradigm is the main challenge faced by researchers. Some promising approaches to applying event cameras to the SLAM problem have been identified in this paper, but future research focus needs to be applied to the problem of utilizing emerging neuromorphic processing capabilities to implement these methods practically and efficiently.


5.3. Neuromorphic SLAM Challenges

Developing SLAM algorithms that effectively utilize event-based data from event cameras and harness the computational capabilities of neuromorphic processors presents a significant challenge. These algorithms must be either heavily modified or newly conceived to fully exploit the strengths of both technologies. Furthermore, integrating data from event cameras with neuromorphic processors and other sensor modalities, such as IMUs or traditional cameras, necessitates the development of new fusion techniques. Managing the diverse data formats, temporal characteristics, and noise profiles from these sensors while maintaining consistency and accuracy throughout the SLAM process will be a complex task.
In terms of scalability, expanding event cameras and neuromorphic processor-based SLAM systems to accommodate large-scale environments with intricate dynamics will pose challenges in computational resource allocation. It is essential to ensure scalability while preserving real-time performance for practical deployment. Additionally, event cameras and neuromorphic processors must adapt to dynamic environments where scene changes occur rapidly. Developing algorithms capable of swiftly updating SLAM estimates based on incoming event data while maintaining robustness and accuracy is critical.

Leveraging the learning capabilities of neuromorphic processors for SLAM tasks, such as map building and localization, necessitates the design of training algorithms and methodologies proficient in learning from event data streams. The development of adaptive learning algorithms capable of enhancing SLAM performance over time in real-world environments presents a significant challenge. Moreover, ensuring the correctness and reliability of event camera and neuromorphic processor-based SLAM systems poses hurdles in verification and validation. Rigorous testing methodologies must also be developed to validate the performance and robustness of these systems. If these challenges can be overcome, the potential rewards are significant, however.


Author Contributions: Conceptualization, S.T., A.R. and D.C.; methodology, S.T.; validation, S.T., A.R. and D.C.; formal analysis, S.T.; investigation, S.T.; writing—original draft preparation, S.T.; writing—review and editing, S.T., A.R. and D.C.; supervision, A.R. and D.C. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.

Data Availability Statement: Not applicable.

Acknowledgments: This publication is partially supported by an ECU School of Engineering Scholarship.

Conflicts of Interest: The authors declare no conflict of interest.
 



MegaportX

Regular
Later this month
 


JDelekto

Regular
I am posting this for anyone concerned. If you hold shares of BRCHF on the OTC markets, make sure that you check with your trading platform.

I sent an e-mail to Tony explaining my dilemma with the May 19th, 11:00 AM deadline for proxy votes (which does appear to be the case) and how I had not yet received proxy voting materials from Fidelity, as I had last year.

He was kind enough to immediately forward my inquiry to someone at Boardroom Limited, who is their share registrar. I was surprised by the answer that I received, which was somewhat clarified by a follow-up response.

I was originally told that the shares I owned did not have voting rights due to the OTC nature of the holding, that neither my name nor Fidelity's appeared on the register, and that they had not been included in any previous AGMs.

This had me in a panic, as now I did not know whether I owned any shares of the stock and wanted to get to the bottom of this. Shares of stock sold on the OTC markets can indeed provide voting rights and dividends, and nowhere in my purchase of the stock on Fidelity was it disclosed that I did not hold the underlying shares or that voting rights were not conferred to me for those shares.

In a follow-up e-mail, I was advised that the OTC markets are used when a foreign company is not listed on an exchange in the US but its shares are traded via market makers. As such, I'm not directly listed on the register. As explained, my holdings through Fidelity were likely held through a sub-custodian in Australia (JP Morgan and HSBC were cited as examples), with me being the beneficial owner of the shares.

From some research, a beneficial owner is the "person who enjoys the benefits of ownership even though the title to some form of property is in another name". This means, as per the SEC, that I should directly, or indirectly have the power to vote or dispose of the stock.

I was directed back to Fidelity, who I need to speak to and get final clarification on this. As Fidelity was able to provide proxy materials last year (although past the time to vote), it is looking as if they are the direct entity I need to deal with. As it stands now, there is no way I will have these materials in time to vote this year.

Also, Tony is going to inquire about DTC eligibility. He said that as far as he was aware, this is currently in place as it was last year. He is going to inquire with the CFO to be sure. If they have this eligibility in place, and Fidelity has been charging me these $50 transaction fees (which were waived last year), then someone from Fidelity will have much explaining to do.

All roads are starting to lead to Fidelity -- they are not on my Christmas list this year.

Edit: typo
 
Is it really the year 2024?...
 

Tothemoon24

Top 20

Autonomous Drone Using Neuromorphic AI


By Akanksha Gaur
May 17, 2024





This suggests a future where drones could become as small and agile as insects or birds, offering potential applications in areas like greenhouse monitoring and warehouse inventory management.

Researchers at Delft University of Technology have developed a drone that flies autonomously using neuromorphic image processing and control inspired by animal brains. Unlike current deep neural networks running on GPUs, animal brains use less data and energy. Neuromorphic processors are suitable for small drones because they require less hardware and battery power. The drone’s deep neural network processes data up to 64 times faster and uses three times less energy than a GPU. Further advancements may enable drones to become as small and agile as insects or birds.
Spiking neural networks, inspired by the brain’s processing methods, hold potential for autonomous robots. Current AI relies on deep neural networks requiring significant computing power and energy, a limitation for small robots like drones. Animal brains process information asynchronously and communicate via electrical pulses, or spikes, which are energy-efficient. Scientists and tech companies are developing neuromorphic processors to run spiking neural networks, promising faster and more energy-efficient performance. According to Jesse Hagenaars, a Ph.D. candidate involved in the study, spiking neural networks simplify calculations, making them quicker and more energy-efficient compared to standard deep neural networks.

Vision and Control​

Neuromorphic processors’ energy efficiency increases when combined with neuromorphic sensors, like cameras that only send signals when pixels change brightness. These cameras perceive motion quickly, are energy-efficient, and work well in varying light conditions. Researchers from Delft University of Technology demonstrated a drone using neuromorphic vision and control for autonomous flight. They developed a spiking neural network that processes signals from a neuromorphic camera, determining the drone’s pose and thrust. The network runs on Intel’s Loihi neuromorphic research chip.
Training the spiking neural network involved self-supervised learning and artificial evolution in a simulator. The drone can fly at different speeds and under varying light conditions. Measurements confirm the potential of neuromorphic AI. The network runs significantly faster and more efficiently on the neuromorphic chip compared to a GPU, consuming less power. This enables deployment on smaller autonomous robots.
Guido de Croon, Professor in bio-inspired drones, highlights that neuromorphic AI is crucial for tiny autonomous robots. Applications include monitoring crops in greenhouses and tracking stock in warehouses. The current work is a step towards achieving these applications, with further developments needed to scale down hardware and expand capabilities.
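
The network in this work runs on Intel's Loihi and is considerably more elaborate, but the basic building block the article keeps referring to can be illustrated with a toy leaky integrate-and-fire (LIF) neuron in plain Python. The threshold, leak factor and random input below are arbitrary illustrative values, not parameters from the TU Delft study or from Loihi.

```python
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.95):
    """Minimal leaky integrate-and-fire neuron.

    The membrane potential leaks toward zero, integrates its input, and
    emits a binary spike (then resets) whenever it crosses the threshold,
    so the output is sparse and event-driven rather than a dense activation.
    """
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration
        if v >= threshold:
            spikes.append(1)      # spike event
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
print(lif_neuron(rng.uniform(0.0, 0.4, size=20)))
```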
 

Frangipani

Regular

👆🏻The University of Washington in Seattle, interesting…

UW’s Department of Electrical & Computer Engineering had already spread the word about a summer internship opportunity at BrainChip last May. In fact, it was one of their graduating Master’s students, who was a BrainChip intern himself at the time (April 2023 - July 2023), promoting said opportunity.

I wonder what exactly Rishabh Gupta is up to these days, hiding in stealth mode ever since graduating from UW & simultaneously wrapping up his internship at BrainChip. What he has chosen to reveal on LinkedIn is that he now resides in San Jose, CA, and is "Building next-gen infrastructure and aligned services optimized for multi-modal Generative AI", and that his start-up intends to build said infrastructure and services "to democratise AI"… 🤔 He certainly has a very impressive CV so far, as well as first-hand experience with Akida 2.0, given the time frame of his internship and him mentioning vision transformers.







Meanwhile, yet another university is encouraging their students to apply for a summer internship at BrainChip:






I guess it is just a question of time before USC will be added to the BrainChip University AI Accelerator Program, although Nandan Nayampally is sadly no longer with our company…

Remember this June 2023 This is our Mission podcast?


After introducing his guest, who also serves as the Executive Vice Dean at the USC Viterbi School of Engineering, Nandan Nayampally says “You know, we go back a long way … in fact, we had common alma maters.” (03:14 min)

Gaurav Sukhatme:
From 25:32 min: “I think the partnership between industry and academia is crucial here to make progress.”

From 27:13 min: “You know, companies like yours, like Brainchip, what you are doing with the University Accelerator Program, I like very much - in fact, we’re looking into it, as you know, we’ll be having a phone [?] conversation about exploring that further. I think programs like that are unique and can really make the nexus between a leading company and academia sort of be tighter and be stronger.”

At the end of the podcast, Nandan Nayampally thanks his guest for sharing his insights and closes with the words “…and hopefully we’ll work together much closer soon.” (35:15 min)
 

IloveLamp

Top 20

Autonomous Drone Using Neuromorphic AI
Pretty sure that one's Loihi 2
 

IloveLamp

Top 20

Autonomous Drone Using Neuromorphic AI
Not sure if it's the same one, but

 

rgupta

Regular
I have also been in for 9 years and I could not think of anything worse than "voting for a spill".

I mean, IMO I would be kissing goodbye to my 9 years of investing in a company that I totally believe in. As frustrating as it has been, I think we really are so close to GREATNESS! Why would anyone serious about their investment want to create financial ruin by this action?

None of us know what is happening behind closed doors? A deal or two could be in the last stages of getting finalised???

Could you imagine if there was a spill, and the CEOs of companies we could be dealing with then turn around and say, "Hey, the deal is off until we see some stabilisation back within BRN"... back we go!

I know we all have our own reasons and thoughts for our own investment strategies but personally I will not let my total frustration with the company at the moment get in the way of the 'Big picture ahead' ....I am very bullish on what is ahead for our Akida... 'Second to none technology'


My thoughts

Good luck Chippers
You are 100% right. To me the SP is low because everyone wants to see clear skies.
On the other hand, we have passed more than 80% of the journey to commercialisation, and any wrong expectations from shareholders could put us at least 6 months to 1 year behind; on top of that, who knows, business politics stirred up by competitors could lead to the failure of the company.
I would ask every holder to consider carefully what they are asking for.
The technology is definitely worth the wait, so don't do anything silly.
Dyor
 

Frangipani

Regular

Autonomous Drone Using Neuromorphic AI



Pretty sure that one's Loihi 2
Yes, that’s correct:





The paper's first author works for Sony Switzerland and was a PhD student at TU Delft until November 2023; his co-authors are researchers at TU Delft, led by Guido de Croon.


 

rgupta

Regular
Dear Shareholder,
IMPORTANT VOTING REMINDER - VOTE NOW
You should have received information regarding BrainChip Holdings Ltd's ("BrainChip") 2024 Annual General Meeting which will be held on Tuesday, 21st of May 2024.
Your vote is very important this year and the Board is strongly encouraging all shareholders to participate and vote, no matter the size of your holding. Whether or not you choose to attend the meeting, the Board encourages you to lodge your vote below.

Your proxy vote needs to be received by 11.00am (Sydney time) on Sunday, 19th of May to be valid.

My 700,000 votes may not matter, but given the 'importance' of everyone voting, I suggest you do so based on your own reasons and judgement.
The only worry I have is about proxy votes, e.g. the shares in my super are voted by AustralianSuper; I believe they will not be idiots about it, but they can vote against my wishes for my holdings.
Dyor
 

IloveLamp

Top 20
SwRI engineers are implementing feature extraction algorithms on advanced platforms, including neuromorphic processing hardware. Neuromorphic computing systems use spiking neural networks to emulate how the human brain retains “memories,” making processing faster, more accurate and efficient.

“We are working to provide the Air Force with efficient and resilient cognitive EW solutions,” said SwRI’s Dr. Steven Harbour, who is applying his doctorate in neuroscience to lead the development of neuromorphic systems.

“We are implementing neuromorphics in hardware to be used for the first time in an operational combat environment. It puts us well ahead of our adversaries. To the best of our knowledge, we are the first in the world to do this.”



 

Frangipani

Regular
Interesting to see Masquelier (ex Spikenet) and Benosman (ex Prophesee) as supervisors.

Actually, no. Neither of them was his supervisor.
Benoît Miramond from his uni lab (LEAT) was his thesis director (= the main supervisor) and Philippe Thierion from his industrial workplace (Renault Software Factory) was the co-supervisor.




Timothée Masquelier was one of the jury’s two rapporteurs, the reviewers who are being sent a copy of the PhD thesis in advance to evaluate its quality. They cannot be directly involved in the PhD student’s work (although Loïc Cordone had evidently been in contact with Timothée Masquelier for quite some time before submitting his thesis, resulting in fruitful discussions - see his remark in Acknowledgements).

Their reports decide whether or not the PhD student will get the doctoral school’s permission to hold the defence. Once it is granted, the defence has to take place in public (in rare cases, an exemption is granted) and in front of a jury, which in Loïc Cordone’s case comprised both rapporteurs, his two supervisors (who don’t have a say in the final decision, though) as well as four more renowned experts in the PhD dissertation’s field of research, acting as examiners alongside the other jury members in the discussion/Q&A session following the examinee’s oral presentation.

Ryad Benosman was such an examiner in that December 2022 PhD defense jury.

 

Frangipani

Regular




The Southwest Research Institute (SwRI) has been closely collaborating with Intel for quite some time…

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-418092
 