BRN Discussion Ongoing

Some good things have happened over the last 12 months for sure, but we haven't been privy to enough information for me to give our BOD a clear thumbs up. I don't know if Sean and Antonio are the best people to take BrainChip to the next level. I will be looking for them to give some form of revenue projection at the upcoming AGM, with clear timelines of what has been achieved and what is to come.

Sean said at the beginning of last year that 2024 was a critical year for BrainChip, and I think he said the same at the beginning of this year. We deserve to know what he meant by critical, whether the targets set for last year were met, and whether they are on track to meet this year's targets. There has to be some visible accountability.

Our previous CEO Lou Dinardo did well, and from what I've seen Sean has done well building the ecosystem and partner network. But are we at a stage where we need someone with different skills? I don't know, because I have no way of knowing how well Sean is doing. I do like that Tony Lewis is actively promoting BrainChip online. Why hasn't Sean got a social media presence? I also haven't heard a peep out of the other members of the board. Do we now need a CEO with more confidence and charisma? I will make up my mind after I've listened to what Sean and Antonio say at this year's AGM.

Diogenese

Top 20
Mercedes have a new SDV CLA due out soon.

It will not contain Akida 1 simulation software, as Mercedes has said NN simulation software is too power hungry, which I take at face value (ie, not a red herring). TENNs is amenable to being applied as software, but I don't know whether the Mercedes "embargo" on NN S/W extends to TENNs. TENNs is significantly less power hungry than Akida.

Akida 1 is improbable as a future choice for Mercedes because of the rapid evolution of the tech, eg Akida 2/TENNs.

Akida 2 (including TENNs) has not been reduced to silicon as far as we know.

Mercedes has also been developing chiplet technology, so this is a possible future path of development.


WO2024230948A1 AUTONOMOUS VEHICLE SYSTEM ON CHIP 20230510

Applicants: MERCEDES BENZ GROUP AG [DE]; Inventors: PIEDNOEL FRANCOIS [US]


[0046] In various examples, the system on chip 300 can include additional chiplets that can store, alter, or otherwise process the sensor data cached by the sensor data input chiplet 310. The system on chip 300 can include an autonomous drive chiplet 340 that can perform operations to determine the physical characteristics of the environment around the sensors. These operations can include perception, sensor fusion, trajectory prediction, and/or other autonomous driving algorithms of an autonomous vehicle. To perform these operations, the autonomous drive chiplet 340 can include specialized hardware such as digital signal processors (DSP), a direct memory access (DMA) engine, and neural network (NN) accelerators. The autonomous drive chiplet 340 can be connected to a dedicated HBM-RAM chiplet 335 in which the autonomous drive chiplet 340 can publish all status information, variables, statistical information, and/or processed sensor data as processed by the autonomous drive chiplet 340.
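
To picture the data flow in [0046], here's a toy sketch of my own (all names invented, nothing to do with Mercedes' actual implementation): a sensor-input chiplet caches raw data, the autonomous-drive chiplet processes it, and results are published to a dedicated HBM-RAM chiplet.

```python
# Toy model of the chiplet arrangement in paragraph [0046].
# Purely illustrative; class and method names are invented.
from dataclasses import dataclass, field

@dataclass
class HBMRAMChiplet:
    """Dedicated memory chiplet the drive chiplet publishes into."""
    published: dict = field(default_factory=dict)

    def publish(self, key, value):
        self.published[key] = value

@dataclass
class SensorDataInputChiplet:
    """Caches incoming sensor data (cameras, radar, lidar, ...)."""
    cache: list = field(default_factory=list)

    def ingest(self, frame):
        self.cache.append(frame)

@dataclass
class AutonomousDriveChiplet:
    """Runs perception / fusion / prediction and publishes results."""
    hbm: HBMRAMChiplet

    def process(self, frames):
        # Stand-in for the DSP / DMA / NN-accelerator work.
        perception = [f"objects detected in {f}" for f in frames]
        self.hbm.publish("processed_sensor_data", perception)
        self.hbm.publish("status", "ok")

hbm = HBMRAMChiplet()
sensors = SensorDataInputChiplet()
drive = AutonomousDriveChiplet(hbm)
sensors.ingest("camera frame 0")
drive.process(sensors.cache)
print(hbm.published)
```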

Chiplets can be designed to be clip-in, ie, not soldered, just held in place by a spring clip, which would make it easier to replace them with more advanced chiplets as the tech develops.

Thus Mercedes SDV could have a PCB with soldered Google/Qualcomm/Nvidia processors and memory, and a clip-in NN chiplet, or even a dummy clip-in chiplet/receptacle ready for a future chiplet.
Not that we should hang our collective hat on Mercedes. They are interesting as being one of the few who have given the world a peek behind the veil and the kudos which came with that. And, of course it is encouraging that we are still friends. However, we know that MB have been playing away as far as processors in general are concerned, so nothing is guaranteed until the contract is inked.

There are more concrete links with QV CyberNeuro-RT (US DoE & M2), ESA/Frontgrade, Prophesee (nice to see you again ... it's been a while), BH SNAP 3U VPX, VVDN ...

And I'd like to see some proof of life from Valeo, Renesas, MegaChips, TATA/TCS, and several others.
I appreciate those who feed us information on here and try hard to keep positive. I also appreciate the fact that Brn is growing its ecosystem and partnerships.
However, it is evident, to me, that the SP will not budge unless BRN posts decent revenue growth or a huge contract deal.
Surely, by now, the 100 EAPs would have brought us closer to achieving IP sales.

Very frustrating.
I share your frustration. Even allowing for the adoption lag with IP sales ...

However, I do think the M2 QV CyberNeuro-RT mini PCB does have huge imminent potential, even greater if brought out in USB form.
They did say back in 2020 that they would be bringing out a USB stick.


manny100

Top 20
So how does that work for TENNs, which dates back a couple of years?
I mean in the context of bashing management's performance in obtaining client validation prior to Sept '24.
Since Sept '24 we have had some significant client validation = good recent performance.
I'm guessing that, because EI works with several companies who are mutual competitors, they would need to have established Chinese walls to prevent cross-company information flow to protect each company's IP/confidential information, and I'm hoping that these restrictions continue under the new regime.

Do you mean they have NDAs in place and are still a money-making company?! :eek:


Bravo

If ARM was an arm, BRN would be its biceps💪!


Hi @Tothemoon24,

TCS Research is a really interesting prospect for us for sure. I'm still trying to work out where they might fit into the following, if at all.

Let me try and explain.

Here's an article published online 11 hours ago about Ubotica and NASA's new AI-powered autonomous satellite solution, which also comes with an ability to analyse cloud cover.

As it happens, in May 2023 Tothemoon posted an article about Ubotica and Qualcomm, who were working with NASA's JPL to develop AI algorithms that could help future space missions process raw data more efficiently to detect events like volcanic eruptions, wildfires, flooding, harmful algal blooms, dramatic snowfalls, etc. You can find out more in the link here, where I replied to TTM's post with further comments on potential links between this project and other companies such as Airbus, NimbleAI and the MESEO project.



Anyhooo, back to TCS Research and where they might fit in with all of this, if they even do!!!

Today, when reading about NASA and Ubotica's new autonomous system and its ability to analyse clouds, all I could think about was the TCS Research paper entitled "Low Power & Low Latency Cloud Cover Detection in Small Satellites Using On-board Neuromorphic Processors" (see below).

The research paper states: "We design and train a CNN and convert it into SNN using the CNN2SNN conversion toolkit of Brainchip Akida neuromorphic platform. We achieve 95.46% accuracy while power consumption and latency are at least 35x and 3.4x more efficient respectively in stage-1 (and 230x & 7x in stage-2) compared to the equivalent CNN running on Jetson TX2." Sheesh!
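
For anyone curious what that CNN2SNN workflow actually looks like, here's a minimal sketch using BrainChip's cnn2snn toolkit (MetaTF). The tiny model and the 4-bit quantization settings are my own invention for illustration, not the paper's network, and exact APIs vary between toolkit releases.

```python
# Minimal sketch of the CNN -> SNN flow the TCS paper describes, using
# BrainChip's cnn2snn toolkit. The toy model below is invented for
# illustration; it is not the paper's cloud-detection network.
from tensorflow import keras
from cnn2snn import quantize, convert

# 1. An ordinary Keras CNN (stand-in for the paper's cloud-cover classifier).
model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(2, activation="softmax"),  # cloudy / clear
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
# model.fit(train_images, train_labels, epochs=10)  # training elided

# 2. Quantize weights/activations to low-bit integers, as Akida requires
#    (4-bit here is an assumed setting, not the paper's).
model_q = quantize(model, weight_quantization=4, activ_quantization=4)

# 3. Convert the quantized network into an Akida (SNN) model.
akida_model = convert(model_q)
# akida_model.predict(...) then runs on Akida silicon or its simulator.
```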

I guess my line of thinking was that we are still partners with both NASA and TCS, and maybe, thanks to these existing partnerships, we are working behind the scenes, under the cover of darkness (or extreme cloudiness as the case may be🤭) with the likes of Ubotica and QUALCOMM.

I can't help but think that it's BrainChip's technology, not Qualcomm's, which appears to be gaining traction in the defence and space sectors, and endorsements from the likes of Lockheed Martin, Bascom Hunter, Airbus, Frontgrade Gaisler, Quantum Ventura, etc., might be enough to entice Qualcomm to say:

what-shes-having-the-same.gif



Just random thoughts; nothing more, nothing less..

7für7

Top 20
Bravo, you know what? Basically, everyone’s talking behind your back in the BrainChip community…

“Look how much she’s researching.”

“Yeah yeah, she’s tirelessly posting.”

“Did you see yesterday and today how much effort she’s putting into posting?”

“Yeah, I’ve been wondering for a while now if she even has TIME TO GO JOGGING ANYMORE.”

Yeah, that’s exactly what’s going on, Bravo! YOU DON’T JOG ANYMORE!! JUST LOOK AT THE STOCK PRICE AND YOU’LL KNOW WHAT’S UP!!!


7für7

Top 20

So… basically no denoising on this clip .. can someone confirm that?
 

Diogenese

Top 20
You want fries with that?

Apparently Ubotica has a tie-up with a failing US semiconductor company which may be taken over by Qualcomm:

https://ubotica.com/ubotica-cognisat-xe2/
The Ubotica CogniSAT-XE2 On-Board AI Payload Coprocessor brings the power of Myriad X’s Computer Vision (CV) and Artificial Intelligence (AI) compute acceleration to a PC/104 form-factor for SmallSat and CubeSat missions. It is built around the Intel® Movidius™ Myriad™ X CV and AI COTS Vision Processing Unit (VPU) whose 16 vector cores and application-specific hard blocks provide high-performance parallel and hardware accelerated compute within a low power envelope.

Intel® Movidius™ Myriad™ X Vision Processing Unit (VPU)


The Intel® Movidius™ Myriad™ X VPU is Intel's first VPU to feature the Neural Compute Engine — a dedicated hardware accelerator for deep neural network inference. The Neural Compute Engine in conjunction with the 16 powerful SHAVE cores and high throughput intelligent memory fabric makes Intel® Movidius™ Myriad™ X ideal for on-device deep neural networks and computer vision applications.
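
For context, the Myriad X is typically programmed through Intel's OpenVINO toolkit; older releases (up to roughly 2022.3, before the MYRIAD device plugin was dropped) targeted it along these lines. A rough sketch only, with a placeholder model path and dummy input:

```python
# Sketch of running inference on a Myriad X VPU via OpenVINO's MYRIAD plugin
# (available up to roughly OpenVINO 2022.3). Paths and shapes are placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")                 # OpenVINO IR; placeholder path
compiled = core.compile_model(model, "MYRIAD")       # target the Myriad X VPU

request = compiled.create_infer_request()
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input
results = request.infer({0: dummy})                   # inference offloaded to the VPU
```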

Slade

Top 20
So… basically no denoising on this clip .. can someone confirm that?
Jeez u are annoying. Watch the clip; it provides a great live demo of Akida performing denoising.

Townyj

Ermahgerd


Ok... that denoising demo is awesome.

Tothemoon24

Top 20
Nice to see the top brass at Tata posting about us


7für7

Top 20
Nice to see the top brass at Tata posting about us


Hm? What? Wow… another praising post about Akida. Nice. Thank you Mr. TATA!! Aaaaaaaand—good night.








Ps..
If anyone is wondering what’s going on… don’t worry, I’m just trying some reverse psychological tactics… I call them the Dolcise Principle.





So… GO BRAINCHIP!

Sorry.. I mean..No don’t go…

Bravo

If ARM was an arm, BRN would be its biceps💪!
So weird.

I was just checking out the latest news about Trump trying to support Elon Musk's Tesla brand, as the company has become a target in the last few days for Americans who are incensed about Elon's handling/mishandling of DOGE.

So, Trump has tried "bigging up" Tesla by "purchasing" a Tesla of his very own. LOL.

Anyway, I just wanted to mention this because I was trying to find out more about the max distance electric vehicles can potentially achieve per charge. I believe Mercedes holds the current record with the EQXX, in which BrainChip featured.

To cut a long story short, whilst attempting this research I stumbled upon a previous post from July 2022 (#91,953) and found it very interesting, to say the least. What's interesting is that Trump actually said at the time that Elon's "electric cars don't drive long enough".

Suffice to say, I am merely trying to point out that Mercedes enlisted BrainChip's help with their EQXX, saying the following in the EETimes article linked here.
The article states “Although neuromorphic computing is still in its infancy, systems like these will be available on the market in just a few years,” according to Mercedes. “When applied at scale throughout a vehicle, they have the potential to radically reduce the energy needed to run the latest AI technologies.”

If Elon doesn't have us on his radar, perhaps he should, since AI will feature heavily in every electric vehicle in the future and therefore has the potential to affect the max distance per charge if not managed correctly.

Trump hits back at Elon Musk, says he could have made him 'drop to his knees and beg'

By Lee Brown
Published July 13, 2022
Updated July 13, 2022, 9:43 a.m. ET



Former President Donald Trump has hit back at Elon Musk for calling him too old to run again — claiming he could have made the world’s richest man “drop to [his] knees and beg” when he was in the White House.
Trump used his Twitter rival Truth Social to attack Musk, 51, in an ongoing war of words that on Monday saw the Tesla mogul saying it was “time for Trump to hang up his hat & sail into the sunset.”
“When Elon Musk came to the White House asking me for help on all of his many subsidized projects, whether it’s electric cars that don’t drive long enough, driverless cars that crash, or rocketships to nowhere, without which subsidies he’d be worthless, and telling me how he was a big Trump fan and Republican, I could have said, ‘drop to your knees and beg,’ and he would have done it,” the 45th commander-in-chief claimed.
“Now Elon should focus on getting himself out of the Twitter mess because he could owe $44 billion for something that’s perhaps worthless,” Trump wrote of Musk, who faces legal action after pulling out of his much-hyped offer to buy the social media giant.

Former President Donald Trump on Tuesday escalated his feud with billionaire Elon Musk in a series of Truth Social posts.

“Also, lots of competition for electric cars!” Trump insisted of Tesla.

“P.S. Why was Elon allowed to break the $15 million stock purchase barrier on Twitter without any reporting? That is a very serious breach!” the former president added.

“Have fun Elon and [Jack Dorsey] go to it!” he wrote, referring to the Twitter founder who had supported Musk’s plans to take over.


Frangipani

Top 20


Some still images from the video clip, showing the three demos:

1. Dynamic Gesture Recognition Demo on the new Akida 2.0 FPGA:





2. “Wake Word, ASR & LLM - a TENNs Story
Performance on the Edge using TENNs”
(LLM FPGA Tech Demonstrator, showing a 1B parameter LLM trained from scratch at BrainChip and running not in software simulation, but on an FPGA, unconnected to the internet and requiring such a small amount of power it could run on a watch battery; the FPGA is running at about 1/10th the speed that the ASIC would run at)




3. Audio Denoising Demo (TENNs):





Tezza

Regular
So you signed some deals whilst all these customers were swamping you?

Frangipani

Top 20
Hi Pom and all.
I don't quite get the relevance of gesture recognition.
What are the practical applications that it addresses?
I can see that it may be a handy aid for deaf people who sign, and perhaps in the vacuum of space, where sound has no medium to propagate in.
Is it something to do with the proposed Nintendo gaming system?
I am not familiar with it so may be missing something relevant there.
Appreciate it if anyone here can enlighten me.

Here’s what I found to be a good overview:


Some excerpts:

What is Gesture Recognition?​

Gesture recognition refers to the technology that interprets human gestures, such as hand movements, facial expressions, or body language, through mathematical algorithms. It enables humans to interact with machines and computers without using mechanical devices like keyboards, mice, or touchscreens. Gesture recognition works by using cameras and sensors to pick up movements from parts of the body like hands or the face. These movements are turned into digital data that computers can understand.

(…)

Gesture Recognition and Detection Technologies​

  • Sensor-Based Hand Gesture Recognition: A sensor-based gesture recognition program detects and analyses human gestures. This can be accomplished using a variety of sensors, including cameras, infrared sensors, and accelerometers. These sensors gather information about the movement and location of a person's body or limbs, which the algorithm subsequently utilizes to recognize specific motions.
  • Vision-Based Hand Gesture Recognition: A vision-based gesture recognition system detects and interprets motions using cameras or other visual sensors. The cameras collect photos or videos of the user's gestures, which are then analyzed and identified using computer vision and machine learning techniques.

Gesture Recognition Examples and Uses​

  • Smart TVs: Modern smart TVs use gesture recognition, allowing viewers to switch channels, adjust the volume, or browse through menus with simple hand movements. This means you don’t always need to use a remote control, making it more convenient and accessible.
  • Home Automation Systems: In smart homes, gesture recognition enhances user interaction by enabling control over the home environment. For instance, waving your hand can turn lights on or off, adjust the thermostat, or manage your home entertainment systems, integrating seamlessly with smart home technology for improved convenience and energy efficiency.
  • Gaming Consoles: Devices like the Microsoft Kinect have transformed gaming, providing a motion-controlled gaming experience where players use their body movements to interact with the game. This adds a level of physical activity and immersion to gaming, making it more engaging and interactive.
  • Automotive: Modern cars incorporate gesture recognition for safer and more convenient control of various features. Drivers can execute commands like adjusting the stereo volume, changing air conditioning settings, or answering phone calls with simple hand gestures, minimizing distractions and enhancing focus on driving.
  • Virtual Reality (VR) and Augmented Reality (AR): These technologies heavily rely on gesture recognition for user interaction. In VR and AR environments, users can manipulate objects, navigate menus, or control applications through gestures, creating a more immersive and interactive experience without needing physical controllers.
  • Kitchen Appliances: Advanced kitchen gadgets are adopting gesture recognition, allowing for hands-free operation. For example, with a wave of your hand, you can operate microwaves, ovens, or smart faucets, adding convenience and hygiene to cooking and kitchen management.
(…)

Conclusion​

Gesture recognition is a technology that allows devices to understand and respond to human movements. Using advanced machine learning algorithms like CNNs and SVMs, it transforms physical gestures into digital commands, making interaction with gadgets more intuitive and seamless. This technology enhances user experience in smart homes, gaming, automotive, and virtual reality, among other areas. As we move towards more interactive and user-friendly technologies, gesture recognition stands out as a key player in bridging the gap between humans and machines, making our interactions more natural and efficient.
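
To make the vision-based approach described above concrete, here's a minimal sketch of my own (not from the article) of the classic capture-preprocess-classify loop: grab webcam frames with OpenCV and feed them to a trained gesture classifier. The model file "gesture_cnn.h5" and its label set are hypothetical.

```python
# Minimal sketch of a vision-based gesture recognition loop: capture frames,
# preprocess, classify with a trained CNN. Illustrative only - the model file
# "gesture_cnn.h5" and the LABELS list are hypothetical.
import cv2
import numpy as np
from tensorflow import keras

model = keras.models.load_model("gesture_cnn.h5")   # hypothetical classifier
LABELS = ["swipe_left", "swipe_right", "thumbs_up", "none"]

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess: resize and normalize to the classifier's input shape.
    x = cv2.resize(frame, (64, 64)).astype(np.float32) / 255.0
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
    gesture = LABELS[int(np.argmax(probs))]
    if gesture != "none":
        print("Detected gesture:", gesture)  # here: trigger the device action
    if cv2.waitKey(1) & 0xFF == ord("q"):    # press 'q' to quit
        break
cap.release()
```

Real systems add hand detection/segmentation before classification, and temporal models for dynamic gestures, but the overall loop is the same.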





Apart from the use cases listed above, human-robot interaction comes to mind - think of the proof-of-concept the researchers from Fraunhofer HHI's Wireless Communications and Networks Department demonstrated with the help of Spot, the robot dog, as part of 6G-RIC (Research and Innovation Cluster), funded by Germany's Federal Ministry of Education and Research.

Human-robot interaction via gesture recognition is also of particular interest in the healthcare sector. Halfway through this August 2024 post (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-433491), I summarised a fascinating podcast I had listened to.
Here are some excerpts:

“I chanced upon an intriguing German-language podcast (Feb 1, 2024) titled “6G und die Arbeit des 6G-RIC” (“6G and the work of the 6G-RIC”) with Slawomir Stanczak as guest, who is Professor for Network Information Theory at TU Berlin, Head of Fraunhofer HHI’s Wireless Communications and Networks Department as well as Coordinator of the 6G Research and Innovation Cluster (6G-RIC):

https://www.ip-insider.de/der-nutze...ellschaft-a-cf561755cde0be7b2496c94704668417/

(…)

From 17:12 min onwards, the podcast host picks up the topic of connected robotics and mentions a collaboration with Charité Universitätsmedizin Berlin, which is Germany’s biggest (and very renowned) university hospital, regarding the development of nursing robots and their control via 6G.

Stanczak confirms this and shares with his listeners they are in talks with Charité doctors in order to simplify certain in-hospital-processes and especially to reduce the workload on staff. Two new technological 6G features are currently being discussed: 1. collaborative robots and 2. integrated communication and sensing (ICAS).

Stanczak and his colleagues were told that apart from the global nursing shortage we are already facing, it is also predicted that we will suffer a shortage of medical doctors in the years to come, so the researchers were wondering whether robots could possibly compensate for this loss.

The idea is to connect numerous nursing robots in order to coordinate them and also for them to communicate with each other and cooperate efficiently on certain tasks - e.g., comparatively simple ones such as transporting patients to the operating theatre or serving them something to drink [of a non-alcoholic nature, I presume 😉]. But the researchers even envision complex tasks such as several robots collaborating on turning patients in bed.

Telemedicine will also become more important in the future, such as surgeons operating remotely with the help of an operating robot [you may have heard about the da Vinci Surgical System manufactured by Intuitive Surgical], while being in a totally different location.
[Something Stanczak didn’t specifically mention, but came to my mind when thinking of robot-control via gesture recognition in a hospital setting, is the fact that it would be contactless and thus perfect in an operating theatre, where sterile conditions must be maintained.] (…)”


Think of a surgeon using hand gestures during an operation to instruct a medical assistant robot to pass him/her the correct surgical instruments.



Then there is the whole field of industrial robots.
Fortiss, for example, has an ongoing project in collaboration with NEURA Robotics and TU Chemnitz called CORINNE (Cobots’ Relational Interface with Neuromorphic Networks and Events) that “aims to build robots that can recognise and respond to gestures (known or unknown), to interact with humans on welding tasks”. They are using Loihi for that project running from April 2024 to March 2026, in case you wondered.

https://www.fortiss.org/en/research/projects/detail/corinne




So while gesture recognition and neuromorphic technology undoubtedly make a fruitful liaison, we as BRN shareholders won’t get to taste the sweetness of that ripe fruit until customers actually start signing on the dotted line.