BRN Discussion Ongoing

manny100

Regular
TENNs in layman's terms:
An explanation, in simple terms, of why TENNs are superior.
Spatial and Temporal Integration:
Imagine you’re watching a video.
Spatial information refers to what you see in each frame (like objects, colors, and shapes).
Temporal information is how things change over time (like motion, patterns, and sequences).
TENNs combine both aspects effectively. They’re like having eyes that not only see the picture but also understand how it changes from frame to frame.
Traditional Approaches:
Think of traditional methods as separate tools: one for pictures (CNNs) and another for understanding sequences (RNNs).
CNNs are great at recognizing objects in images but struggle with dynamic changes.
RNNs handle sequences well but have limitations like slow learning and memory issues.
TENNs Bridge the Gap:
TENNs are like a hybrid tool that merges the best of both worlds.
They process video frames while considering how things evolve over time. This makes them superior for tasks like detecting moving objects or understanding audio patterns.
In summary, TENNs are like smart glasses that see both the picture and the movie, making them better at handling sequential data!
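A toy way to picture that spatial-plus-temporal split in code. This is purely illustrative and is not BrainChip's actual TENN architecture; the kernels, shapes and function names are all made up. The idea sketched here is just a per-frame spatial filter followed by an across-frame temporal filter:

```python
# Toy illustration of the spatial-then-temporal factoring idea behind
# TENN-style models. All kernels and numbers here are made up.

def spatial_pass(frame, kernel=(-1.0, 2.0, -1.0)):
    """Per-frame processing: a 1-D edge-like filter along each row."""
    k = len(kernel) // 2
    out = []
    for row in frame:
        new_row = []
        for i in range(len(row)):
            acc = 0.0
            for j, w in enumerate(kernel):
                idx = i + j - k
                if 0 <= idx < len(row):
                    acc += w * row[idx]
            new_row.append(acc)
        out.append(new_row)
    return out

def temporal_pass(frames, kernel=(1.0, -1.0)):
    """Across-frame processing: a difference filter over time, per pixel."""
    out = []
    for t in range(len(frames) - len(kernel) + 1):
        frame = []
        for r in range(len(frames[0])):
            row = []
            for c in range(len(frames[0][0])):
                row.append(sum(w * frames[t + j][r][c]
                               for j, w in enumerate(kernel)))
            frame.append(row)
        out.append(frame)
    return out

# A 4-frame "video" (2 rows x 4 columns) of a bright pixel moving right.
video = [[[1.0 if c == t else 0.0 for c in range(4)] for _ in range(2)]
         for t in range(4)]
features = temporal_pass([spatial_pass(f) for f in video])
```

Real TENNs learn these kernels (and use far richer ones), but the split into a spatial pass over each frame and a temporal pass across frames is the core idea the explanation above describes.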
 
  • Like
  • Fire
  • Love
Reactions: 45 users

Getupthere

Regular

eeNews Europe — Renesas is taping out a chip using the spiking neural network (SNN) technology developed by Brainchip.​

Dec 2, 2022 – Nick Flaherty

This is part of a move to boost the leading-edge performance of its chips for the Internet of Things, Sailesh Chittipeddi, Executive Vice President and General Manager of the IoT and Infrastructure Business Unit at Renesas Electronics and former CEO of IDT, tells eeNews Europe.
This strategy has seen the company develop the first silicon for ARM’s M85 and RISC-V cores, along with new capacity and foundry deals.
“We are very happy to be at the leading edge and now we have made a rapid transition to address our ARM shortfall but we realise the challenges in the marketplace and introduced the RISC-V products to make sure we don’t fall behind in the new architectures,” he said.
“Our next move is to more advanced technology nodes to push the microcontrollers into the gigahertz regime, and that’s where there is overlap with microprocessors. The way I look at it is all about the system performance.”
“Now you have accelerators for driving AI with neural processing units rather than a dual core CPU. We are working with a third party taping out a device in December on 22nm CMOS,” said Chittipeddi.
Brainchip and Renesas signed a deal in December 2020 to implement the spiking neural network technology. Tools are vital for this new area. “The partner gives us the training tools that are needed,” he said.
 
  • Like
  • Fire
Reactions: 9 users

Getupthere

Regular

eeNews Europe — Renesas is taping out a chip using the spiking neural network (SNN) technology developed by Brainchip.​

Dec 2, 2022 – Nick Flaherty

Whatever happened to the tape-out?

I personally would have had a use-it-or-lose-it clause when signing an IP deal.
 
  • Like
  • Fire
  • Thinking
Reactions: 3 users

Fenris78

Regular
Whatever happened to the tape-out?

I personally would have had a use-it-or-lose-it clause when signing an IP deal.
Other than the demo with Akida and Arm's M85... who knows? It seems Renesas has used Arm Helium in preference to Akida... for now.

Could SiFive's Intelligence X390 processor with NPU be from this tape-out? "SiFive is playing a pivotal role in propelling the RISC-V industry into new frontiers of performance and applicability. By unveiling processors like the Performance P870/P870A and Intelligence X390, the company is not merely iterating on existing technology but is introducing transformative architectural innovations."

From 2022: "Renesas Electronics is looking to catch up in the ARM microcontroller and processor markets, but also looking at the emerging RISC-V cores and new spiking AI accelerators to boost machine learning in the Internet of Things (IoT)."
 
  • Like
  • Thinking
Reactions: 5 users

Rach2512

Regular
Are we in a partnership with Accenture? I knew we had done podcasts, but has it been mentioned anywhere that we are in a partnership? I must have fallen asleep if I missed this.
 

Attachments

  • Screenshot_20240818-191344_Samsung Internet.jpg (421.9 KB)
  • Like
  • Love
Reactions: 11 users

Diogenese

Top 20
TENNs in layman's terms:
An explanation, in simple terms, of why TENNs are superior.
Spatial and Temporal Integration:
Imagine you’re watching a video.
Spatial information refers to what you see in each frame (like objects, colors, and shapes).
Temporal information is how things change over time (like motion, patterns, and sequences).
TENNs combine both aspects effectively. They’re like having eyes that not only see the picture but also understand how it changes from frame to frame.
Traditional Approaches:
Think of traditional methods as separate tools: one for pictures (CNNs) and another for understanding sequences (RNNs).
CNNs are great at recognizing objects in images but struggle with dynamic changes.
RNNs handle sequences well but have limitations like slow learning and memory issues.
TENNs Bridge the Gap:
TENNs are like a hybrid tool that merges the best of both worlds.
They process video frames while considering how things evolve over time. This makes them superior for tasks like detecting moving objects or understanding audio patterns.
In summary, TENNs are like smart glasses that see both the picture and the movie, making them better at handling sequential data!
Thanks manny,

Another way of looking at it is by comparison with Prophesee's DVS camera.

It is possible to design a DVS to act as a still camera or a movie camera.

The following is based on a frame based camera to simplify the explanation, but a DVS has a continuously open light sensor pixel array.

The DVS has pixels which can be designed to produce a 1 or a 0 depending on the level of illumination: if the illumination exceeds a threshold value, the pixel turns ON (1), and if the illumination is below the threshold, the output is 0.

In the still camera mode, the DVS system can thus identify, within a single frame, where the illumination of adjacent pixels differs, generating a transition from 1 to 0 or from 0 to 1 depending on whether the change between adjacent pixels is from white to black or from black to white.

This mode will produce an outline of objects.

In the movie camera mode, the system assesses the changes in individual pixel illumination across successive frames, and produces the 1-to-0 or 0-to-1 transitions for the individual pixels whose illumination crosses the threshold either up or down between the two frames.

This mode will produce a moving outline of objects familiar from the Prophesee videos.

So, in the still mode, the system compares the illumination of adjacent pixels in a single frame, and in the movie mode, the system assesses the illumination changes in individual pixels in successive frames.

Late edit: An analogy for TENNs could be a first monitor system which monitors the magnitude of each individual pixel and generates a spike or event when there is a change across the threshold, and a second monitor which monitors the difference between adjacent pixels referred to the threshold. This arrangement would be capable of both classifying an object using the first monitor and tracking the object's movement using the second monitor. Incorporating the tracking feature in silicon relieves the CPU of a considerable processing load.

Now the asynchronous spiking bit comes in where the system does not run on frames, but has a continuously open sensor, so changes are registered as they occur and not on a frame basis, i.e., taking the time element into account.

Both Prophesee and Akida run in an asynchronous mode which eliminates the frame processing delay.

By taking a continuous stream of data, it is thus possible to track the motion of an object or, in the case of voice signals, to analyse passages of speech.
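The two modes described above can be sketched in a few lines of toy code. This is an illustrative sketch on stored frames only; a real DVS works asynchronously with analog pixels, and the threshold value and function names here are arbitrary:

```python
# Toy sketch of the two DVS-style modes: "still" (compare adjacent pixels
# within one frame) and "movie" (compare each pixel with itself across
# successive frames). Real DVS pixels are asynchronous analog circuits.

THRESHOLD = 0.5  # arbitrary illumination threshold

def binarize(frame):
    """Pixel turns ON (1) above the illumination threshold, else 0."""
    return [[1 if px > THRESHOLD else 0 for px in row] for row in frame]

def still_mode(frame):
    """1 wherever the binary value changes between horizontal neighbours,
    i.e. an outline of objects within a single frame."""
    out = []
    for row in binarize(frame):
        out.append([1 if c + 1 < len(row) and row[c] != row[c + 1] else 0
                    for c in range(len(row))])
    return out

def movie_mode(prev_frame, next_frame):
    """+1 for an OFF->ON threshold crossing at a pixel, -1 for ON->OFF,
    0 where the pixel did not cross the threshold between frames."""
    bp, bn = binarize(prev_frame), binarize(next_frame)
    return [[bn[r][c] - bp[r][c] for c in range(len(bp[0]))]
            for r in range(len(bp))]

# A bright 2-pixel "object" shifting right by one pixel between frames.
f0 = [[0.0, 0.9, 0.9, 0.0]]
f1 = [[0.0, 0.0, 0.9, 0.9]]
print(still_mode(f0))      # [[1, 0, 1, 0]] - object outline in one frame
print(movie_mode(f0, f1))  # [[0, -1, 0, 1]] - events only where it moved
```

Note how the movie mode output is sparse: only the pixels where illumination crossed the threshold generate events, which is exactly what makes the asynchronous, event-based approach so efficient.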
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 38 users

IloveLamp

Top 20

Attachments

  • 1000017790.jpg (381.6 KB)
Last edited:
  • Like
  • Fire
  • Love
Reactions: 15 users

Frangipani

Regular
Our friends at Fraunhofer HHI 👆🏻are part of the ongoing Berlin 6G Conference (July 2 - 4, 2024). While three of the above paper’s co-authors are session chairs or speakers, Yuzhen Ke and Mehdi Heshmati are giving live demos of Spiky Spot’s akidaesque gesture recognition skills at the conference expo.

View attachment 65934


View attachment 65935


The Berlin-based researchers have been rather busy conference-hopping in recent weeks: Stockholm (Best Demonstration Award), Antwerp, Denver, Berlin (2x).

I am just not sure whether they have been letting curious visitors to their booth in on the secret behind their obedient robotic dog... 🤔

View attachment 65937

View attachment 65938


View attachment 65939

Two weeks ago, our usual suspects from Fraunhofer HHI’s Wireless Communications and Networks Department gave a virtual presentation of their paper referenced in their robot dog gesture recognition demo video (from which we know they had utilised Akida) at yet another conference, the International Conference on Neuromorphic Systems (ICONS), hosted by George Mason University in Arlington, VA:

E19DC9D8-DE81-4403-BDE5-A9E5F2C173AF.jpeg



In their YouTube video, Zoran Utkovski describes their demo as “a proof-of-concept implementation of neuromorphic wireless cognition with an application to remote robotic control”, and recent conference presentations were titled “Gesture Recognition for Multi-Robot Control, using Neuromorphic Wireless Cognition and Sidelink Communication” and “Neuromorphic Wireless Cognition for Connected Intelligence”, respectively.

The words I marked in bold piqued my interest to dive a little deeper into the question of who would benefit from such research, as I don’t believe what others here and especially elsewhere (FF) have strongly suggested: that Spot’s manufacturer Boston Dynamics and/or South Korea’s Hyundai Motor Group (which acquired BD in June 2021) is/are the secret customer(s) behind this PoC, allegedly paying Fraunhofer HHI researchers a fee to experiment with Akida on their behalf because they must be keen on giving their four-legged mobile robot a neuromorphic “upgrade”.

The question you should ask yourselves is: Why would they outsource this type of research, when their own AI experts could easily play around with Akida at their own premises (unless they were buried in work they deemed more important)? Two years ago, the Hyundai Motor Group launched the Boston Dynamics AI Institute, headquartered in Cambridge, MA, to spearhead advancements in artificial intelligence and robotics. In early 2024, another office was opened in Zurich, Switzerland, led by Marco Hutter, who is also Associate Professor for Robotic Systems at ETH Zürich. Why - with all their AI and robotics expertise - would they need Fraunhofer HHI to assist them? Fraunhofer’s contract research is typically commissioned by small- and medium-sized companies that do not have their own R&D departments.

I suggest we let the facts speak for themselves:

The YouTube video’s description box basically says it all:
“(…) The followed approach allows for reduction in communication overhead, implementation complexity and energy consumption, making it amenable for various edge intelligence applications. The work has been conducted within the 6G Research and Innovation Cluster (6G-RIC), funded by the German Ministry of Education and Research (BMBF) in the program “Souverän. Digital. Vernetzt.” Find more information here: https://6G-ric.de”



53156B4E-0C35-4D93-8203-75AD99E46C81.jpeg


And here is a link to a download of a publication detailing the above-mentioned program “Souverän. Digital. Vernetzt.” (German only):


8631A6D0-9FFE-4621-A239-3F43FC7BAC7C.jpeg


So this publicly-funded PoC developed by five researchers from Fraunhofer HHI (the institution coordinating the 6G-RIC research hub) and Osvaldo Simeone from King’s College London is evidently about exploring future use cases that 6G will enable - cutting-edge research aiming “to help establish Germany and Europe as global leaders in the expansion of sustainable 6G technologies”. It is clearly not contract research commissioned by Boston Dynamics or Hyundai, with the intention of upgrading a product of theirs.

The 6G-RIC hub does have a number of illustrious industry partners, by the way, but neither BD nor Hyundai are one of them:


2613E372-0B6F-427C-8F78-0AFB222BDE52.jpeg


Still not convinced? Another hard-to-ignore piece of evidence that refutes the narrative of Boston Dynamics / Hyundai paying Fraunhofer HHI researchers to experiment with Akida and come up with that PoC is the following document that I stumbled across in my online search. It proves that on May 4, 2023 the Fraunhofer Central Purchasing Department in Munich signed a contract to buy a total of three Spot robot dogs directly from Boston Dynamics - the company that had won the public tender - and that they were destined for 6G-RIC project partner Fraunhofer HHI in Berlin.


A413846D-B187-4215-9A9A-67066F679036.jpeg

78187D68-35C2-425D-9539-BF134CE7D66E.jpeg



We can safely assume that Boston Dynamics - had they really been a paying customer of Heinrich Hertz Institute (HHI) - would have supplied the Fraunhofer Institute with their own products free of charge in order for the Berlin telecommunication experts to conduct research on their behalf.

All available evidence points to Spot simply being a popular quadruped robot model the researchers selected for their testbed realisation and demo.


But back to my sleuthing efforts to find out more about what the researchers at Fraunhofer HHI might be up to:

I chanced upon an intriguing German-language podcast (Feb 1, 2024) titled “6G und die Arbeit des 6G-RIC” (“6G and the work of the 6G-RIC”) with Slawomir Stanczak as guest, who is Professor for Network Information Theory at TU Berlin, Head of Fraunhofer HHI’s Wireless Communications and Networks Department as well as Coordinator of the 6G Research and Innovation Cluster (6G-RIC):

https://www.ip-insider.de/der-nutze...ellschaft-a-cf561755cde0be7b2496c94704668417/


The podcast host starts out by introducing his guest and asking him why we will require 6G in the future (first 6G networks are predicted by 2028-2030).
Slawomir Stanczak names mixed reality as a prime use case, as it is combining massive data rates with the need for ultra-low latency, and then - about six minutes into the podcast - for the first time touches upon the topic of collaborative robots that work together towards a common goal, for example in areas such as Industry 4.0 and healthcare. According to him, 5G will be insufficient once many robots are to collaborate on a joint task, especially since an additional functionality will be required: sensing.

[Note that Slawomir Stanczak uses “collaborative robots” here in the sense of two or more robots collaborating with each other, whereas normally the term “collaborative robots” (aka “cobots”) simply means robots that are designed to work alongside humans in a common workspace, as opposed to industrial robots that replace employees, usually for mundane and repetitive tasks that require speed and precision. As industrial robots tend to be in a fixed position and quite large and powerful, they are often caged or fenced off so as not to endanger any humans who come too close.]

Slawomir Stanczak then briefly talks about autonomous cars and goes on to say that processing autonomously at the edge is not always the most effective solution. He gives the example of two cars trying to find a free lot in a multi-storey car park - in this particular case, a centrally coordinated decision, which is then communicated to the individual cars, would be the most efficient way of solving the problem. Hence, sometimes a centrally coordinated connected network that is able to combine data beats fully autonomous decisions and also helps to anticipate problems in order to pro-actively prevent them from happening. However, in other cases, when low latency is of utmost importance, decentralised decisions (= at the edge) are essential. Ultimately, it is all about finding the optimal compromise (“functional placement” in the mobile network).

From 17:12 min onwards, the podcast host picks up the topic of connected robotics and mentions a collaboration with Charité Universitätsmedizin Berlin, which is Germany’s biggest (and very renowned) university hospital, regarding the development of nursing robots and their control via 6G.

Stanczak confirms this and shares with his listeners that they are in talks with Charité doctors in order to simplify certain in-hospital processes and especially to reduce the workload on staff. Two new technological 6G features are currently being discussed: 1. collaborative robots and 2. integrated communication and sensing (ICAS).

Stanczak and his colleagues were told that apart from the global nursing shortage we are already facing, it is also predicted that we will suffer a shortage of medical doctors in the years to come, so the researchers were wondering whether robots could possibly compensate for this loss.

The idea is to connect numerous nursing robots in order to coordinate them and also for them to communicate with each other and cooperate efficiently on certain tasks - e.g., comparatively simple ones such as transporting patients to the operating theatre or serving them something to drink [of a non-alcoholic nature, I presume 😉]. But the researchers even envision complex tasks such as several robots collaborating on turning patients in bed.

Telemedicine will also become more important in the future, such as surgeons operating remotely with the help of an operating robot [you may have heard about the da Vinci Surgical System manufactured by Intuitive Surgical], while being in a totally different location.
[Something Stanczak didn’t specifically mention, but came to my mind when thinking of robot-control via gesture recognition in a hospital setting, is the fact that it would be contactless and thus perfect in an operating theatre, where sterile conditions must be maintained.]

As for the topic of sensing, the researchers’ vision is to one day use the hospital’s existing communication infrastructure for (radar) sensing tasks as well, such as detecting whether a patient is in the room or has left it, or monitoring vital signs such as breathing - camera-less, and hence maintaining privacy.
[I remember reading somewhere else that with ICAS the network itself basically acts as a radar sensor, so there would be no need for additional physical radar sensors - please correct me, if I am wrong, as my grasp of all things technical is extremely superficial.]

Stanczak also views the analysis of liquids as a use case with great potential.
[I assume he was thinking of analysing blood, urine, cerebrospinal fluid etc., but possibly this would also include nasal or oral fluid samples collected for testing of infectious diseases such as COVID-19 or the flu.]

The podcast then moves on to the topic of energy efficiency (6G vs 5G), and Stanczak draws attention to an interesting point, namely that it is not sufficient to merely focus on improving the energy efficiency of mobile networks, as we also need to take into account the so-called rebound effect, which describes the reduction in expected gains from new technologies, as improvement in energy efficiency will lead to an overall increase in energy consumption.
[So, paradoxical as it sounds, saving energy can in fact lead to spending more.]

This is why according to Stanczak we will need a paradigm shift in the years to come and change scaling laws: improving the mobile networks’ energy efficiency while simultaneously decreasing our energy consumption. In addition, R&D in the field of renewable energies continues to be essential.

The remaining 8 or so minutes of the podcast were about frequency bands within the 6G spectrum and surfaces that can channel radio waves - far too technical for me to understand.



After listening to the podcast, I searched the internet for some more information on the cooperation between the institutions involved and discovered two major projects that link Fraunhofer HHI and Charité Universitätsmedizin Berlin (which, by the way, is the joint medical faculty of FU Berlin and Humboldt-Uni Berlin, both consortium members of 6G-RIC, led by Fraunhofer HHI):
  • TEF-Health (Testing and Experimentation Facility for Health AI and Robotics)
https://www.hhi.fraunhofer.de/en/ne...ucture-for-ai-and-robotics-in-healthcare.html


7872142E-E96B-40D6-8516-A3054938C077.jpeg


B779E80F-4AFF-4EB9-A3D3-CBE127CBF739.jpeg



  • 6G-Health (2023-2025), jointly led by Vodafone Germany and ICCAS (Innovation Center Computer Assisted Surgery) at Uni Leipzig’s Faculty of Medicine

https://www.hhi.fraunhofer.de/en/ne...off-better-healthcare-with-6g-networking.html


“The 6G Health project complements the work of Fraunhofer HHI researchers in the BMBF-funded Research Hub 6G-RIC (…) They use the close collaboration in the 6G Health Consortium to coordinate requirements for the mobile communications standard and its future application in the medical field with clinical partners. This enables the experts to identify potential 6G applications at an early stage and lay the foundations for them in 6G standardization.”

4BD34999-864C-4F92-91CB-867EBE939A30.jpeg



All this ties in nicely with Fraunhofer HHI’s job listing I had spotted in November, “looking for several student assistants to support research projects on neuromorphic signal processing in the area of (medical) sensory applications”, during which they would “support the implementation of algorithms on neuromorphic hardware such as SpiNNaker and Akida”.



City: Berlin
Date: Nov 17, 2023

Student Assistant* Signal Processing, Sensor Technology​

The Fraunhofer-Gesellschaft (www.fraunhofer.com) currently operates 76 institutes and research institutions throughout Germany and is the world’s leading applied research organization. Around 30 000 employees work with an annual research budget of 2.9 billion euros.

Future. Discover. Together.
The "Wireless Communications and Networks" department of the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute, develops wireless communication systems with a focus on future generations of cellular communications (5G+ and 6G). The "Signal and Information Processing (SIP)" group works in an international environment in research projects on highly topical issues in the field of signal processing, mobile communications, as well as applications in relevant fields. We are looking for several student assistants to support research projects on neuromorphic signal processing in the area of (medical) sensory applications. Be a part of our team and come on a journey of research and innovation!



What you will do

  • Support in the evaluation of innovative approaches to 6G-based recording of vital parameters (e.g. respiratory rate, pulse, movement patterns) using Integrated Communication and Sensing (ICAS) and their energy-efficient (pre-)processing and transmission in 5G/6G-based networks
  • Implementation of novel sensor and processing concepts on hardware-related processing and transmission platforms
  • Support the implementation of algorithms on neuromorphic hardware architectures (such as SpiNNaker and Akida)
  • Development and implementation of machine learning algorithms as well as the design and implementation of real-time software in C++
  • Carrying out experiments and simulations and evaluation of the performance of the algorithms developed for innovative applications


What you bring to the table

  • Full-time study with good grades at a German university or college in the fields of: electrical engineering, (medical) informatics, communications engineering, applied mathematics, physics or similar
  • Interest in signal processing, communications engineering and wireless communication networks (5G/6G)
  • Good knowledge of C/C++ programming and experience with multi-threaded applications
  • Experience with AI, deep learning and signal processing/sensor fusion
  • Interest and interdisciplinary collaboration in the areas of medicine, data processing, communication technology and AI

Furthermore desirable are:
  • Understanding of basic machine learning algorithms and knowledge of common frameworks (e.g. TensorFlow, PyTorch)
  • Experience with hardware programming, real-time software and event-driven architectures
  • Interest and interdisciplinary collaboration in the areas of medicine, data processing, communication technology and AI


What you can expect

  • Fascinating challenges in a scientific and entrepreneurial setting
  • Attractive salary
  • Modern and excellently equipped workspace in central location
  • Great and cooperative working atmosphere in an international team
  • Opportunities to write a master's or bachelor´s thesis
  • Flexible working hours
  • Opportunities to work from home

The position is initially limited to 6 months. An extension is explicitly desired.


The monthly working time is 80 hours. This position is also available on a part-time basis. We value and promote the diversity of our employees' skills and therefore welcome all applications - regardless of age, gender, nationality, ethnic and social origin, religion, ideology, disability, sexual orientation and identity. Severely disabled persons are given preference in the event of equal suitability.
With its focus on developing key technologies that are vital for the future and enabling the commercial utilization of this work by business and industry, Fraunhofer plays a central role in the innovation process. As a pioneer and catalyst for groundbreaking developments and scientific excellence, Fraunhofer helps shape society now and in the future.
Interested? Apply online now. We look forward to getting to know you!



Dr.-Ing. Martin Kasparick
E-Mail: martin.kasparick@hhi.fraunhofer.de
Tel.: +49 30 31002 853

Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute HHI
www.hhi.fraunhofer.de


So to wrap it all up from my point of view:

We do know from the demo video that Fraunhofer HHI researchers used an Akida Raspberry Pi as part of their PoC, which encouragingly won a “Best Demonstration Award” at the ICMLCN 2024 Conference in Stockholm.

The results of my deep dive suggest to me that this PoC has to do with trying to establish a connected network of robots controlled via 6G, presumably for future 6G-enabled applications in healthcare.

It is likely that our company’s role in the development of this PoC was limited to that of a seller of a disruptive commercial product, unaware of what it was going to be used for. And of course there is no guarantee that this PoC utilising Akida will ever be commercialised, or that Fraunhofer HHI researchers won’t decide to go with a competitor’s neuromorphic hardware for future applications.

Undoubtedly, though, Fraunhofer HHI is one of the entities researching (and evidently liking) Akida. Hopefully this will eventually lead to more, with all those industry partners onboard. But I am afraid I don’t see any immediate commercial engagements resulting in revenue here. Happy to be proven wrong though… 😊


EF175830-70BB-4ACA-83CE-2D839EB009ED.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 53 users

Dallas

Regular
  • Like
  • Love
  • Fire
Reactions: 4 users

Frangipani

Regular
Two weeks ago, our usual suspects from Fraunhofer HHI’s Wireless Communications and Networks Department gave a virtual presentation of their paper referenced in their robot dog gesture recognition demo video (from which we know they had utilised Akida) at yet another conference, the International Conference on Neuromorphic Systems (ICONS), hosted by George Mason University in Arlington, VA:

View attachment 68146


In their YouTube video, Zoran Utkovski describes their demo as “a proof-of-concept implementation of neuromorphic wireless cognition with an application to remote robotic control”, and recent conference presentations were titled “Gesture Recognition for Multi-Robot Control, using Neuromorphic Wireless Cognition and Sidelink Communication” resp. “Neuromorphic Wireless Cognition for Connected Intelligence”.

The words I marked in bold piqued my interest to dive a little deeper, in exploration of the question who would benefit from such research, as I don’t believe in what others here and especially elsewhere (FF) have strongly suggested: that Spot’s manufacturer Boston Dynamics and/or South Korea’s Hyundai Motor Group (which acquired BD in June 2021), is/are the secret customer(s) behind this PoC, allegedly paying Fraunhofer HHI researchers a fee to experiment with Akida on their behalf, as they must be keen on giving their four-legged mobile robot a neuromorphic “upgrade”.

The question you should ask yourselves is: Why would they outsource this type of research, when their own AI experts could easily play around with Akida at their own premises (unless they were buried in work they deemed more important)? Two years ago, the Hyundai Motor Group launched the Boston Dynamics AI Institute, headquartered in Cambridge, MA, to spearhead advancements in artificial intelligence and robotics. In early 2024, another office was opened in Zurich, Switzerland, led by Marco Hutter, who is also Associate Professor for Robotic Systems at ETH Zürich. Why - with all their AI and robotics expertise - would they need Fraunhofer HHI to assist them? Fraunhofer’s contract research is typically commissioned by small- and medium-sized companies that do not have their own R&D departments.

I suggest we let the facts speak for themselves:

The YouTube video’s description box basically says it all:
“(…) The followed approach allows for reduction in communication overhead, implementation complexity and energy consumption, making it amenable for various edge intelligence applications. The work has been conducted within the 6G Research and Innovation Cluster (6G-RIC), funded by the German Ministry of Education and Research (BMBF) in the program “Souverän. Digital. Vernetzt.” Find more information here: https://6G-ric.de”



View attachment 68164

And here is a link to a download of a publication detailing the above-mentioned program “Souverän. Digital. Vernetzt.” (German only):


View attachment 68163

So this publicly funded PoC developed by five researchers from Fraunhofer HHI (the institution coordinating the 6G-RIC research hub) and Osvaldo Simeone from King’s College London is evidently about exploring future use cases that 6G will enable - cutting-edge research aiming “to help establish Germany and Europe as global leaders in the expansion of sustainable 6G technologies”. It is clearly not contract research commissioned by Boston Dynamics or Hyundai, with the intention of upgrading a product of theirs.

The 6G-RIC hub does have a number of illustrious industry partners, by the way, but neither BD nor Hyundai is among them:


View attachment 68165

Still not convinced? Another hard-to-ignore piece of evidence that refutes the narrative of Boston Dynamics / Hyundai paying Fraunhofer HHI researchers to experiment with Akida and come up with that PoC is the following document that I stumbled across in my online search. It proves that on May 4, 2023 the Fraunhofer Central Purchasing Department in Munich signed a contract to buy a total of three Spot robot dogs directly from Boston Dynamics - the company that had won the public tender - and that they were destined for 6G-RIC project partner Fraunhofer HHI in Berlin.


View attachment 68147
View attachment 68148


We can safely assume that Boston Dynamics - had they really been a paying customer of Heinrich Hertz Institute (HHI) - would have supplied the Fraunhofer Institute with their own products free of charge in order for the Berlin telecommunication experts to conduct research on their behalf.

All available evidence points to Spot simply being a popular quadruped robot model the researchers selected for their testbed realisation and demo.


But back to my sleuthing efforts to find out more about what the researchers at Fraunhofer HHI might be up to:

I chanced upon an intriguing German-language podcast (Feb 1, 2024) titled “6G und die Arbeit des 6G-RIC” (“6G and the work of the 6G-RIC”) with Slawomir Stanczak as guest, who is Professor for Network Information Theory at TU Berlin, Head of Fraunhofer HHI’s Wireless Communications and Networks Department as well as Coordinator of the 6G Research and Innovation Cluster (6G-RIC):

https://www.ip-insider.de/der-nutze...ellschaft-a-cf561755cde0be7b2496c94704668417/


The podcast host starts out by introducing his guest and asking him why we will require 6G in the future (the first 6G networks are predicted for 2028-2030).
Slawomir Stanczak names mixed reality as a prime use case, as it combines massive data rates with the need for ultra-low latency, and then - about six minutes into the podcast - for the first time touches upon the topic of collaborative robots that work together towards a common goal, for example in areas such as Industry 4.0 and healthcare. According to him, 5G will be insufficient once many robots are to collaborate on a joint task, especially since an additional functionality will be required: sensing.

[Note that Slawomir Stanczak uses “collaborative robots” here in the sense of two or more robots collaborating with each other, whereas normally the term “collaborative robots” (aka “cobots”) simply means robots that are designed to work alongside humans in a shared workspace, as opposed to industrial robots that replace employees, usually for mundane and repetitive tasks that require speed and precision. As industrial robots tend to be in a fixed position and quite large and powerful, they are often caged or fenced off so as not to endanger any humans who come too close.]

Slawomir Stanczak then briefly talks about autonomous cars and goes on to say that processing autonomously at the edge is not always the most effective solution. He gives the example of two cars trying to find a free spot in a multi-storey car park - in this particular case, a centrally coordinated decision, which is then communicated to the individual cars, would be the most efficient way of solving the problem. Hence, sometimes a centrally coordinated connected network that is able to combine data beats fully autonomous decisions and also helps to anticipate problems in order to proactively prevent them from happening. However, in other cases, when low latency is of utmost importance, decentralised decisions (= at the edge) are essential. Ultimately, it is all about finding the optimal compromise (“functional placement” in the mobile network).
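Stanczak’s “functional placement” trade-off can be sketched as a simple decision rule. This is purely a toy illustration with invented numbers and names, not the 6G-RIC’s actual scheme:

```python
# Toy decision rule for "functional placement" in a mobile network:
# run a task at the edge when its latency budget is tighter than the
# network round trip, otherwise centralise it if a global view helps.
# All numbers and names are invented for illustration.
NETWORK_RTT_MS = 20.0  # assumed round trip to the central controller

def place_function(latency_budget_ms: float, benefits_from_global_view: bool) -> str:
    """Return 'edge' or 'central' for a single task."""
    if latency_budget_ms < NETWORK_RTT_MS:
        return "edge"      # hard real-time: must decide locally
    if benefits_from_global_view:
        return "central"   # e.g. coordinating two cars in a car park
    return "edge"          # no benefit from central data: stay local

print(place_function(5.0, True))     # edge (e.g. collision avoidance)
print(place_function(500.0, True))   # central (e.g. parking coordination)
```

Collision avoidance lands at the edge because its latency budget is below the round trip, while the car-park search can afford the detour via a central coordinator that sees all free spots.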

From 17:12 min onwards, the podcast host picks up the topic of connected robotics and mentions a collaboration with Charité Universitätsmedizin Berlin, which is Germany’s biggest (and very renowned) university hospital, regarding the development of nursing robots and their control via 6G.

Stanczak confirms this and shares with his listeners that they are in talks with Charité doctors in order to simplify certain in-hospital processes and especially to reduce the workload on staff. Two new technological 6G features are currently being discussed: 1. collaborative robots and 2. integrated communication and sensing (ICAS).

Stanczak and his colleagues were told that apart from the global nursing shortage we are already facing, it is also predicted that we will suffer a shortage of medical doctors in the years to come, so the researchers were wondering whether robots could possibly compensate for this loss.

The idea is to connect numerous nursing robots in order to coordinate them and also for them to communicate with each other and cooperate efficiently on certain tasks - e.g., comparatively simple ones such as transporting patients to the operating theatre or serving them something to drink [of a non-alcoholic nature, I presume 😉]. But the researchers even envision complex tasks such as several robots collaborating on turning patients in bed.

Telemedicine will also become more important in the future, such as surgeons operating remotely with the help of an operating robot [you may have heard about the da Vinci Surgical System manufactured by Intuitive Surgical], while being in a totally different location.
[Something Stanczak didn’t specifically mention, but came to my mind when thinking of robot-control via gesture recognition in a hospital setting, is the fact that it would be contactless and thus perfect in an operating theatre, where sterile conditions must be maintained.]

As for the topic of sensing, the researchers’ vision is to one day use the hospital’s existing communication infrastructure for (radar) sensing tasks as well, such as detecting whether a patient is in the room or has left it, or monitoring vital signs such as breathing - camera-less, and hence maintaining privacy.
[I remember reading somewhere else that with ICAS the network itself basically acts as a radar sensor, so there would be no need for additional physical radar sensors - please correct me, if I am wrong, as my grasp of all things technical is extremely superficial.]

Stanczak also views the analysis of liquids as a use case with great potential.
[I assume he was thinking of analysing blood, urine, cerebrospinal fluid etc., but possibly this would also include nasal or oral fluid samples collected for testing of infectious diseases such as COVID-19 or the flu.]

The podcast then moves on to the topic of energy efficiency (6G vs 5G), and Stanczak draws attention to an interesting point: it is not sufficient to merely focus on improving the energy efficiency of mobile networks, as we also need to take into account the so-called rebound effect - the reduction in expected gains from new technologies that occurs when improved energy efficiency leads to an overall increase in energy consumption.
[So, paradoxical as it sounds, saving energy can in fact lead to spending more.]
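A back-of-the-envelope illustration of the rebound effect, with all numbers invented for the sake of the example:

```python
# Back-of-the-envelope rebound effect: the network becomes twice as
# energy-efficient per GB, but cheaper transmission drives 2.5x more
# traffic, so total consumption still rises. All numbers are invented.
energy_per_gb_old = 0.30            # kWh per GB (hypothetical)
traffic_old = 100.0                 # GB

energy_per_gb_new = energy_per_gb_old / 2    # 2x efficiency gain
traffic_new = traffic_old * 2.5              # demand grows even faster

total_old = energy_per_gb_old * traffic_old
total_new = energy_per_gb_new * traffic_new

print(round(total_old, 2), round(total_new, 2))  # 30.0 37.5
```

Despite each gigabyte costing half the energy, total consumption rises from 30 to 37.5 kWh because demand grew faster than efficiency - exactly the paradox Stanczak describes.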

This is why according to Stanczak we will need a paradigm shift in the years to come and change scaling laws: improving the mobile networks’ energy efficiency while simultaneously decreasing our energy consumption. In addition, R&D in the field of renewable energies continues to be essential.

The remaining 8 or so minutes of the podcast were about frequency bands within the 6G spectrum and surfaces that can channel radio waves - far too technical for me to understand.



After listening to the podcast, I searched the internet for some more information on the cooperation between the institutions involved and discovered two major projects that link Fraunhofer HHI and Charité Universitätsmedizin Berlin (which by the way is the joint medical faculty of FU Berlin and Humboldt-Uni Berlin, both consortium members of 6G-RIC, led by Fraunhofer HHI):
  • TEF-Health (Testing and Experimentation Facility for Health AI and Robotics)
https://www.hhi.fraunhofer.de/en/ne...ucture-for-ai-and-robotics-in-healthcare.html


View attachment 68149

View attachment 68151


  • 6G-Health (2023-2025), jointly led by Vodafone Germany and ICCAS (Innovation Center Computer Assisted Surgery) at Uni Leipzig’s Faculty of Medicine

https://www.hhi.fraunhofer.de/en/ne...off-better-healthcare-with-6g-networking.html


“The 6G Health project complements the work of Fraunhofer HHI researchers in the BMBF-funded Research Hub 6G-RIC (…) They use the close collaboration in the 6G Health Consortium to coordinate requirements for the mobile communications standard and its future application in the medical field with clinical partners. This enables the experts to identify potential 6G applications at an early stage and lay the foundations for them in 6G standardization.”

View attachment 68150


All this ties in nicely with Fraunhofer HHI’s job listing I had spotted in November, “looking for several student assistants to support research projects on neuromorphic signal processing in the area of (medical) sensory applications”, during which they would “support the implementation of algorithms on neuromorphic hardware such as SpiNNaker and Akida.”





So to wrap it all up from my point of view:

We do know from the demo video that Fraunhofer HHI researchers used an Akida Raspberry Pi as part of their PoC, which encouragingly won a “Best Demonstration Award” at the ICMLCN 2024 Conference in Stockholm.

The results of my deep dive suggest to me that this PoC has to do with trying to establish a connected network of robots controlled via 6G, presumably for future 6G-enabled applications in healthcare.

It is likely that our company’s role in the development of this PoC was limited to that of a vendor of a disruptive commercial product, unaware of what it was going to be used for. And of course there is no guarantee that this PoC utilising Akida will ever be commercialised, or that Fraunhofer HHI researchers won’t decide to go with a competitor’s neuromorphic hardware for future applications.

Undoubtedly, though, Fraunhofer HHI is one of the entities researching (and evidently liking) Akida. Hopefully this will eventually lead to more, with all those industry partners onboard. But I am afraid I don’t see any immediate commercial engagements resulting in revenue here. Happy to be proven wrong though… 😊


View attachment 68152

Here are some more images I wanted to share, but couldn’t due to the limit of 10 attachments per post (phew, I am glad there is no word limit 🤣):

5AC69FCA-9F0A-4EEE-A03F-7B1500F2A7B1.jpeg


Slawomir Stanczak talking about “scenarios involving swarms of collaborative robots” at the 6G-RIC Berlin 6G Conference in July:

FAF8094C-0B5B-4725-AE33-1AB466EE3B14.jpeg




By the way, the first paper cited in the Fraunhofer HHI video (co-authored by Osvaldo Simeone and two other researchers from King’s College London) is actually not more than a decade old:

View attachment 63677

It was just a typo…


View attachment 63678


View attachment 63679
View attachment 63680

The above 👆🏻 paper’s first author, Jiechen Chen (or Chen Jiechen, in case you prefer the Chinese naming convention of putting the surname first), recently published his PhD dissertation, in which Akida gets mentioned twice (alongside other neuromorphic computing platforms). Osvaldo Simeone, the only co-author of the 6G-RIC PoC paper who is not from Fraunhofer HHI, was one of his supervisors. The other was Simeone’s faculty colleague Bipin Rajendran. Both professors have very generally acknowledged Akida in recent papers, similar to their PhD student here:





F5E2141A-275F-49DB-9187-B8F44BB41B0E.jpeg

64D13C77-6256-4423-924D-2DB97F4E9195.jpeg


3E4D9C3D-EAD2-4640-A381-9F85714CAF2C.jpeg



On a side note: In August 2023 and January 2024, the two King’s College London professors co-published two papers (that did not mention Akida) with a number of SIGCOM (Signal Processing and Communications) researchers from Uni Luxembourg’s SnT (Interdisciplinary Centre for Security, Reliability and Trust).

Now that those Luxembourg researchers have revealed they had some fun demonstrating keyword spotting implemented on Akida 👇🏻, I suspect it is only a question of time before we see another joint Luxembourg & London paper, this time favourably mentioning BrainChip…

Fast forward to April 20, 2024, when @Pmel shared a great find, namely a LinkedIn post by SnT researcher Geoffrey Eappen, in which Flor Ortiz is named as part of a team that successfully demonstrated keyword spotting implemented on Akida. (You won’t get this post as a search result for “Flor Ortiz” on TSE, though, as her name appears in an image, not in text.)


View attachment 64489

While it is heartwarming for us BRN shareholders to read about the Uni Luxembourg researchers’ enthusiasm and catch a glimpse of the Akida Shuttle PC in action, this reveal about the SnT SIGCOM researchers playing with AKD1000 didn’t really come as a surprise, given we had previously spotted SnT colleagues Jorge Querol and Swetha Varadarajulu liking BrainChip posts on LinkedIn:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-408941

Nevertheless, it is another exciting validation of Akida technology for the researchers’ whole professional network to see!
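For readers unfamiliar with keyword spotting: such demos typically convert raw audio into spectral feature frames before feeding them to a (spiking) neural network. Below is a minimal, generic sketch of that front-end - illustrative only, not BrainChip’s or SnT’s actual pipeline, and all parameters (frame length, hop, band count) are invented:

```python
import numpy as np

def spectral_features(signal, frame_len=400, hop=160, n_bins=40):
    """Turn raw audio into log-energy feature frames, the kind of input
    a keyword-spotting network is typically trained on. The crude band
    binning stands in for a real mel filterbank."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        power = np.abs(np.fft.rfft(frame)) ** 2
        bands = np.array_split(power, n_bins)
        frames.append([np.log(b.sum() + 1e-10) for b in bands])
    return np.array(frames)

# one second of synthetic 16 kHz audio: a 440 Hz tone
t = np.linspace(0, 1, 16000, endpoint=False)
feats = spectral_features(np.sin(2 * np.pi * 440 * t))
print(feats.shape)  # (98, 40)
```

Each 25 ms frame (400 samples at 16 kHz) becomes a 40-value feature vector; the classifier then decides per frame (or per short window of frames) whether a keyword was spoken.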




While there is no 100% guarantee that future neuromorphic research at Uni Luxembourg will continue to involve Akida, I doubt the SnT SIGCOM research group would have splurged US$ 9,995 on a Brainchip Shuttle PC Dev Kit, if they hadn’t been serious about utilising it intensively… 🇱🇺 🛰


Don’t forget what ISL’s Joe Guerci said in a podcast (the one in which he was gushing over Akida) earlier this year with regards to the wireless community:

3AC5FD51-47B0-456F-ABC8-F84F47EA2F2C.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 51 users

Tels61

Member
An outstanding piece of work, Frangipani. Your research is very comprehensive and your assessments of your findings are well reasoned. Well done, I admire your efforts.
 
  • Like
  • Fire
  • Love
Reactions: 39 users

Frangipani

Regular
Congratulations Peter 🥳

‘Achieving milestones that once seemed beyond our reach’

👏👏👏

View attachment 66858

View attachment 66955

Announcement published by the Pearcey Foundation today:


“As the WA Pearcey Award recipient, Peter van der Made will represent the state in the Pearcey National Awards to be held on 18 November 2024 in Brisbane.”


C95C08F7-C8DA-4F55-8932-E8DD5BBDD236.jpeg

14A30792-32B0-43C3-81A1-807736861A86.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 43 users
F5A5A8F0-55FB-47BD-9D66-E7E448EE0750.png
 
  • Like
  • Fire
Reactions: 8 users
Not saying this is Akida (though UAVs etc. are in our wheelhouse), but I'm curious who they're using and can't find a lot of info yet.


Nemesis AI Project​

Nemesis AI is an artificial intelligence solution, the result of over 8 months of intensive research and technological development, which we will make available to security and defence companies. This will give them the possibility to implement autonomous systems capable of making quick and accurate decisions in critical scenarios.

The solution was made possible thanks to the emergence of new global hardware equipment and significant advancements in the area of AI algorithms, which allowed our AI to be trained at an unprecedented level, thus facilitating the development of an AI solution with superior capabilities.

The systems on which the solution has been integrated become fully autonomous, capable of executing complex missions without direct human intervention. This extended autonomy is vital in operations where reaction time and precision are crucial.


The first I saw of it was this article.


Excerpt.

By Romania Journal on August 13, 2024
OVES Enterprise announces the official launch of Nemesis AI, a revolutionary artificial intelligence solution that marks a new chapter in the development of autonomous military technology (without direct human intervention). Nemesis AI represents the evolution of the previous Storm Shadow AI project and brings significant improvements by integrating a neuromorphic architecture inspired by the functioning of the human brain.

Nemesis AI is designed to offer enhanced energy efficiency, superior adaptability, and advanced learning capabilities, all contributing to creating a faster, smarter, and more responsible AI solution. Unlike the traditional deep learning architecture used in the previous version of the solution, which relied on artificial neural networks implemented on conventional hardware, Nemesis AI is based on circuits and algorithms that mimic biological neurons and synapses. This innovative approach allows for more natural and efficient data processing, significantly reducing the time and amount of data required to train models.



Mihai Filip
Founder & CEO @ Oves Enterprise Software Offices in Romania - UK - Germany - Dubai - US
4d

Feels great to be in the media spotlight once again, as we officially launch 𝗡𝗲𝗺𝗲𝘀𝗶𝘀 𝗔𝗜! Integrating a neuromorphic architecture inspired by the human brain, this cutting-edge #technology sets a new benchmark in autonomous systems, offering enhanced speed, intelligence, and adaptability for military applications. With its ability to process data more efficiently, 𝗡𝗲𝗺𝗲𝘀𝗶𝘀 𝗔𝗜 significantly reduces training time for UAVs and improves navigation, target recognition, and situational awareness, even in the most challenging environments. While designed for defense, its advanced features also extend to critical applications in sectors like emergency response. A huge shoutout to all the publishers for spreading the news. More great things to come! #OvesEnterprise https://lnkd.in/dhdJDhA5
Software developer OVES Enterprise launches Nemesis AI, an autonomous solution aimed at security and defence companies. “Nemesis AI is a technological innovation and a response to the urgent needs of the defence industry, where efficiency, precision and autonomy are critical”


zf.ro





Mihai Filip
Founder & CEO @ Oves Enterprise Software Offices in Romania - UK - Germany - Dubai - US
5d

Team Oves Enterprise at work! I had an outstanding day yesterday, sitting down with Marius Valentin Muresan, PhD, Irina Muresan, PhD and Mihai Sarto to check in on the progress of our new AI-powered board with an integrated flight controller. We’re so close to bringing our vision to life! It’s been a journey of ongoing development and adjustments, and we’re fully committed to fine-tuning it for the best possible performance. This project has been our main focus, and I’m thrilled to see how fast we’re moving forward. One thing that makes me incredibly proud—beyond the opportunity to work closely with my awesome team—is that these boards are manufactured right here in Romania 🇷🇴. Not in China, or any other country that could raise security concerns, but within our own borders, under the reliable protection of both the EU and NATO. It feels amazing to be part of something special—innovation made in Romania, with a focus on making a global impact. Excited for what’s ahead! #OvesEnterprise
 
  • Like
  • Love
  • Fire
Reactions: 13 users

7für7

Top 20
Not saying this is Akida (though UAV etc in our wheelhouse) but I'm curious who they're using and can't find a lot of info yet. (…)
I have no time to do research, but just for fun I asked ChatGPT 😂 enjoy the answer

“The description of "Nemesis AI" suggests a company active in the field of neuromorphic systems or neuromorphic hardware. Such companies develop AI solutions that mimic biological neurons and synapses to enable more efficient and faster data processing.


A company that fits this description could be BrainChip Holdings. BrainChip develops neuromorphic processors and hardware, particularly the Akida technology, which is designed to offer energy-efficient and adaptive AI solutions that mimic biological neural structures. This technology enables fast learning processes with lower energy consumption and reduced data requirements, which closely resembles the description of "Nemesis AI."
 
  • Like
  • Fire
  • Haha
Reactions: 17 users

manny100

Regular
In simple terms this is why Airbus wants and needs AKIDA on board:
The benefits we bring to Airbus will likely lead to many other space engagements and eventually revenue.
Enhanced Data Processing:
BrainChip’s Akida processors enable real-time, efficient data processing on board Airbus’s aerospace systems. This is crucial for tasks like space surveillance and in-orbit services, where timely and accurate data analysis is essential.
Reduced Power Consumption:
Akida’s neuromorphic technology is designed to operate with ultra-low power consumption. This is particularly beneficial in aerospace applications where power resources are limited and efficiency is critical.
Improved Autonomy:
The ability of Akida processors to learn and adapt on the fly enhances the autonomy of Airbus’s systems. This means that the systems can handle unexpected situations and make decisions without needing constant human intervention.
Increased Reliability:
By integrating BrainChip’s technology, Airbus can improve the reliability and robustness of its aerospace systems. The advanced AI capabilities help in predictive maintenance and anomaly detection, reducing the risk of system failures. These benefits collectively help Airbus to advance its aerospace technology, making its systems more efficient, reliable, and capable of handling complex tasks in challenging environments.
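To make the “anomaly detection” point concrete, here is a toy sketch of streaming anomaly detection: flag a sensor reading whose z-score against a sliding window of recent readings is extreme. This is illustrative only - not Airbus’s or BrainChip’s actual method, and all parameters are invented:

```python
# Toy streaming anomaly detector: flag a sensor reading whose z-score
# against a sliding window of recent readings exceeds a threshold.
# Illustrative only; window size, threshold and warmup are invented.
from collections import deque
import math

class StreamingAnomalyDetector:
    def __init__(self, window=50, threshold=4.0, warmup=10):
        self.readings = deque(maxlen=window)
        self.threshold = threshold
        self.warmup = warmup

    def observe(self, value):
        """Record one reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.readings) >= self.warmup:
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = math.sqrt(var) or 1e-9   # avoid division by zero
            anomalous = abs(value - mean) / std > self.threshold
        self.readings.append(value)
        return anomalous

det = StreamingAnomalyDetector()
steady = [det.observe(20.0 + 0.1 * (i % 5)) for i in range(40)]  # normal drift
print(any(steady), det.observe(100.0))  # False True
```

Forty readings cycling between 20.0 and 20.4 never trip the detector, while a sudden reading of 100.0 does - the kind of on-device check that, in a real system, would trigger predictive maintenance long before a failure.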
 
  • Like
  • Fire
  • Love
Reactions: 29 users
Just bought another 5000 shares Pom 😛

Actually caught me a little off guard..
Had them in at 18 cents (at the bottom) and still working out transfer times etc..

And then Boom, they are bought, via a big dump into the sell side.

I may have to do some creative accounting, over the next couple of days.. (money coming from PayPal, from sale of goods, and they say 3-5 business days..) So I think I'm going to have to double up payment, from my bank account, which genuinely leaves me skint, until I can transfer the money back from my Comsec account..
 
Last edited:
  • Like
  • Thinking
Reactions: 7 users

Diogenese

Top 20
Just bought another 5000 shares Pom 😛

Actually caught me a little off guard..
Had them in at 18 cents (at the bottom) and still working out transfer times etc..

And then Boom, they are bought, via a big dump into the sell side.

I may have to do some creative accounting, over the next couple of days.. (money coming from PayPal, from sale of goods and they say 3 -5 business days..) So I think I'm going to have to double up payment, from my bank account, which genuinely leaves me skits, until I can transfer the money back from my Comsec account..
Shouldn't dangle your toes in the water when there are sharks about.
 
  • Haha
  • Like
Reactions: 10 users

HopalongPetrovski

I'm Spartacus!
  • Haha
  • Wow
Reactions: 5 users
Shouldn't dangle your toes in the water when there are sharks about.
Yeah, my feet are getting cold already..

Not because of the sharks, just cutting everything else, just a bit too fine..
 
  • Like
Reactions: 5 users