BRN Discussion Ongoing

Xray1

Regular
1. AKD1000 SoC – A limited run of chips was produced, with no plans for further production. The chips did not sell in volume or generate significant revenue and are now being repurposed for applications like edge boxes.

2. AKD1000 IP – Two IP licenses have been sold (Renesas and MegaChips), and Sean mentions that ongoing engagement continues with others.

3. AKD1500 Chip - The AKD1500 is an accelerator reference chip, which assists partners in developing and demonstrating their solutions as a stepping stone to integrating the Akida IP into their production SoCs - It’s not meant to be a revenue stream (just like edge boxes).

4. AKD1500 IP – No licenses have been sold. However, it's worth noting that MegaChips said on LinkedIn that they played a role in developing the AKD1500, which I found interesting, as it suggests they might have had an end client or their own use in mind.

5. AKIDA 2.0 IP – No licenses sold.

6. AKIDA 2.0 TENNS IP – Not a product; still in development.

7. TENNS software – Not a product; still in development.

8. TENNS Pleiades software – Not a product; still in development.

9. VVDN AKIDA Edge Box – It's not intended to be a source of significant revenue but rather to showcase Akida's capabilities.

10. EDGX-1 Brain – This is not a product; it is a partnership project being undertaken under a non-binding Memorandum of Understanding with EDGX.

11. (To be released) Unigen AKIDA Ai Cupcake Edge Server – as per VVDN box

12. (Under Development) Cloud based AKIDA FPGA Development Environment.

13. Models for Noise Cancellation and Keyword Spotting.

14. Optimised models for GenAI applications at the Edge, including ASR.

Items 12 to 14 are not products but are under development to support the IP being sold. Sean has repeatedly emphasised that we are an IP-focused company. Our current IP product portfolio available now is AKIDA1.0, AKIDA1500, and AKIDA 2.0, and these are the products we are aiming to sell to reach viability.
Once again, a most impressive and unbiased factual summation of the current state of affairs concerning our various patented technologies.
 
  • Like
Reactions: 8 users

Xray1

Regular
Your not helpful at all
Obviously, you are either unable or unwilling to accept the realities and factual information that AI-Inquirer has provided and detailed in his post. I note your response to AI-Inquirer stating "Your not helpful at all"... then why don't you take the time to go through each of his points and explain why you think they are unhelpful...

It's about time some posters here took off their rose-coloured glasses and refrained from acting as company stooges.
 
  • Like
Reactions: 4 users
Once again, a most impressive and unbiased factual summation of the current state of affairs concerning our various patented technologies.
Except, I think @AI_Inquirer, is mistaken about the following not being "products".

"6. AKIDA 2.0 TENNS IP – Not a product; still in development"

"7. TENNS software – Not a product; still in development"

"8. TENNS Pleiades software – Not a product; still in development"



While "still being developed", as are all BrainChip technologies (nothing remains "fixed" when it comes to high technology).

The language from BrainChip, tells me, that they are ready to be utilised, as they stand now.


"The implementation of TENN within BrainChip’s hardware in the Akida 2.0, showcases a significant step forward in hardware-accelerated AI. Akida 2.0’s architecture is designed to fully exploit TENN’s capabilities, featuring a mesh network of nodes each equipped with an event-based TENN processing unit. This design ensures scalability and enhances computational efficiency, making it suitable for deployment in environments where power and space are limited"



"TENNs-PLEIADES is the latest technological advancement added to BrainChip’s IP portfolio and an expansion of Temporal Event-Based Neural Nets (TENNs), the company’s approach to streaming and sequential data. It complements the company’s neural processor, Akida™ IP, an event-based technology that is inherently lower power when compared to conventional neural network accelerators. Lower power affords greater scalability and lower operational costs"


Also, I don't think there is an "AKIDA 1.5 IP"; I believe the AKD1500 comes under the AKIDA 1.0 IP.

I'm happy to be corrected.

Maybe I "am" wearing these?...

[photo attached]
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 20 users

manny100

Top 20
Once again, a most impressive and unbiased factual summation of the current state of affairs concerning our various patented technologies.
It's actually a very positive post. It highlights the strength of our portfolio, which underpins the value of BRN in the absence of deals that will materialize in due course.
 
  • Like
Reactions: 5 users

Diogenese

Top 20
Hi @Diogenese .................. I'd like to ask whether you would agree that SNR (signal-to-noise ratio) could be seen as following the same principles as SNNs?
And would the use of SNNs give a higher SNR?
Appreciate your thoughts.
Cheers
Hi mrgds,

There is a similarity in that we are talking about selecting correct signals from erroneous signals, but analog NNs, which rely on the amplitude of signals to convey information, suffer from manufacturing variability, which means that a signal applied to two different circuits could produce different amplitude outputs. Given that there are thousands of operations involved, the error can be significant.

SNR is more a problem of selecting a signal from a lot of background "static". This would be more apposite for noise-cancelling systems.

Analog SNNs are about variations in the inherent value (amplitude) of the signal rather than selecting signals from extraneous noise.

Digital uses a simple ON/OFF switch, which has more than enough tolerance to override manufacturing variability.

As a rudimentary example, where 10 analog signals are combined in a 5v system with 0.2v manufacturing variability per device, the cumulative error could be 2v (= 40%), whereas, in a 5v digital system, the switching threshold could be set at 3v, which would easily accommodate the 0.2v variation.
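Diogenese's worst-case arithmetic above can be checked in a few lines. This is an illustrative sketch only: the 5v supply, 0.2v per-device variability, 10 combined signals, and 3v switching threshold are the figures from the post, not real silicon parameters.

```python
# Illustrative sketch of the worst-case analog error vs. digital noise
# margin described above (figures taken from the post, not real silicon).

ANALOG_SUPPLY_V = 5.0      # full-scale signal amplitude
PER_DEVICE_ERROR_V = 0.2   # manufacturing variability per analog device
NUM_DEVICES = 10           # analog signals combined

# Worst case: every device's error accumulates in the same direction.
cumulative_error_v = NUM_DEVICES * PER_DEVICE_ERROR_V
error_fraction = cumulative_error_v / ANALOG_SUPPLY_V
print(f"analog worst-case error: {cumulative_error_v:.1f} V "
      f"({error_fraction:.0%} of full scale)")  # 2.0 V (40%)

# Digital: a single ON/OFF threshold decides the value, so only the
# margin between threshold and signal levels matters, not the sum.
DIGITAL_THRESHOLD_V = 3.0
margin_low = DIGITAL_THRESHOLD_V - 0.0                # margin for a logic 0
margin_high = ANALOG_SUPPLY_V - DIGITAL_THRESHOLD_V   # margin for a logic 1
print(f"digital noise margins: {margin_low:.1f} V / {margin_high:.1f} V "
      f"vs {PER_DEVICE_ERROR_V:.1f} V per-device variability")
```

The point drops straight out of the numbers: the analog errors add up across devices, while the digital margin only has to beat a single device's variability.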
 
  • Like
  • Love
  • Fire
Reactions: 18 users

Diogenese

Top 20
1. AKD1000 SoC – A limited run of chips was produced, with no plans for further production. The chips did not sell in volume or generate significant revenue and are now being repurposed for applications like edge boxes.

2. AKD1000 IP – Two IP licenses have been sold (Renesas and MegaChips), and Sean mentions that ongoing engagement continues with others.

3. AKD1500 Chip - The AKD1500 is an accelerator reference chip, which assists partners in developing and demonstrating their solutions as a stepping stone to integrating the Akida IP into their production SoCs - It’s not meant to be a revenue stream (just like edge boxes).

4. AKD1500 IP – No licenses have been sold. However, it's worth noting that MegaChips said on LinkedIn that they played a role in developing the AKD1500, which I found interesting, as it suggests they might have had an end client or their own use in mind.

5. AKIDA 2.0 IP – No licenses sold.

6. AKIDA 2.0 TENNS IP – Not a product; still in development.

7. TENNS software – Not a product; still in development.

8. TENNS Pleiades software – Not a product; still in development.

9. VVDN AKIDA Edge Box – It's not intended to be a source of significant revenue but rather to showcase Akida's capabilities.

10. EDGX-1 Brain – This is not a product; it is a partnership project being undertaken under a non-binding Memorandum of Understanding with EDGX.

11. (To be released) Unigen AKIDA Ai Cupcake Edge Server – as per VVDN box

12. (Under Development) Cloud based AKIDA FPGA Development Environment.

13. Models for Noise Cancellation and Keyword Spotting.

14. Optimised models for GenAI applications at the Edge, including ASR.

Items 12 to 14 are not products but are under development to support the IP being sold. Sean has repeatedly emphasised that we are an IP-focused company. Our current IP product portfolio available now is AKIDA1.0, AKIDA1500, and AKIDA 2.0, and these are the products we are aiming to sell to reach viability.


cc @DingoBorat

Hi AI_I,

Mercedes and Valeo are at least 2 EAPs we have been strongly involved with for a few years. We know that, in their upcoming products, both are planning to use software for signal processing.

Both would have had access to Akida 2/TeNNs simulation software since the filing of the TeNNs patent over 2 years ago.

Since TeNNs is still in development, they would be reluctant to commit to silicon, so the absence of Akida 2/TeNNs silicon from SCALA 3 and Mercedes DMS for example is not surprising. Software can be continually updated.

Given the previously expressed enthusiasm for Akida by both companies, I am hopeful that they are using the Akida 2/TeNNs software for signal processing. Only today I read a post here (@Tuliptrader ?) about a LinkedIn post by a senior MB exec in charge of MBOS, which referred to end-to-end NNs.

As I've said a few times now, I suspect that both Valeo and Mercedes are using Akida2/TeNNs software for signal processing in their new releases until the development of TeNNs has plateaued and been proven at a satisfactory performance level. If this is the case, I would expect that there will be a commercial licence, albeit under NDA.

Also, software NNs are not as big a problem with ICEs as they are with EVs.

However, such speculation is notoriously unreliable and should not form the basis of investment decisions.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 42 users

mrgds

Regular
Hi mrgds,

There is a similarity in that we are talking about selecting correct signals from erroneous signals, but analog NNs, which rely on the amplitude of signals to convey information, suffer from manufacturing variability, which means that a signal applied to two different circuits could produce different amplitude outputs. Given that there are thousands of operations involved, the error can be significant.

SNR is more a problem of selecting a signal from a lot of background "static". This would be more apposite for noise-cancelling systems.

Analog SNNs are about variations in the inherent value (amplitude) of the signal rather than selecting signals from extraneous noise.

Digital uses a simple ON/OFF switch, which has more than enough tolerance to override manufacturing variability.

As a rudimentary example, where 10 analog signals are combined in a 5v system with 0.2v manufacturing variability per device, the cumulative error could be 2v (= 40%), whereas, in a 5v digital system, the switching threshold could be set at 3v, which would easily accommodate the 0.2v variation.
Hi "Dodgyknees"
Thanks very much for the reply, and the time you put into your response.
Helps making it a little "less muddy" for noobs like myself.
Cheers

Akida Ballista
 
  • Like
  • Fire
Reactions: 12 users

Rach2512

Regular
View attachment 68081


BrainChip Podcast Epi 34: The State of Neuromorphic Computing​

In this episode of the “This is Our Mission” podcast, Sean Hehir interviews Dr. Eric Gallo, a Senior Principal at Accenture Labs. They discuss the advantages of neuromorphic technology and its impact on edge computing, as well as advancements in SpaceTech.​

Hiba Akbar
15 Aug, 2024 · 4 min read
Accenture is at the forefront of technological innovation with a focus on developing next-generation computing technologies. In a recent podcast, Dr. Eric Gallo, a senior principal at Accenture Labs, shared insights into the promising field of neuromorphic computing. This technology mimics how the human brain processes information and offers significant advantages in power efficiency and real-time data processing.
As industries increasingly rely on edge computing, neuromorphic systems present a unique solution to the challenges of energy consumption and integration in smart devices. This episode of the BrainChip podcast explores Accenture's initiatives in neuromorphic computing, its applications in various sectors, and the potential it holds for the future of edge intelligence.

The Rise of Edge Computing and the Need for Heterogeneous Computing​

Edge computing is rapidly gaining traction as industries increasingly rely on real-time data processing and intelligence at the edge of the network. As the industry matures, there is a growing understanding that a heterogeneous set of computing devices is necessary to achieve optimal results.
Traditional computing architectures will continue to play a role, but they will make way for other specialized architectures that excel in different situations and locations. This will create a continuum where edge architectures, cloud architectures, and other specialized architectures work together to provide the right amount of compute power where it's needed most.
Companies with edge devices are actively examining their AI strategies to avoid being left behind. While computing power is readily available, the ability to perform inference very close to the edge is a key focus for many organizations. This is where neuromorphic computing shines, offering a low-power solution for real-time data processing at the edge. This energy efficiency is not just a minor improvement; it’s a game-changer.
Dr. Gallo highlighted how neuromorphic technology could potentially achieve power savings of up to 100,000 times compared to current methods. This opens up new possibilities for developing intelligent devices that can operate in environments where power is limited, such as in remote locations, wearable devices, or even deep space missions. Neuromorphic computing is still in its early stages, but its potential to transform industries is immense.

The Power of Neuromorphic Technology​

One of the most significant advantages of neuromorphic technology is its remarkable energy efficiency. Traditional computing systems often require a large amount of power to perform complex tasks. However, neuromorphic systems can achieve the same results while using only a fraction of that energy. This makes them ideal for applications where power is a critical concern.
Dr. Eric Gallo explained that this technology could be a game-changer in various fields. For example, in defense, neuromorphic chips could be used to create advanced situational awareness systems for soldiers. These systems could process vast amounts of data in real time, helping soldiers make better decisions in the field without the need for bulky, power-hungry equipment.
Neuromorphic technology could also enhance the intelligence of factory equipment in industrial settings. Machines equipped with neuromorphic chips could adapt to changing conditions on the fly, improving efficiency and reducing downtime. These chips can also be powered by small batteries or energy harvesters, making them suitable for environments where access to power is limited.
As this technology continues to develop, its impact on different sectors will also grow. This will lead to more efficient and intelligent solutions.

Accenture's Neuromorphic Computing Initiatives in Space​

Eric Gallo believes that neuromorphic architectures can enable smart satellites and other space devices without the significant power and thermal constraints of traditional computing systems.
Accenture is working towards demonstrating real-time neuromorphic computing in space. By leveraging the low-power capabilities of neuromorphic chips, Accenture aims to make space devices more intelligent and responsive.
The space industry has traditionally relied on less advanced computing technologies due to the challenges of power and heat dissipation. However, with the emergence of neuromorphic computing, there is a sudden realization that space systems can be made much smarter without the usual constraints.

Accenture's Partnership with BrainChip​

Accenture has formed a strong partnership with BrainChip, a leader in neuromorphic computing technology. This collaboration uses the neuromorphic chip to explore practical applications in various industries. Dr. Eric Gallo, who leads Accenture's neuromorphic initiatives, has shared valuable insights from their work with this advanced technology.
The chip stands out for its ability to save power compared to traditional computing systems. In practical tests, Accenture observed that systems using the chip consumed only a fraction of the power—about one-fifth—compared to conventional setups. This efficiency is important for applications where power resources are limited, such as edge devices and satellites.
Working with BrainChip has allowed Accenture to access cutting-edge neuromorphic technology and gain practical experience in real-world environments. The partnership has been characterized by strong support and collaboration that enables both teams to tackle challenges and continuously improve their systems.

The Future of Neuromorphic Computing: Spanning Material Spaces and Scales​

One of the most exciting aspects of neuromorphic computing is its potential to span a wide range of material spaces and scales. Dr. Gallo envisions the possibility of creating tiny, biodegradable neuromorphic sensors that can be used in applications like water quality monitoring. These sensors could transmit data to small neural networks, which could then determine if there is a need for concern.
At the other end of the spectrum, neuromorphic architectures are enabling large-scale neural networks like Spike GPT. These advanced systems demonstrate the versatility of neuromorphic computing, which can be applied from the smallest sensors to the most powerful artificial intelligence systems.
Dr. Gallo emphasizes that anyone, even those without expertise in computing architectures, can get involved and contribute to the advancement of neuromorphic computing. The field is open to new ideas and innovations, and there are many opportunities for individuals to make meaningful contributions.

Final Words​

Looking ahead, the future of neuromorphic computing appears bright. Its ability to span a wide range of materials and scales opens doors for anyone interested in contributing to this exciting field. As organizations like Accenture lead the charge, we can expect to see more practical implementations that harness the power of neuromorphic technology, ultimately making our world smarter and more efficient. To learn more about the future of neuromorphic computing watch the full podcast above!


Are we in a partnership with Accenture? I knew we had done podcasts, but had it been mentioned that we were in a partnership? I must have fallen asleep if I missed this.
 
  • Like
  • Love
Reactions: 7 users

MegaportX

Regular
If in 4 months we sign a 5-year multi-licence deal with a semiconductor company, BrainChip will instantly be profitable.. :unsure:. The reference to 4 months. :unsure:.
We shall see..
 
  • Like
  • Thinking
Reactions: 13 users

manny100

Top 20
TENNs in layman's terms:
An explanation in simple terms of why TENNs is superior.
Spatial and Temporal Integration:
Imagine you’re watching a video.
Spatial information refers to what you see in each frame (like objects, colors, and shapes).
Temporal information is how things change over time (like motion, patterns, and sequences).
TENNs combine both aspects effectively. They’re like having eyes that not only see the picture but also understand how it changes from frame to frame.
Traditional Approaches:
Think of traditional methods as separate tools: one for pictures (CNNs) and another for understanding sequences (RNNs).
CNNs are great at recognizing objects in images but struggle with dynamic changes.
RNNs handle sequences well but have limitations like slow learning and memory issues.
TENNs Bridge the Gap:
TENNs are like a hybrid tool that merges the best of both worlds.
They process video frames while considering how things evolve over time. This makes them superior for tasks like detecting moving objects or understanding audio patterns.
In summary, TENNs are like smart glasses that see both the picture and the movie, making them better at handling sequential data!
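The "picture plus movie" idea above can be sketched as a toy example: run a spatial filter over each frame, then run a causal temporal filter across the frames so each output depends only on the current and past frames. This is purely illustrative and is not BrainChip's actual TENN implementation; the horizontal-edge filter and the two-tap temporal kernel are invented for the demonstration.

```python
# Toy "spatial + temporal" sketch -- not BrainChip's actual TENN code.

def spatial_edges(frame):
    """Horizontal-difference 'edge' filter for one frame (list of rows)."""
    return [[abs(row[c + 1] - row[c]) for c in range(len(row) - 1)]
            for row in frame]

def temporal_filter(frames, t_kernel):
    """Causal convolution over the frame axis: each output frame depends
    only on current and past frames (streaming-friendly)."""
    out = []
    for t in range(len(frames)):
        acc = [[0.0] * len(frames[0][0]) for _ in frames[0]]
        for j, w in enumerate(t_kernel):
            if t - j >= 0:
                for r, row in enumerate(frames[t - j]):
                    for c, v in enumerate(row):
                        acc[r][c] += w * v
        out.append(acc)
    return out

# A "moving bright bar": a bright column shifts right one pixel per frame.
video = [[[1.0 if c == t else 0.0 for c in range(6)] for _ in range(3)]
         for t in range(4)]

spatial = [spatial_edges(f) for f in video]             # what is in each frame
feats = temporal_filter(spatial, t_kernel=[1.0, -1.0])  # how it changes

print(feats[1][0])  # -> [0.0, 1.0, 0.0, 0.0, 0.0]
```

The output highlights exactly where the edges moved between frame 0 and frame 1, which is the point of combining the two passes: neither the spatial filter alone nor the temporal filter alone would isolate the motion.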
 
  • Like
  • Fire
  • Love
Reactions: 45 users

Getupthere

Regular

eeNews Europe — Renesas is taping out a chip using the spiking neural network (SNN) technology developed by Brainchip.​

Dec 2, 2022 – Nick Flaherty

This is part of a move to boost the leading-edge performance of its chips for the Internet of Things, Sailesh Chittipeddi, Executive Vice President and General Manager of the IoT and Infrastructure Business Unit at Renesas Electronics and former CEO of IDT, tells eeNews Europe.
This strategy has seen the company develop the first silicon for ARM’s M85 and RISC-V cores, along with new capacity and foundry deals.
“We are very happy to be at the leading edge and now we have made a rapid transition to address our ARM shortfall but we realise the challenges in the marketplace and introduced the RISC-V products to make sure we don’t fall behind in the new architectures,” he said.
“Our next move is to more advanced technology nodes to push the microcontrollers into the gigahertz regime, and that’s where there is overlap with microprocessors. The way I look at it is all about the system performance.”
“Now you have accelerators for driving AI with neural processing units rather than a dual core CPU. We are working with a third party taping out a device in December on 22nm CMOS,” said Chittipeddi.
Brainchip and Renesas signed a deal in December 2020 to implement the spiking neural network technology. Tools are vital for this new area. “The partner gives us the training tools that are needed,” he said.
 
  • Like
  • Fire
Reactions: 9 users

Getupthere

Regular

eeNews Europe — Renesas is taping out a chip using the spiking neural network (SNN) technology developed by Brainchip.​

Dec 2, 2022 – Nick Flaherty

This is part of a move to boost the leading-edge performance of its chips for the Internet of Things, Sailesh Chittipeddi, Executive Vice President and General Manager of the IoT and Infrastructure Business Unit at Renesas Electronics and former CEO of IDT, tells eeNews Europe.
This strategy has seen the company develop the first silicon for ARM’s M85 and RISC-V cores, along with new capacity and foundry deals.
“We are very happy to be at the leading edge and now we have made a rapid transition to address our ARM shortfall but we realise the challenges in the marketplace and introduced the RISC-V products to make sure we don’t fall behind in the new architectures,” he said.
“Our next move is to more advanced technology nodes to push the microcontrollers into the gigahertz regime, and that’s where there is overlap with microprocessors. The way I look at it is all about the system performance.”
“Now you have accelerators for driving AI with neural processing units rather than a dual core CPU. We are working with a third party taping out a device in December on 22nm CMOS,” said Chittipeddi.
Brainchip and Renesas signed a deal in December 2020 to implement the spiking neural network technology. Tools are vital for this new area. “The partner gives us the training tools that are needed,” he said.
Whatever happened to the tape-out?

I personally would have had a use it or lose it clause when signing an IP deal.
 
  • Like
  • Fire
  • Thinking
Reactions: 3 users

Fenris78

Regular
Whatever happened to the tape-out?

I personally would have had a use it or lose it clause when signing an IP deal.
Other than the demo with Akida and Arm's M85... who knows? It seems Renesas has used Arm Helium in preference to Akida... for now.

Could SiFive's Intelligence X390 processor with NPU be from this tape-out? "SiFive is playing a pivotal role in propelling the RISC-V industry into new frontiers of performance and applicability. By unveiling processors like the Performance P870/P870A and Intelligence X390, the company is not merely iterating on existing technology but is introducing transformative architectural innovations."

From 2022: "Renesas Electronics is looking to catch up in the ARM microcontroller and processor markets, but also looking at the emerging RISC-V cores and new spiking AI accelerators to boost machine learning in the Internet of Things (IoT)."
 
  • Like
  • Thinking
Reactions: 5 users

Rach2512

Regular
Are we in a partnership with Accenture? I knew we had done podcasts, but had it been mentioned that we were in a partnership? I must have fallen asleep if I missed this.
 

Attachments

  • Screenshot_20240818-191344_Samsung Internet.jpg
  • Like
  • Love
Reactions: 11 users

Diogenese

Top 20
TENNs in layman's terms:
An explanation in simple terms of why TENNs is superior.
Spatial and Temporal Integration:
Imagine you’re watching a video.
Spatial information refers to what you see in each frame (like objects, colors, and shapes).
Temporal information is how things change over time (like motion, patterns, and sequences).
TENNs combine both aspects effectively. They’re like having eyes that not only see the picture but also understand how it changes from frame to frame.
Traditional Approaches:
Think of traditional methods as separate tools: one for pictures (CNNs) and another for understanding sequences (RNNs).
CNNs are great at recognizing objects in images but struggle with dynamic changes.
RNNs handle sequences well but have limitations like slow learning and memory issues.
TENNs Bridge the Gap:
TENNs are like a hybrid tool that merges the best of both worlds.
They process video frames while considering how things evolve over time. This makes them superior for tasks like detecting moving objects or understanding audio patterns.
In summary, TENNs are like smart glasses that see both the picture and the movie, making them better at handling sequential data!
Thanks manny,

Another way of looking at it is by comparison with Prophesee's DVS camera.

It is possible to design a DVS to act as a still camera or a movie camera.

The following is based on a frame based camera to simplify the explanation, but a DVS has a continuously open light sensor pixel array.

The DVS has pixels which can be designed to produce a 1 or a zero depending on the level of illumination. If the illumination exceeds a threshold value, the pixel turns ON, and if the illumination is below the threshold, the output is zero.

In the still camera mode, the DVS system can thus identify in a single frame where the illumination of adjacent pixels differs, and generate a change from 1 to zero or from zero to 1 depending on whether the transition between adjacent pixels is from white to black or black to white.

This mode will produce an outline of objects.

In the movie camera mode, the system assesses the changes of individual pixel illumination in successive frames, and produces the 1 to zero or zero to 1 transitions for the individual pixels whose illumination in the two frames crosses the threshold either up or down.

This mode will produce a moving outline of objects familiar from the Prophesee videos.

So, in the still mode, the system compares the illumination of adjacent pixels in a single frame, and in the movie mode, the system assesses the illumination changes in individual pixels in successive frames.

Late edit: An analogy for TeNNs could be a first monitor system which monitors the magnitude of each individual pixel and generates a spike or event when there is a change across the threshold, and a second monitor which monitors the difference between adjacent pixels referred to the threshold. This arrangement would be capable of both classifying an object using the first monitor and tracking the object's movement using the second monitor. Incorporating the tracking feature in silicon relieves the CPU of a considerable processing load.

Now the asynchronous spiking bit comes in where the system does not run on frames, but has a continuously open sensor, so changes are registered as they occur and not on a frame basis, i.e., taking the time element into account.

Both Prophesee and Akida run in an asynchronous mode which eliminates the frame processing delay.

By taking a continuous stream of data, it is thus possible to track the motion of an object or, in the case of voice signals, to analyse passages of speech.
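The frame-to-frame "movie mode" described above can be sketched in a few lines. This is a deliberately simplified illustration of the threshold-crossing idea in the post, not Prophesee's actual pipeline (a real DVS works asynchronously on per-pixel log-intensity changes rather than comparing two frames).

```python
# Simplified "movie mode" sketch: a pixel emits a +1 (ON) or -1 (OFF)
# event when its illumination crosses the threshold between successive
# frames, and stays silent otherwise. Not Prophesee's actual pipeline.

THRESHOLD = 0.5  # illumination level separating "ON" from "OFF"

def dvs_events(prev_frame, curr_frame, threshold=THRESHOLD):
    """Return (row, col, polarity) events for pixels crossing threshold."""
    events = []
    for r, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(prev_row, curr_row)):
            was_on, is_on = p > threshold, q > threshold
            if is_on and not was_on:
                events.append((r, c, +1))   # dark -> bright: ON event
            elif was_on and not is_on:
                events.append((r, c, -1))   # bright -> dark: OFF event
    return events

# A bright pixel moves one column to the right between two frames.
frame_a = [[0.9, 0.1, 0.1]]
frame_b = [[0.1, 0.9, 0.1]]
print(dvs_events(frame_a, frame_b))  # -> [(0, 0, -1), (0, 1, 1)]
```

Only the two pixels whose illumination actually crossed the threshold produce events; the unchanged pixel is silent, which is where the bandwidth and power savings of the event-based approach come from.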
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 38 users

IloveLamp

Top 20
🤔🤔

Worth a read...



[screenshots attached]
 

Attachments

  • 1000017790.jpg
Last edited:
  • Like
  • Fire
  • Love
Reactions: 15 users

Frangipani

Top 20
Our friends at Fraunhofer HHI 👆🏻are part of the ongoing Berlin 6G Conference (July 2 - 4, 2024). While three of the above paper’s co-authors are session chairs or speakers, Yuzhen Ke and Mehdi Heshmati are giving live demos of Spiky Spot’s akidaesque gesture recognition skills at the conference expo.

View attachment 65934


View attachment 65935


The Berlin-based researchers have been rather busy conference-hopping in recent weeks: Stockholm (Best Demonstration Award), Antwerp, Denver, Berlin (2x).

I am just not sure whether they have been letting curious visitors to their booth in on the secret behind their obedient robotic dog... 🤔

View attachment 65937

View attachment 65938


View attachment 65939

Two weeks ago, our usual suspects from Fraunhofer HHI’s Wireless Communications and Networks Department gave a virtual presentation of their paper referenced in their robot dog gesture recognition demo video (from which we know they had utilised Akida) at yet another conference, the International Conference on Neuromorphic Systems (ICONS), hosted by George Mason University in Arlington, VA:

[screenshot attached]



In their YouTube video, Zoran Utkovski describes their demo as “a proof-of-concept implementation of neuromorphic wireless cognition with an application to remote robotic control”, and recent conference presentations were titled “Gesture Recognition for Multi-Robot Control, using Neuromorphic Wireless Cognition and Sidelink Communication” and “Neuromorphic Wireless Cognition for Connected Intelligence”.

The words I marked in bold piqued my interest to dive a little deeper, in exploration of the question who would benefit from such research, as I don’t believe in what others here and especially elsewhere (FF) have strongly suggested: that Spot’s manufacturer Boston Dynamics and/or South Korea’s Hyundai Motor Group (which acquired BD in June 2021), is/are the secret customer(s) behind this PoC, allegedly paying Fraunhofer HHI researchers a fee to experiment with Akida on their behalf, as they must be keen on giving their four-legged mobile robot a neuromorphic “upgrade”.

The question you should ask yourselves is: Why would they outsource this type of research, when their own AI experts could easily play around with Akida at their own premises (unless they were buried in work they deemed more important)? Two years ago, the Hyundai Motor Group launched the Boston Dynamics AI Institute, headquartered in Cambridge, MA, to spearhead advancements in artificial intelligence and robotics. In early 2024, another office was opened in Zurich, Switzerland, led by Marco Hutter, who is also Associate Professor for Robotic Systems at ETH Zürich. Why - with all their AI and robotics expertise - would they need Fraunhofer HHI to assist them? Fraunhofer’s contract research is typically commissioned by small- and medium-sized companies that do not have their own R&D departments.

I suggest we let the facts speak for themselves:

The YouTube video’s description box basically says it all:
“(…) The followed approach allows for reduction in communication overhead, implementation complexity and energy consumption, making it amenable for various edge intelligence applications. The work has been conducted within the 6G Research and Innovation Cluster (6G-RIC), funded by the German Ministry of Education and Research (BMBF) in the program “Souverän. Digital. Vernetzt.” Find more information here: https://6G-ric.de”



53156B4E-0C35-4D93-8203-75AD99E46C81.jpeg


And here is a link to a download of a publication detailing the above-mentioned program “Souverän. Digital. Vernetzt.” (German only):


8631A6D0-9FFE-4621-A239-3F43FC7BAC7C.jpeg


So this publicly-funded PoC developed by five researchers from Fraunhofer HHI (the institution coordinating the 6G-RIC research hub) and Osvaldo Simeone from King’s College London is evidently about exploring future use cases that 6G will enable - cutting-edge research aiming “to help establish Germany and Europe as global leaders in the expansion of sustainable 6G technologies”. It is clearly not contract research commissioned by Boston Dynamics or Hyundai, with the intention of upgrading a product of theirs.

The 6G-RIC hub does have a number of illustrious industry partners, by the way, but neither BD nor Hyundai are one of them:


2613E372-0B6F-427C-8F78-0AFB222BDE52.jpeg


Still not convinced? Another hard-to-ignore piece of evidence that refutes the narrative of Boston Dynamics / Hyundai paying Fraunhofer HHI researchers to experiment with Akida and come up with that PoC is the following document that I stumbled across in my online search. It proves that on May 4, 2023 the Fraunhofer Central Purchasing Department in Munich signed a contract to buy a total of three Spot robot dogs directly from Boston Dynamics - the company that had won the public tender - and that they were destined for 6G-RIC project partner Fraunhofer HHI in Berlin.


A413846D-B187-4215-9A9A-67066F679036.jpeg

78187D68-35C2-425D-9539-BF134CE7D66E.jpeg



We can safely assume that Boston Dynamics - had they really been a paying customer of Heinrich Hertz Institute (HHI) - would have supplied the Fraunhofer Institute with their own products free of charge in order for the Berlin telecommunication experts to conduct research on their behalf.

All available evidence points to Spot simply being a popular quadruped robot model the researchers selected for their testbed realisation and demo.


But back to my sleuthing efforts to find out more about what the researchers at Fraunhofer HHI might be up to:

I chanced upon an intriguing German-language podcast (Feb 1, 2024) titled “6G und die Arbeit des 6G-RIC” (“6G and the work of the 6G-RIC”) with Slawomir Stanczak as guest, who is Professor for Network Information Theory at TU Berlin, Head of Fraunhofer HHI’s Wireless Communications and Networks Department as well as Coordinator of the 6G Research and Innovation Cluster (6G-RIC):

https://www.ip-insider.de/der-nutze...ellschaft-a-cf561755cde0be7b2496c94704668417/


The podcast host starts out by introducing his guest and asking him why we will require 6G in the future (first 6G networks are predicted by 2028-2030).
Slawomir Stanczak names mixed reality as a prime use case, as it combines massive data rates with the need for ultra-low latency, and then - about six minutes into the podcast - touches for the first time upon the topic of collaborative robots that work together towards a common goal, for example in areas such as Industry 4.0 and healthcare. According to him, 5G will be insufficient once many robots are to collaborate on a joint task, especially since an additional functionality will be required: sensing.

[Note that Slawomir Stanczak uses “collaborative robots” here in the sense of two or more robots collaborating with each other, whereas normally the term “collaborative robots” (aka “cobots”) simply means robots designed to work alongside humans in a shared workspace, as opposed to industrial robots that replace employees, usually for mundane and repetitive tasks requiring speed and precision. As industrial robots tend to be fixed in position and quite large and powerful, they are often caged or fenced off so as not to endanger any humans who come too close.]

Slawomir Stanczak then briefly talks about autonomous cars and goes on to say that processing autonomously at the edge is not always the most effective solution. He gives the example of two cars trying to find a free parking space in a multi-storey car park - in this particular case, a centrally coordinated decision, which is then communicated to the individual cars, would be the most efficient way of solving the problem. Hence, sometimes a centrally coordinated connected network that is able to combine data beats fully autonomous decisions and also helps to anticipate problems in order to pro-actively prevent them from happening. However, in other cases, when low latency is of utmost importance, decentralised decisions (= at the edge) are essential. Ultimately, it is all about finding the optimal compromise (“functional placement” in the mobile network).
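Stanczak’s car-park example can be captured in a toy sketch (entirely my own construction, not from the podcast): two cars that each greedily pick the nearest free spot end up heading for the same one, while a central coordinator that sees both requests hands out non-conflicting assignments.

```python
# Toy illustration (my own, hypothetical): autonomous vs. centrally
# coordinated spot selection in a car park.

def autonomous_choice(cars, free_spots):
    """Each car greedily picks its nearest free spot, unaware of the others."""
    return {car: min(free_spots, key=lambda s: abs(s - pos))
            for car, pos in cars.items()}

def central_allocation(cars, free_spots):
    """A coordinator assigns spots one car at a time, removing taken spots."""
    remaining = list(free_spots)
    assignment = {}
    for car, pos in sorted(cars.items()):
        spot = min(remaining, key=lambda s: abs(s - pos))
        assignment[car] = spot
        remaining.remove(spot)
    return assignment

cars = {"car_a": 10, "car_b": 12}   # positions along an aisle
free_spots = [11, 40]               # spot 11 is nearest for both cars

print(autonomous_choice(cars, free_spots))   # both cars converge on spot 11
print(central_allocation(cars, free_spots))  # each car gets its own spot
```

The fully autonomous version produces a conflict that the cars would then have to resolve on the spot; the central version avoids it outright - which is exactly the kind of case where, per Stanczak, the network-side decision wins.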

From 17:12 min onwards, the podcast host picks up the topic of connected robotics and mentions a collaboration with Charité Universitätsmedizin Berlin, which is Germany’s biggest (and very renowned) university hospital, regarding the development of nursing robots and their control via 6G.

Stanczak confirms this and shares with his listeners they are in talks with Charité doctors in order to simplify certain in-hospital-processes and especially to reduce the workload on staff. Two new technological 6G features are currently being discussed: 1. collaborative robots and 2. integrated communication and sensing (ICAS).

Stanczak and his colleagues were told that apart from the global nursing shortage we are already facing, it is also predicted that we will suffer a shortage of medical doctors in the years to come, so the researchers were wondering whether robots could possibly compensate for this loss.

The idea is to connect numerous nursing robots in order to coordinate them and also for them to communicate with each other and cooperate efficiently on certain tasks - e.g., comparatively simple ones such as transporting patients to the operating theatre or serving them something to drink [of a non-alcoholic nature, I presume 😉]. But the researchers even envision complex tasks such as several robots collaborating on turning patients in bed.

Telemedicine will also become more important in the future, such as surgeons operating remotely with the help of an operating robot [you may have heard about the da Vinci Surgical System manufactured by Intuitive Surgical], while being in a totally different location.
[Something Stanczak didn’t specifically mention, but came to my mind when thinking of robot-control via gesture recognition in a hospital setting, is the fact that it would be contactless and thus perfect in an operating theatre, where sterile conditions must be maintained.]

As for the topic of sensing, the researchers’ vision is to one day use the hospital’s existing communication infrastructure for (radar) sensing tasks as well, such as detecting whether a patient is in the room or has left it, or monitoring vital signs such as breathing - camera-less, and hence privacy-preserving.
[I remember reading somewhere else that with ICAS the network itself basically acts as a radar sensor, so there would be no need for additional physical radar sensors - please correct me, if I am wrong, as my grasp of all things technical is extremely superficial.]
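As far as I understand the ICAS idea, breathing slightly modulates the radio channel, so the breathing rate can be read off the dominant low-frequency peak of the received signal - no camera, no dedicated radar unit. Here is a hedged toy simulation of that principle; the signal model and all parameters are my own assumptions, not anything from the podcast or the 6G-RIC project.

```python
# Toy ICAS-style breathing-rate estimation (assumptions mine): simulate a
# channel amplitude weakly modulated by breathing, then recover the rate
# from the FFT peak of the measurement time series.
import numpy as np

FS = 20.0          # channel measurements per second (assumed)
T = 60.0           # one minute of observation
BREATH_HZ = 0.25   # ground truth: 15 breaths per minute

t = np.arange(0, T, 1 / FS)
rng = np.random.default_rng(0)
# static path + small breathing-induced modulation + measurement noise
signal = (1.0
          + 0.05 * np.sin(2 * np.pi * BREATH_HZ * t)
          + 0.01 * rng.standard_normal(t.size))

# remove the static (DC) component, then find the strongest frequency
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / FS)
est_hz = freqs[np.argmax(spectrum)]
print(f"estimated breathing rate: {est_hz * 60:.1f} breaths/min")
```

Real ICAS would of course work on actual channel-state measurements of the communication waveform rather than a synthetic sine, but the signal-processing core - spot the periodic disturbance a body imposes on the channel - is the same.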

Stanczak also views the analysis of liquids as a use case with great potential.
[I assume he was thinking of analysing blood, urine, cerebrospinal fluid etc., but possibly this would also include nasal or oral fluid samples collected for testing of infectious diseases such as COVID-19 or the flu.]

The podcast then moves on to the topic of energy efficiency (6G vs 5G), and Stanczak draws attention to an interesting point: it is not sufficient to merely focus on improving the energy efficiency of mobile networks, as we also need to take into account the so-called rebound effect - improvements in energy efficiency tend to trigger an overall increase in energy consumption, which eats into the expected gains from new technologies.
[So, paradoxical as it sounds, saving energy can in fact lead to spending more.]
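The rebound effect is easy to see with a small numerical example (the figures below are purely illustrative, not from the podcast): if a new network needs only a third of the energy per unit of data, but the cheaper capacity invites four times the traffic, total consumption still goes up.

```python
# Illustrative rebound-effect arithmetic (numbers are mine, not Stanczak's).

baseline_traffic = 100.0     # arbitrary units of data
baseline_j_per_unit = 1.0    # energy per unit of data

efficiency_gain = 3.0        # new network: 1/3 the energy per unit
usage_growth = 4.0           # assumed traffic growth enabled by that efficiency

old_total = baseline_traffic * baseline_j_per_unit
new_total = (baseline_traffic * usage_growth) * (baseline_j_per_unit / efficiency_gain)

print(f"old total energy: {old_total:.0f}")
print(f"new total energy: {new_total:.0f}")  # consumption rises despite 3x efficiency
```

This is why efficiency per bit alone is not enough - which is precisely Stanczak’s point about needing to change the scaling laws themselves.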

This is why according to Stanczak we will need a paradigm shift in the years to come and change scaling laws: improving the mobile networks’ energy efficiency while simultaneously decreasing our energy consumption. In addition, R&D in the field of renewable energies continues to be essential.

The remaining 8 or so minutes of the podcast were about frequency bands within the 6G spectrum and surfaces that can channel radio waves - far too technical for me to understand.



After listening to the podcast, I searched the internet for some more information on the cooperation between the institutions involved and discovered two major projects that link Fraunhofer HHI and Charité Universitätsmedizin Berlin (which by the way is the joint medical faculty of FU Berlin and Humboldt-Uni Berlin, both consortium members of 6G-RIC, led by Fraunhofer HHI):
  • TEF-Health (Testing and Experimentation Facility for Health AI and Robotics)
https://www.hhi.fraunhofer.de/en/ne...ucture-for-ai-and-robotics-in-healthcare.html


7872142E-E96B-40D6-8516-A3054938C077.jpeg


B779E80F-4AFF-4EB9-A3D3-CBE127CBF739.jpeg



  • 6G-Health (2023-2025), jointly led by Vodafone Germany and ICCAS (Innovation Center Computer Assisted Surgery) at Uni Leipzig’s Faculty of Medicine

https://www.hhi.fraunhofer.de/en/ne...off-better-healthcare-with-6g-networking.html


“The 6G Health project complements the work of Fraunhofer HHI researchers in the BMBF-funded Research Hub 6G-RIC (…) They use the close collaboration in the 6G Health Consortium to coordinate requirements for the mobile communications standard and its future application in the medical field with clinical partners. This enables the experts to identify potential 6G applications at an early stage and lay the foundations for them in 6G standardization.”

4BD34999-864C-4F92-91CB-867EBE939A30.jpeg



All this ties in nicely with Fraunhofer HHI’s job listing I had spotted in November, “looking for several student assistants to support research projects on neuromorphic signal processing in the area of (medical) sensory applications”, during which they would “support the implementation of algorithms on neuromorphic hardware such as SpiNNaker and Akida”.



City: Berlin
Date: Nov 17, 2023

Student Assistant* Signal Processing, Sensor Technology​

The Fraunhofer-Gesellschaft (www.fraunhofer.com) currently operates 76 institutes and research institutions throughout Germany and is the world’s leading applied research organization. Around 30 000 employees work with an annual research budget of 2.9 billion euros.

Future. Discover. Together.
The "Wireless Communications and Networks" department of the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute, develops wireless communication systems with a focus on future generations of cellular communications (5G+ and 6G). The "Signal and Information Processing (SIP)" group works in an international environment in research projects on highly topical issues in the field of signal processing, mobile communications, as well as applications in relevant fields. We are looking for several student assistants to support research projects on neuromorphic signal processing in the area of (medical) sensory applications. Be a part of our team and come on a journey of research and innovation!



What you will do

  • Support in the evaluation of innovative approaches to 6G-based recording of vital parameters (e.g. respiratory rate, pulse, movement patterns) using Integrated Communication and Sensing (ICAS) and their energy-efficient (pre-)processing and transmission in 5G/6G-based networks
  • Implementation of novel sensor and processing concepts on hardware-related processing and transmission platforms
  • Support the implementation of algorithms on neuromorphic hardware architectures (such as SpiNNaker and Akida)
  • Development and implementation of machine learning algorithms as well as the design and implementation of real-time software in C++
  • Carrying out experiments and simulations and evaluation of the performance of the algorithms developed for innovative applications


What you bring to the table

  • Full-time study with good grades at a German university or college in the fields of: electrical engineering, (medical) informatics, communications engineering, applied mathematics, physics or similar
  • Interest in signal processing, communications engineering and wireless communication networks (5G/6G)
  • Good knowledge of C/C++ programming and experience with multi-threaded applications
  • Experience with AI, deep learning and signal processing/sensor fusion
  • Interest and interdisciplinary collaboration in the areas of medicine, data processing, communication technology and AI

Furthermore desirable are:
  • Understanding of basic machine learning algorithms and knowledge of common frameworks (e.g. TensorFlow, PyTorch)
  • Experience with hardware programming, real-time software and event-driven architectures
  • Interest and interdisciplinary collaboration in the areas of medicine, data processing, communication technology and AI


What you can expect

  • Fascinating challenges in a scientific and entrepreneurial setting
  • Attractive salary
  • Modern and excellently equipped workspace in central location
  • Great and cooperative working atmosphere in an international team
  • Opportunities to write a master's or bachelor's thesis
  • Flexible working hours
  • Opportunities to work from home

The position is initially limited to 6 months. An extension is explicitly desired.


The monthly working time is 80 hours. This position is also available on a part-time basis. We value and promote the diversity of our employees' skills and therefore welcome all applications - regardless of age, gender, nationality, ethnic and social origin, religion, ideology, disability, sexual orientation and identity. Severely disabled persons are given preference in the event of equal suitability.
With its focus on developing key technologies that are vital for the future and enabling the commercial utilization of this work by business and industry, Fraunhofer plays a central role in the innovation process. As a pioneer and catalyst for groundbreaking developments and scientific excellence, Fraunhofer helps shape society now and in the future.
Interested? Apply online now. We look forward to getting to know you!



Dr.-Ing. Martin Kasparick
E-Mail: martin.kasparick@hhi.fraunhofer.de
Tel.: +49 30 31002 853

Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute HHI
www.hhi.fraunhofer.de


So to wrap it all up from my point of view:

We do know from the demo video that Fraunhofer HHI researchers used an Akida Raspberry Pi as part of their PoC, which encouragingly won a “Best Demonstration Award” at the ICMLCN 2024 Conference in Stockholm.

The results of my deep dive suggest to me that this PoC has to do with trying to establish a connected network of robots controlled via 6G, presumably for future 6G-enabled applications in healthcare.

It is likely our company’s role in the development of this PoC was limited to that of a vendor, selling a disruptive commercial product without knowing what it would be used for. And of course there is no guarantee that this PoC utilising Akida will ever be commercialised, or that Fraunhofer HHI researchers won’t decide to go with a competitor’s neuromorphic hardware for future applications.

Undoubtedly, though, Fraunhofer HHI is one of the entities researching (and evidently liking) Akida. Hopefully this will eventually lead to more, with all those industry partners onboard. But I am afraid I don’t see any immediate commercial engagements resulting in revenue here. Happy to be proven wrong though… 😊


EF175830-70BB-4ACA-83CE-2D839EB009ED.jpeg
 


Frangipani

Top 20
Two weeks ago, our usual suspects from Fraunhofer HHI’s Wireless Communications and Networks Department gave a virtual presentation of their paper referenced in their robot dog gesture recognition demo video (from which we know they had utilised Akida) at yet another conference, the International Conference on Neuromorphic Systems (ICONS), hosted by George Mason University in Arlington, VA:

View attachment 68146



Here are some more images I wanted to share, but couldn’t due to the limit of 10 attachments per post (phew, I am glad there is no word limit 🤣):

5AC69FCA-9F0A-4EEE-A03F-7B1500F2A7B1.jpeg


Slawomir Stanczak talking about “scenarios involving swarms of collaborative robots” at the 6G-RIC Berlin 6G Conference in July:

FAF8094C-0B5B-4725-AE33-1AB466EE3B14.jpeg




By the way, the first paper cited in the Fraunhofer HHI video (co-authored by Osvaldo Simeone and two other researchers from King’s College London) is actually not more than a decade old:

View attachment 63677

It was just a typo…


View attachment 63678


View attachment 63679
View attachment 63680

The above 👆🏻 paper’s first author, Jiechen Chen (or Chen Jiechen, in case you prefer the Chinese naming convention of putting the surname first), recently published his PhD dissertation, in which Akida gets mentioned twice (alongside other neuromorphic computing platforms). Osvaldo Simeone, the only co-author of the 6G-RIC PoC paper who is not from Fraunhofer HHI, was one of his supervisors. The other was Simeone’s faculty colleague Bipin Rajendran. Both professors have very generally acknowledged Akida in recent papers, similar to their PhD student here:





F5E2141A-275F-49DB-9187-B8F44BB41B0E.jpeg

64D13C77-6256-4423-924D-2DB97F4E9195.jpeg


3E4D9C3D-EAD2-4640-A381-9F85714CAF2C.jpeg



On a side note: In August 2023 and January 2024, the two King’s College London professors co-published two papers (that did not mention Akida) with a number of SIGCOM (Signal Processing and Communications) researchers from Uni Luxembourg’s SnT (Interdisciplinary Centre for Security, Reliability and Trust).

Now that those Luxembourg researchers have revealed they had some fun demonstrating keyword spotting implemented on Akida 👇🏻, I suspect it is only a question of time before we see another joint Luxembourg & London paper, this time favourably mentioning BrainChip…

Fast forward to April 20, 2024, when @Pmel shared a great find, namely a LinkedIn post by SnT researcher Geoffrey Eappen, in which Flor Ortiz is named as part of a team that successfully demonstrated keyword spotting implemented on Akida. (You won’t get this post as a search result for “Flor Ortiz” on TSE, though, as her name appears in an image, not in a text.)


View attachment 64489

While it is heartwarming for us BRN shareholders to read about the Uni Luxembourg researchers’ enthusiasm and catch a glimpse of the Akida Shuttle PC in action, this reveal about the SnT SIGCOM researchers playing with AKD1000 didn’t really come as a surprise, given we had previously spotted SnT colleagues Jorge Querol and Swetha Varadarajulu liking BrainChip posts on LinkedIn:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-408941

Nevertheless, it is another exciting validation of Akida technology for the researchers’ whole professional network to see!




While there is no 100% guarantee that future neuromorphic research at Uni Luxembourg will continue to involve Akida, I doubt the SnT SIGCOM research group would have splurged US$ 9,995 on a Brainchip Shuttle PC Dev Kit, if they hadn’t been serious about utilising it intensively… 🇱🇺 🛰


Don’t forget what ISL’s Joe Guerci said in a podcast (the one in which he was gushing over Akida) earlier this year with regards to the wireless community:

3AC5FD51-47B0-456F-ABC8-F84F47EA2F2C.jpeg
 

Tels61

Member
An outstanding piece of work Frangipani. Your research is very comprehensive and your assessments of your findings are well reasoned. Well done, admire your efforts.
 