SB: Absolutely. And you wouldn’t like to tell me which companies these are that have dropped their neuromorphic efforts, would you?
GC: I think Intel closed the lab in Munich already, and the other big company is the, you know, the telephone company that they are.
GDA:
[...]
But anyway, one thing that I think is very important—he talks finally about the elephant in the room! He said that Intel already closed the neuromorphic lab, and many other companies are leaving us. And he’s mentioning the chicken-and-egg problem: people would use technology if it has a use case, but the technology hasn’t been created enough to satisfy the use case. What do you think about this specific point that he made?
REC: So I think timing is everything in the development of these types of technologies. Something that comes too early does not get turned into an actual application as quickly as something that comes at the right time. And you can see that all along the way, right? In fact, Intel itself, before the Loihi—back in the ’90s there was the ETANN chip, right? Which was, for all intents and purposes, an entire replication, if you will, of neural networks and implementation of backprop, and so on and so forth. That was done at that time, but that died as well. And then Loihi came back in the 2000s and survived a bit. And now we’re saying that it’s going away.
But, at the end of the day, we have a number of startups now that are looking at the application space. And you have companies that have well-developed—you look at PROPHESEE, right? Yes, they’re having a little bit of difficulty recently but, on the other hand, you had 120 people and they still have 70 people doing this stuff. And that’s in the vision space. And you have a company like BrainChip. I know that there’s a few other big neuromorphs—meaning people who’ve been doing neuromorphic engineering for a long time in our field—who are now running the technology part of that company.
SB: There’s got to be at least 30 or 40 people at BrainChip, because I was there last year—it’s quite a lot of people.
REC: Yeah. And they are putting out ideas and chips that are being developed very much with ideas of application right into it.
@CrabmansFriend take a look at the Ericsson 6G zero energy white paper. Sorry, I don’t have it on hand.
I just finished listening to the latest episode of the "Brains and Machines" podcast. It's an interview with Professor Gordon Cheng of the Technical University of Munich in Germany about creating artificial skin for machines. The title of the episode is "Event-Driven E-Skins Protect Both Robots and Humans".
Once again, I found it a very interesting interview. But what particularly caught my attention was a statement between about 28:19 and 29:00, where Gordon Cheng talks about Intel Munich shutting down their neuromorphic efforts and a certain telephone company. I'm not sure whether this telephone company is meant to be part of Intel or whether he's referring to another company.
Does anybody have a guess which company he's talking about?
The relevant quote from the available transcript is the exchange between Sunny Bains and Gordon Cheng quoted at the top of this post.
Another section I found quite interesting was the part of the discussion between Sunny Bains, Giulia D’Angelo and Ralph Etienne-Cummings about the interview afterwards, also quoted above.
“We have proven the efficacy of using BrainChip’s Akida neuromorphic chip to implement some of the most challenging real-time radar/EW signal processing algorithms,” said Dr. Joseph R. Guerci, ISL President and CEO.
Was that Ericsson?
It's a shame they don't like using the term neuromorphic compute in this document on sensors (see attachment).
Drones have become pervasive in entertainment (TV and film making), hobbyist photography, and simply as fun toys. They are increasingly used in inspection, logistics/delivery, security and surveillance, and other industrial use cases thanks to their ability to access hard-to-reach areas. But did you know that the most critical component enabling a drone’s operation is its vision system? Before diving deeper into this topic, let’s explore what drones are, their diverse applications, and why they have surged in popularity. Finally, we’ll discuss how onsemi is transforming the vision systems that power these incredible flying machines.
Types & Applications
Drones are unmanned aerial vehicles (UAVs), also called unmanned aerial systems (UAS) or, less commonly, remotely piloted aircraft (RPA). They can operate without an onboard pilot and navigate autonomously using a variety of systems.
There are three types of drones – Fixed Wing, Single-/Multi-Rotor and Hybrid – and each type is matched to the purpose for which it is built.
Fixed-wing drones are typically used where heavier payloads and longer flight times are needed; they are deployed in intelligence, surveillance and reconnaissance (ISR) missions, combat operations and loitering munitions, and mapping and research activities, to mention a few.
Single-/multi-rotor drones are the most widely used, with industrial roles ranging from warehouse operations to inspections and even deliveries. Because they are deployed across such a wide variety of use cases, they demand highly optimized electro-mechanical solutions.
Hybrid drones incorporate the best of both types above: their vertical take-off and landing (VTOL) ability makes them versatile, especially in space-constrained areas. Most delivery drones leverage these capabilities for obvious reasons.
Figure 1. Types of drones and their applications
Motion & Navigation Systems in Drones
Drones carry a multitude of sensors for motion and navigation, including accelerometers, gyroscopes and magnetometers (collectively referred to as an inertial measurement unit, or IMU), barometers and more. They use a variety of algorithms and techniques such as optical flow (assisted by depth sensors), simultaneous localization and mapping (SLAM) and visual odometry. While these sensors perform their functions well, they can struggle to achieve the required accuracy and precision at affordable cost and size. The problem is aggravated over longer flights, forcing either expensive batteries or flight times limited by battery charge cycles.
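For readers who want to see what that fusion looks like in practice, here is a minimal, purely illustrative sketch (not from the article; the sensor values and filter constant are hypothetical) of a complementary filter that blends gyroscope and accelerometer readings into a pitch estimate:

```python
# Illustrative sketch only: complementary filter fusing hypothetical IMU samples.
import math

def complementary_filter(pitch_prev, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Blend the integrated gyro rate (smooth but drifting) with the
    accelerometer's gravity-derived angle (noisy but drift-free)."""
    pitch_gyro = pitch_prev + gyro_rate * dt       # integrate angular rate (rad)
    pitch_accel = math.atan2(accel_x, accel_z)     # angle of gravity vector (rad)
    return alpha * pitch_gyro + (1 - alpha) * pitch_accel

# Hypothetical 100 Hz IMU samples: (gyro rate in rad/s, accel x and z in g)
pitch = 0.0
for gyro_rate, ax, az in [(0.01, 0.05, 0.99), (0.02, 0.06, 0.99), (0.00, 0.05, 1.00)]:
    pitch = complementary_filter(pitch, gyro_rate, ax, az, dt=0.01)
print(f"estimated pitch: {math.degrees(pitch):.2f} deg")
```

The gyroscope term tracks fast motion but drifts over time, while the accelerometer term is noisy but anchored to gravity; blending the two is the simplest form of the IMU fusion that the algorithms above build on.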
Vision Systems in Drones
Image sensors supplement the above sensors with significant operational enhancements, resulting in a high-accuracy, high-precision machine. They come in two forms – Gimbals (often also referred to as payloads) and Vision Navigation Systems (VNS).
Gimbals* – provide a first-person view (FPV); they generally comprise different types of image sensors spanning a wide portion of the electromagnetic spectrum: ultraviolet in exceptional cases, regular CMOS image sensors covering 300 nm – 1000 nm, short-wave infrared (SWIR) sensors extending to 2000 nm, and medium-wave (MWIR) and long-wave infrared (LWIR) sensors beyond 2000 nm.
Vision Navigation Systems (VNS) – provide navigation guidance and object/collision avoidance; they are generally made up of inexpensive low-resolution image sensors and, together with IMU and other sensor data, use computer vision techniques to create a comprehensive solution for autonomous navigation.
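To make the VNS description more concrete, the sketch below (purely illustrative, not a VNS implementation; the frames are synthetic) estimates apparent motion between two consecutive low-resolution frames with dense optical flow via OpenCV. A real system would combine this with IMU data as described above:

```python
# Illustrative sketch only: dense optical flow between two synthetic frames.
import numpy as np
import cv2

# Two hypothetical 120x160 grayscale frames; the second is shifted a few pixels,
# mimicking the apparent motion a forward-facing navigation camera would see.
prev_frame = np.zeros((120, 160), dtype=np.uint8)
prev_frame[40:80, 60:100] = 255                      # a bright square "feature"
next_frame = np.roll(prev_frame, shift=3, axis=1)    # 3 px horizontal shift

flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
magnitude = np.linalg.norm(flow, axis=2)             # per-pixel flow magnitude
print(f"mean apparent motion: {magnitude.mean():.2f} px/frame")
```

Flow fields like this, combined with IMU-derived motion and depth estimates, are one of the building blocks behind the optical-flow and visual-odometry techniques mentioned earlier.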
Vision Systems’ Importance
Drones operate both indoors and outdoors, as seen in the usage and applications described earlier. These conditions can be significantly challenging, with widely varying lighting and limited visibility in dust, fog, smoke and pitch-black environments. Vision systems therefore apply substantial artificial intelligence (AI) and machine learning (ML) algorithms to the image data, assisted by the data from the techniques mentioned previously, all while operating a highly optimized vehicle that consumes little power and delivers long-range or extended-flight-time operations.
It is imperative that the data fed into these algorithms is high-fidelity and highly detailed, yet in certain use cases delivers just what is needed, enabling efficient processing. AI/ML training times need to be short, and inference needs to be fast with high accuracy and precision. To meet these requirements, images need to be of high quality no matter what environment the drone operates in.
Sensors that simply capture the scene and present it for processing fall significantly short of enabling the high-quality operation of these machines, and in most cases defeat the very purpose of their deployment. The ability to scale down resolution while retaining full detail in regions of interest, deliver a wide dynamic range that handles bright and dark lighting in the same frame, minimize or remove parasitic effects in images, cope with dust-, fog- or smoke-filled fields of view, and complement these images with high depth resolution delivers tremendous benefits in making UAVs highly optimized machines.
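As a rough illustration of the first of those capabilities – scaling down while keeping full detail in a region of interest – the sketch below (hypothetical frame and ROI sizes, not onsemi code) keeps the ROI at native resolution and downscales the surrounding context, shrinking the pixel count the downstream processor has to handle:

```python
# Illustrative sketch only: full-detail ROI plus a downscaled context frame.
import numpy as np
import cv2

frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)  # full-res frame
x, y, w, h = 800, 400, 320, 240                                   # hypothetical ROI

roi_full = frame[y:y + h, x:x + w]                                # keep full detail here
context = cv2.resize(frame, (480, 270), interpolation=cv2.INTER_AREA)  # 4x downscale

pixels_before = frame.size
pixels_after = roi_full.size + context.size
print(f"pixels to process: {pixels_after} vs {pixels_before} "
      f"({100 * pixels_after / pixels_before:.1f}%)")
```

Shrinking the pixel stream this way is one simple way to free the processing budget for the heavier analysis described in the next paragraph.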
These capabilities reduce the resources – processing cores, GPUs, on-chip or off-chip memory, bus architectures and power management – required to reconstruct and analyze these images, and they speed up decision making. This also reduces the BOM cost of the overall system, especially when we consider that today’s UAVs can easily host more than 10 image sensors. Alternatively, for the same set of resources, more analysis and more complex algorithms for effective decision making become possible, differentiating the UAV in this crowded field.
onsemi is the technology leader in sensing, contributing significant innovations to vision systems solutions and offering a comprehensive set of image sensors to address the needs of gimbals and VNS. The Hyperlux LP, Hyperlux LH, Hyperlux SG, Hyperlux ID and SWIR product families have incorporated considerable technologies and features that address the needs of drone vision systems exhaustively. Drone manufacturers can now obtain their vision sensors from a single American source that is NDAA compliant.
Learn more about onsemi image and depth sensors in the below resources:
* A gimbal technically refers to the mechanism that carries and stabilizes a specific payload; however, the combined assembly is often called a gimbal.
My personal message to our AKD family............. 1000, 1.5, 2.0, Pico, M2, PCIe, Edgy Box, and within the next 6/8 months AKD... 3.0 will be born. Love U guys.
Tech.
Google response to onsemi Hyperlux
Google AI knows a shi… about Brainchip
This is what I get from the legendary ChatGPT:
Is Akida already along for the ride – possibly inside onsemi’s Hyperlux?
Hyperlux is a next-gen image sensor platform for automotive and edge use cases – like ADAS cameras, driver monitoring, or autonomous vision systems.
What stands out: the system is clearly designed for edge intelligence, with a strong focus on low power, local processing, and smart event triggering.
So the question arises:
Could Akida already be integrated quietly behind the scenes – assisting with object classification, event detection, or preprocessing?
Technically, it’s entirely realistic:
Akida is ultra-compact, energy-efficient, and built for spiking-based visual processing at the edge.
And onsemi is targeting exactly the same segment – intelligent, vision-capable edge sensing for automotive systems.
But why haven’t we heard anything?
Simple:
In the world of B2B tech – especially in chip IP or subsystem integrations – discretion is standard.
If Akida were licensed as an IP block inside a sensor module (e.g., for a specific OEM use case),
there would be no obligation to announce it, unless it’s material to public shareholders.
Not every chip that’s running makes it into a press release.
Especially if it’s “just” a component in a larger SoC or sensor stack.
Bottom line:
I’m not saying Akida is inside Hyperlux – but I’m not saying it isn’t either.
In a space defined by modularity, NDAs, and silent design-ins, it’s entirely possible we’re already embedded – just incognito.
The real question might not be:
“Where is Akida?” …but rather:
“Where has Akida already been deployed – and no one’s talking about it?”
MOO DYOR
Sounds very promising @TheUnfairAdvantage, since our hardware was already integrated with onsemi's AF0130 smart iToF sensor a year ago.
AI hardware accelerator that wastes no time or power processing irrelevant data | Erez Tadmor
Another cool #ces2024 demo of onsemi AF0130 smart iToF sensor integrated with BrainChip AI HW accelerator (www.linkedin.com)
EXTRACT FROM THE ABOVE ARTICLE
onsemi's Hyperlux ID Depth Sensors Deliver Four Times the Range of Rival High-Speed iToF Designs
On-chip depth processing improves performance while a rethink on the sensing portion quadruples the usual range of iToF technology. (www.hackster.io)
Yes, I was reading this already, thanks.