BRN Discussion Ongoing

Rach2512

Regular

Could we be on it?

Screenshot_20250714_155546_Samsung Internet.jpg

Screenshot_20250714_155558_Samsung Internet.jpg
 
  • Like
  • Thinking
  • Love
Reactions: 6 users
Interesting channel for BRN; hopefully they are speaking with them soon 🤔


 
  • Like
  • Love
  • Thinking
Reactions: 6 users

Frangipani

Top 20

4.2 Use Case: If the Akida accelerator is deployed in an autonomous driving system, V2X communication allows other vehicles or infrastructure to receive AI alerts based on neuromorphic vision

This use case simulates a lightweight V2X (Vehicle-to-Everything) communication system using Python. It demonstrates how neuromorphic AI event results, such as pedestrian detection, can be broadcast over a network and received by nearby infrastructure or vehicles.
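As a rough illustration of the broadcasting side (this is not the repository's code: the MQTT broker address, topic name and payload fields below are my own assumptions), a detection event could be published like this with paho-mqtt:

import json
import time

import paho.mqtt.publish as publish

BROKER_HOST = "localhost"   # assumed: a local Mosquitto broker
TOPIC = "v2x/events"        # hypothetical topic name, not taken from the repository

def broadcast_detection(label: str, confidence: float) -> None:
    """Publish a single AI event (e.g. a pedestrian detection) to nearby V2X listeners."""
    event = {
        "source": "akida-rpi5",          # hypothetical node ID
        "event": label,                  # e.g. "pedestrian_detected"
        "confidence": round(confidence, 3),
        "timestamp": time.time(),
    }
    publish.single(TOPIC, json.dumps(event), qos=1, hostname=BROKER_HOST)

if __name__ == "__main__":
    broadcast_detection("pedestrian_detected", 0.97)

Any nearby vehicle or roadside unit subscribed to the same topic would then receive the alert.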

(…) Given that Raúl Parada Medina describes himself as an “IoT research specialist within the connected car project at a Spanish automobile manufacturer”, I had already suggested a connection to the Volkswagen Group via SEAT or CUPRA at the time.

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-424590

View attachment 88422
View attachment 88424





View attachment 88425

Extremely likely the same Raúl Parada Medina whom you recently spotted asking for help with Akida in the DeGirum Community - very disappointingly, no one from our company appears to have been willing to help solve this problem for more than 3 months!

Why promote DeGirum for developers wanting to work with Akida and then not give assistance when needed? Not a good look, if we are to believe shashi from the DeGirum team, who wrote on February 12 he would forward Parada’s request to the BrainChip team, but apparently never got a reply.

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-461608

View attachment 88428

The issue lingered until it was eventually solved on 27 May by another DeGirum team member, stephan-degirum (presumably Stephan Sokolov, who recently demonstrated running the DeGirum PySDK directly on BrainChip hardware at the 2025 Embedded Vision Summit - see the video here: https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-469037)





raul.parada.medina
May 27

Hi @alex and @shashi, thanks for your reply. It looks like there is no update from BrainChip on this. Please, could you tell me how to upload this model to the platform? Age estimation (regression) example — Akida Examples documentation. Thanks!

1 Reply



shashiDeGirum Team
May 27

@stephan-degirum
Can you please help @raul.parada.medina ?




stephan-degirum
raul.parada.medina
May 27

Hello @raul.parada.medina, conversion of a model from BrainChip’s model zoo into our format is straightforward:
Once you have an Akida model object, like Step 4 in the example:
model_akida = convert(model_quantized_keras)

You’ll need to map the model to your device and then convert it to a compatible binary:


from akida import devices

# Map model onto your Akida device
dev = devices()[0]
try:
    model_akida.map(dev, hw_only=True)
except RuntimeError:
    model_akida.map(dev, hw_only=False)

# Extract the C++-compatible program blob
blob = model_akida.sequences[0].program
with open("model_cxx.fbz", "wb") as f:
    f.write(blob)

print("C++-compatible model written to model_cxx.fbz")

Note: You want to be sure that the model is supported on your Akida device. There are many models on the BrainChip model zoo that are not compatible with their “version 1 IP” devices.
If your device is a v1 device, you’ll need to add a set_akida_version guard:

from cnn2snn import convert, set_akida_version, AkidaVersion

# Convert the model
with set_akida_version(AkidaVersion.v1):
    model_akida = convert(model_quantized_keras)
    model_akida.summary()

from akida import devices
# Map model onto your Akida device
# ... (see above)

For more information on v1/v2 model compatibility, please see their docs: Akida models zoo — Akida Examples documentation

Once you have a model binary blob created:

  • Create a model JSON file adjacent to the blob by following Model JSON Structure | DeGirum Docs or by looking at existing BrainChip models on our AI Hub for reference: https://hub.degirum.com/degirum/brainchip
    • ModelPath is your binary model file.
    • RuntimeAgent is AKIDA.
    • DeviceType is the middle field of the akida devices output, in all caps. For example, if akida devices shows PCIe/NSoC_v2/0, you put NSOC_V2.
  • Your JSON + binary model blob are now compatible with PySDK. Try running the inference on your device locally by specifying the full path to the JSON as a zoo_url, see: PySDK Package | DeGirum Docs - “For local AI hardware inferences you specify the zoo_url parameter as either a path to a local model zoo directory, or a path to the model’s .json configuration file.”
  • You can then zip them up and upload them to your model zoo in our AI Hub.
Let me know if this helped.
P.S. we currently have v1 hardware in our cloud farm, and this model is the face estimation model for NSoC_v2:
https://hub.degirum.com/degirum/brainchip/vgg_regress_age_utkface--32x32_quant_akida_NSoC_1
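For reference, a rough sketch of what that model JSON might look like (the keys are just the three fields called out in the reply above; the values are placeholders, and the full schema is in the linked DeGirum docs):

import json

# Minimal placeholder config; see "Model JSON Structure | DeGirum Docs" for the real schema.
model_config = {
    "ModelPath": "model_cxx.fbz",   # the binary blob written out by the mapping snippet above
    "RuntimeAgent": "AKIDA",
    "DeviceType": "NSOC_V2",        # middle field of the `akida devices` output, upper-cased
}

with open("model_cxx.json", "w") as f:
    json.dump(model_config, f, indent=2)

Pointing PySDK's zoo_url at the full path of this JSON (as per the docs excerpt above) should then let you run the inference locally before zipping both files for the AI Hub.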


Anyway, as you had already noticed in your first post on this DeGirum enquiry, Raúl Parada Medina (assuming it is the same person, which I have no doubt about) and Fernando Sevilla Martínez are both co-authors of a paper on autonomous driving:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-450543

View attachment 88429

In fact, they have co-published two papers on autonomous driving, together with another researcher: Jordi Casas-Roma. He is director of the Master in Data Science at the Universitat Oberta de Catalunya, a private online university based in Barcelona. That is the same department where Fernando Sevilla Martínez obtained his Master’s degree in 2022, before moving to Wolfsburg the following year; he now works as a data scientist at the headquarters of the Volkswagen Group.


View attachment 88430


View attachment 88426 View attachment 88427

After digging a little further, it looks more and more likely to me that CUPRA is the Spanish automobile manufacturer with the connected car project Raúl Parada Medina is currently involved in (cf. his LinkedIn profile).

Which in turn greatly heightens the probability that he and Fernando Sevilla Martínez (who works for the Volkswagen Group as a data scientist in the Volkswagen logistics data lake) have been collaborating once again, this time jointly experimenting on “networked neuromorphic AI for distributed intelligence” with the help of an Akida PCIe board paired with a Raspberry Pi 5. (https://github.com/SevillaFe/SNN_Akida_RPI5)

The GitHub repository SNN_Akida_RPI5 is described very generally as “Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware”, and one of the use cases (involving MQTT - Message Queuing Telemetry Transport) “supports event-based edge AI and real-time feedback in smart environments, such as surveillance, mobility, and robotics” - so it covers a very broad range of applications. One focus, however, is evidently on V2X (= Vehicle-to-Everything) communication: use case 4.2 in the repository demonstrates how “neuromorphic AI event results, such as pedestrian detection, can be broadcast over a network and received by nearby infrastructure or vehicles.”
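On the receiving side, the counterpart to the broadcast sketch earlier in this post could be as simple as an MQTT listener on the infrastructure or vehicle unit (again an illustrative sketch, not the repository's code; broker address and topic name are assumptions):

import json

import paho.mqtt.subscribe as subscribe

def on_message(client, userdata, msg):
    """Handle one incoming V2X event published by a neuromorphic edge node."""
    event = json.loads(msg.payload)
    print(f"V2X alert from {event.get('source', 'unknown')}: "
          f"{event.get('event')} (confidence {event.get('confidence')})")

# Blocks and calls on_message for every event published on the topic.
subscribe.callback(on_message, "v2x/events", hostname="localhost")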

F7B8CD08-3233-4F94-B3D2-E228670574D6.jpeg



CUPRA is nowadays a standalone car brand owned by SEAT, a Spanish (Catalonian to be precise) automobile manufacturer headquartered in Martorell near Barcelona. In fact, CUPRA originated from SEAT’s motorsport division Cupra Racing. Both car brands are part of the Volkswagen Group.

CUPRA’s first EV, the CUPRA Born, introduced in 2021 and named after a Barcelona neighbourhood, is already equipped with Car2X technology as standard (see video below). Two more CUPRA models with Car2X functions have since been released: CUPRA Tavascan and CUPRA Terramar.

Broadly speaking, Car2X/V2X (the two terms are often, but not always, used interchangeably - see the Telekom article below) stands for technologies that enable vehicles to communicate with one another and with their environment in real time. They help to prevent accidents by warning nearby vehicles of hazards ahead that are not yet visible, as V2X can “see” around corners and through obstacles within a radius of several hundred metres, connecting all users who have activated V2X (provided, obviously, that their vehicles support it) into a safety network.

“B. V2X technology

I. Principles

Your vehicle is equipped with V2X technology. If you activate the V2X technology, your vehicle can exchange important road traffic information, for example about accidents or traffic jams, with other road users or traffic infrastructure if they also support V2X technology. This makes your participation in road traffic even safer. When you log into the vehicle for the first time, you must check whether the V2X setting is right for you and you can deactivate V2X manually as needed.

Communication takes place directly between your vehicle and other road users or the traffic infrastructure within a close range of approximately 200 m to 800 m. This range can vary depending on the environment, such as in tunnels or in the city.

(…)


III. V2X functionalities

V2X can assist you in the following situations:

1. Warning of local hazards

The V2X function scans the range described above around your vehicle in order to inform you of relevant local hazards. To do this, driving information from other V2X users is received and analysed. For example, if a vehicle travelling in front initiates emergency braking and sends this information via V2X, your vehicle can display a warning message. Please note that your vehicle does not perform automatic driving interventions due to such warnings. In other words, it does not automatically initiate emergency braking, for example.

2. Supplement to adaptive cruise control

The V2X technology can supplement your vehicle's predictive sensor system (e.g. radar and camera systems) and detect traffic situations even more quickly to give you more time to react to them. With more precise information about a traffic situation, adaptive cruise control, for example, can respond to a tail end of a traffic jam in conjunction with the cruise control system and automatically adjust the speed. Other functions, such as manual lane change assistance, are also improved.

3. Other functionalities

Further V2X functions may be developed in future. We will inform you separately about data processing in connection with new V2X functions.

IV. Data exchange

If you activate the V2X technology, it continuously sends general traffic information to other V2X users (e.g. other vehicles, infrastructure) and allows them to evaluate the current traffic situation. The following data is transmitted for this: information about the V2X transmitter (temporary ID, type), vehicle information (vehicle dimensions), driving information (acceleration, geographical position, direction of movement, speed), information from vehicle sensors (yaw rate, cornering, light status, pedal status and steering angle) and route (waypoints, i.e. positioning data, of the last 200 m to 500 m driven).

The activated V2X technology also transmits additional data to other V2X users when certain events occur. In particular, these events include a vehicle stopping, breakdowns, accidents, interventions by an active safety system and the tail end of traffic jams. The data is only transmitted when these events occur. The following data is additionally transmitted: event information (type of event, time of event and time of message, geographical position, event area, direction of movement) and route (waypoints, i.e. positioning data, of the last 600 m to 1,000 m driven).

The data sent to other V2X users is pseudonymised. This means that you are not displayed as the sender of the information to other V2X users.

Volkswagen AG does not have access to this data and does not store it.”




Here is a February 2024 overview of Car2X hazard warning dashboard symbols in Volkswagen Group automobiles, which also shows the 8 models that were already equipped with this innovative technology early last year: 7 VW models as well as the CUPRA Born. Meanwhile, the number has increased to at least 13 - see the screenshot of a Motoreport video uploaded earlier this month.


2D9278F8-0660-499F-9306-FDDCA70707C1.jpeg



And here is an informative article on Car2X/V2X by a well-known German telecommunications provider that - like numerous competitors in the field - has a vested interest in the expansion of this technology, especially relating to the “development towards nationwide 5G Car2X communications”.


255ED097-709F-4C17-B90B-2D4EE50BBBA2.jpeg
2D633CA0-F4FD-455A-B7D9-878110B11994.jpeg
E03843A1-AF78-413B-BBFD-D618DBE46694.jpeg



A5D2E96C-C609-480C-A56D-4A2850A13D21.jpeg

AA6C28C7-BE7B-43BB-998F-1D5172ABD30F.jpeg




76798919-57CB-40B2-A8F3-1FB4685312C6.jpeg




B84FD131-41AB-4E2F-8663-CF28FAA9E522.jpeg
 

Attachments

  • 1BF7BB9F-41DF-4A65-8746-FB48998E9FA5.jpeg
    1BF7BB9F-41DF-4A65-8746-FB48998E9FA5.jpeg
    524.9 KB · Views: 53
  • Like
  • Fire
  • Love
Reactions: 31 users
I just finished listening to the latest episode of the "Brains and Machines" podcast. It's an interview with Professor Gordon Cheng of the Technical University of Munich in Germany about creating artificial skin for machines. The title of the episode is "Event-Driven E-Skins Protect Both Robots and Humans".


Once again, I found it a very interesting interview. But what particularly caught my attention was a statement between about 28:19 and 29:00, where Gordon Cheng talks about Intel shutting down its neuromorphic efforts in Munich and mentions a certain telephone company. I'm not sure whether this telephone company is meant to be part of Intel or whether he's referring to another company.

Does anybody have a guess which company he's talking about?

The relevant quote from the available transcript is:
SB: Absolutely. And you wouldn’t like to tell me which companies these are that have dropped their neuromorphic efforts, would you?

GC: I think Intel closed the lab in Munich already, and the other big company is the, you know, the telephone company that they are.


Another section I found quite interesting was part of the discussion between Sunny Bains, Giulia D’Angelo and Ralph Etienne-Cummings about the interview afterwards:

GDA:
[...]
But anyway, one thing that I think is very important—he talks finally about the elephant in the room! He said that Intel already closed the neuromorphic lab, and many other companies are leaving us. And he’s mentioning the chicken-and-egg problem: people would use technology if it has a use case, but the technology hasn’t been created enough to satisfy the use case. What do you think about this specific point that he made?

REC: So I think timing is everything in the development of these types of technologies. Something that comes too early does not get turned into an actual application as quickly as something that comes at the right time. And you can see that all along the way, right? In fact, Intel itself, before the Loihi—back in the ’90s there was the ETANN chip, right? Which was, for all intents and purposes, an entire replication, if you will, of neural networks and implementation of backprop, and so on and so forth. That was done at that time, but that died as well. And then Loihi came back in the 2000s and survived a bit. And now we’re saying that it’s going away.

But, at the end of the day, we have a number of startups now that are looking at the application space. And you have companies that have well-developed—you look at PROPHESEE, right? Yes, they’re having a little bit of difficulty recently but, on the other hand, you had 120 people and they still have 70 people doing this stuff. And that’s in the vision space. And you have a company like BrainChip. I know that there’s a few other big neuromorphs—meaning people who’ve been doing neuromorphic engineering for a long time in our field—who are now running the technology part of that company.

SB: There’s got to be at least 30 or 40 people at BrainChip, because I was there last year—it’s quite a lot of people.

REC: Yeah. And they are putting out ideas and chips that are being developed very much with ideas of application right into it.
 
  • Like
  • Love
  • Fire
Reactions: 25 users
(quoting the "Brains and Machines" post above)
@CrabmansFriend take a look at Ericsson 6G zero energy white paper. Sorry I don’t have it on hand.

I’m sure the white paper was talking neuromorphic, can’t recall if it was BC specific but I’m thinking them and Loihi.

Then Ericsson shelved it for whatever reason. Possibly poor timing for various global strategic reasons and maybe technical issues also.

The good news is most of the work's been done. They can take it off the shelf when the time is right, they have the issues ironed out and the world's ready to implement it!

Ericsson at the time laid off thousands. Unfortunate set of circumstances for all concerned. If that had gone ahead we’d all be fighting over berths for our yachts instead of watching paint dry.

The RTX, AFRL announcement earlier will still get us a return. Once the initial development is completed it should lead to a contract and trailing revenue. Defence and drones will be massive!

There’s plenty of irons in the fire. 🔥

(DYOR)
 
  • Like
  • Love
  • Fire
Reactions: 30 users

Diogenese

Top 20

Hi TTM,

This is very encouraging. The microDoppler project was announced 7 months ago.

The icing on the cake is that this article refers to ISL as endorsing Akida for the project, even though RTX is the sub-contractor.


“The initiative builds on prior successful demonstrations of radar processing algorithms on our Akida™ neuromorphic hardware. These demonstrations were independently validated by the RTX team and the team at ISL, who each reported significant performance benefits, with ISL further endorsing our participation in the AFRL program,” commented Brightfield.

We know that ISL has been engaged with Akida for a while:

https://www.edgeir.com/brainchip-an...red-radar-for-military-and-aerospace-20250407

BrainChip and ISL advance AI-powered radar for military and aerospace​

Apr 7, 2025


and ISL is in good standing with AFRL:

ISL has a successful record of commercialization (recipient of three (3) Phase III SBIR contracts) and was the subject of an Air Force Research Laboratory (AFRL) SBIR Success story (see https://www.sbir.gov/node/1526807 ).

https://www.islinc.com/company-bio

I recall Dr Guerci's enthusiastic anticipation of TENNs, and more recently:


“We have proven the efficacy of using BrainChip’s Akida neuromorphic chip to implement some of the most challenging real-time radar/EW signal processing algorithms,”
said Dr. Joseph R. Guerci, ISL President and CEO.

https://brainchip.com/
- 3rd dot along in "What innovators are saying"

So to bring this up again after 7 months, "Where there's smoke, ..."
 
  • Like
  • Fire
  • Love
Reactions: 41 users

Diogenese

Top 20
(quoting the "Brains and Machines" post above)
Was that Ericsson?

Edit: um ... SNAP! @Stable Genius
 
  • Like
  • Love
  • Fire
Reactions: 11 users

CHIPS

Regular
Neuromorphic Computing: Mimicking the Brain for Next-Gen AI

https://medium.com/@marketing_30607...f9d67c---------------------------------------
3 days ago





The digital age is constantly pushing the boundaries of computing. As Artificial Intelligence (AI) becomes more complex and pervasive, the energy demands and processing limitations of traditional computer architectures (known as von Neumann architectures) are becoming increasingly apparent. Our current computers, with their separate processing and memory units, face a “memory bottleneck” — a constant back-and-forth movement of data that consumes significant power and time.
But what if we could design computers that work more like the most efficient, parallel processing machine known: the human brain? This is the promise of Neuromorphic Computing, a revolutionary paradigm poised to redefine the future of AI.
What is Neuromorphic Computing?
Inspired by the intricate structure and function of the human brain, neuromorphic computing aims to build hardware and software that mimic biological neural networks. Unlike traditional computers that process instructions sequentially, neuromorphic systems feature processing and memory integrated into the same unit, much like neurons and synapses in the brain.
This fundamental architectural shift allows them to process information in a highly parallel, event-driven, and energy-efficient manner, making them uniquely suited for the demands of next-generation AI and real-time cognitive tasks.
How Does it Work? The Brain-Inspired Blueprint
The core of neuromorphic computing lies in replicating key aspects of neural activity:
  1. Spiking Neural Networks (SNNs): Instead of continuous data flow, neuromorphic chips use Spiking Neural Networks (SNNs). In SNNs, artificial neurons “fire” or “spike” only when a certain threshold of input is reached, similar to how biological neurons communicate via electrical impulses. This “event-driven” processing drastically reduces power consumption compared to constantly active traditional circuits. (A toy code sketch of this thresholded firing follows this list.)
  2. Event-Driven Processing: Computations occur only when and where there is relevant information (an “event” or a “spike”). This contrasts with conventional CPUs/GPUs that execute instructions continuously, even when processing redundant data.
  3. Synaptic Plasticity: Neuromorphic systems implement artificial synapses that can strengthen or weaken their connections over time based on the activity patterns, mirroring the brain’s ability to learn and adapt (synaptic plasticity). This allows for on-chip learning and continuous adaptation without extensive retraining.
  4. Parallelism: Billions of artificial neurons and synapses operate in parallel, enabling highly efficient concurrent processing of complex information, much like the human brain handles multiple sensory inputs simultaneously.
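To make the thresholded firing in point 1 concrete, here is a toy leaky integrate-and-fire neuron in plain Python (parameter values are arbitrary and not tied to Akida, Loihi or any other chip):

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Return a list of 0/1 spikes for a sequence of input currents."""
    membrane = 0.0
    spikes = []
    for current in inputs:
        membrane = membrane * leak + current   # integrate the input with a leak
        if membrane >= threshold:              # fire only when the threshold is crossed
            spikes.append(1)
            membrane = 0.0                     # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.2, 0.2, 0.8, 0.0, 0.5, 0.6]))  # -> [0, 0, 1, 0, 0, 1]

The neuron stays silent (and in hardware would consume essentially no power) until enough input accumulates to cross the threshold, which is the event-driven behaviour the article describes.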
Leading the charge in hardware development are chips like Intel’s Loihi and IBM’s TrueNorth, alongside innovative startups like BrainChip with its Akida processor. These chips are designed from the ground up to embody these brain-inspired principles. For example, Intel’s recently launched Hala Point (April 2024), built with 1,125 Loihi 2 chips, represents the world’s largest neuromorphic system, pushing the boundaries of brain-inspired AI.
Why is it the “Next Frontier”? Unlocking AI’s Potential
Neuromorphic computing offers critical advantages over traditional architectures for AI workloads:
  • Superior Energy Efficiency: This is perhaps the biggest draw. By processing data only when an event occurs and integrating memory and processing, neuromorphic chips can achieve orders of magnitude greater energy efficiency compared to GPUs, making powerful AI feasible for edge devices and continuous operations where power is limited.
  • Real-Time Processing: The event-driven and parallel nature allows for ultra-low latency decision-making, crucial for applications like autonomous vehicles, robotics, and real-time sensor data analysis.
  • On-Device Learning & Adaptability: With built-in synaptic plasticity, neuromorphic systems can learn and adapt from new data in real-time, reducing the need for constant cloud connectivity and retraining on large datasets.
  • Enhanced Pattern Recognition: Mimicking the brain’s ability to recognize patterns even from noisy or incomplete data, neuromorphic chips excel at tasks like image, speech, and natural language processing.
  • Fault Tolerance: Just like the brain can compensate for damage, neuromorphic systems, with their distributed processing, can exhibit greater resilience to component failures.
Real-World Applications: From Smart Homes to Space
The unique capabilities of neuromorphic computing are opening doors to revolutionary applications:
  • Edge AI & IoT: Enabling billions of connected devices (smart home sensors, industrial IoT, wearables) to perform complex AI tasks locally and efficiently, reducing reliance on cloud processing and enhancing privacy. Imagine a wearable that can detect complex health anomalies in real-time, or a smart city sensor that predicts pollution patterns without constantly sending data to the cloud.
  • Autonomous Systems: Powering self-driving cars and drones with ultra-fast, energy-efficient decision-making capabilities, allowing them to react instantly to dynamic environments.
  • Robotics: Giving robots more adaptive perception and real-time learning capabilities, enabling them to navigate complex factory layouts or interact more naturally with humans.
  • Advanced Sensing: Developing smart sensors that can process complex data (e.g., visual or auditory) with minimal power, leading to breakthroughs in areas like medical imaging and environmental monitoring.
  • Cybersecurity: Enhancing anomaly detection by rapidly recognizing unusual patterns in network traffic or user behavior that could signify cyberattacks, with low latency.
  • Biomedical Research: Providing platforms to simulate brain functions and model neurological disorders, potentially leading to new treatments for conditions like epilepsy or Parkinson’s.
Challenges and the Road Ahead
Despite its immense promise, neuromorphic computing is still in its nascent stages and faces significant challenges:
  • Hardware Limitations: Developing neuromorphic chips that can scale to the complexity of the human brain (trillions of synapses) while remaining manufacturable and cost-effective is a monumental engineering feat.
  • Software Ecosystem: There’s a lack of standardized programming languages, development tools, and frameworks tailored specifically for neuromorphic architectures, making it challenging for developers to easily create and port algorithms.
  • Integration with Existing Systems: Integrating these fundamentally different architectures with existing IT infrastructure poses compatibility challenges.
  • Algorithm Development: While SNNs are powerful, developing efficient algorithms that fully leverage the unique strengths of neuromorphic hardware is an active area of research.
  • Ethical Considerations: As AI becomes more brain-like, concerns around conscious AI, accountability, and the ethical implications of mimicking biological intelligence will become increasingly relevant.
Conclusion
Neuromorphic computing represents a profound shift in how we approach computation. By learning from the brain’s incredible efficiency and parallelism, it offers a pathway to overcome the limitations of traditional computing for the ever-increasing demands of AI. While significant research and development are still required to bring it to widespread commercialization, the momentum is palpable.
As we move forward, neuromorphic computing holds the potential to unlock new frontiers in AI, creating intelligent systems that are not just powerful, but also remarkably energy-efficient, adaptable, and truly integrated with the world around us. It’s a journey to build the next generation of AI, one synapse at a time.
 
  • Like
  • Fire
Reactions: 21 users

CHIPS

Regular
Do you see the above post in English or German? Sometimes it translates automatically...
 

Tothemoon24

Top 20
IMG_1256.jpeg




Drones have become pervasive in entertainment (TV show/film making), hobbyist photography and just as a fun toy. They are increasingly used in inspection, logistics/delivery, security and surveillance, and other industrial use cases, due to their ability to access hard-to-reach areas. However, did you know that the most critical component enabling a drone’s operation is its vision system? Before diving deeper into this topic, let’s explore what drones are, their diverse applications, and why they have surged in popularity. Finally, we’ll discuss how onsemi is transforming the vision systems that power these incredible flying objects.

Types & Applications​

Drones are unmanned aerial vehicles (UAV), also called unmanned aerial systems (UAS), and to a lesser extent remotely piloted aircrafts (RPA). They can operate without the need to be driven and navigate autonomously using a variety of systems.

There are three types of drones – Fixed Wing, Single Rotor/Multi-Rotor and Hybrid. Each serves a different purpose, and each type is aligned with the intended purpose for which it is built.

Fixed Wings are typically used for heavier payload transport, longer flight times and are deployed in intelligence, surveillance and reconnaissance (ISR) missions, combat operations and loitering munitions, mapping and research activities to mention a few.

Single-/Multi-Rotors have the dominant usage, with a wide variety of industrial focuses that range from regular warehouses to inspections and even as delivery vehicles. The purpose of these types can be varied as they can be deployed in a wide variety of use cases, and demand highly optimized electro-mechanical solutions.

Hybrid Rotors incorporate the best of both types above, and have a vertical take-off and landing (VTOL) ability that make it versatile, specifically in space-constrained regions. Most delivery drones leverage these capabilities for obvious reasons.

fig1-drone-types.jpg
Figure 1. Type of drones and their applications

Motion & Navigation Systems in Drones​

Drones carry a multitude of sensors for motion and navigation, including accelerometers, gyroscopes and magnetometers (collectively referred to as an inertial measurement unit, or IMU), barometers and more. They use a variety of algorithms and techniques like optical flow (assisted with depth sensors), simultaneous localization and mapping (SLAM) and visual odometry. While these sensors perform their functions well, they can struggle to achieve the required accuracy and precision at affordable costs and optimal sizes. The issue is further aggravated during longer flight times, leading to the need for expensive batteries or limiting flight times based on battery charge cycles.

Vision Systems in Drones​

Image sensors supplement the above sensors with significant operational enhancements resulting in a high-accuracy, high-precision machine. These are available in two entities – Gimbals (often referred to as payloads as well) and Vision Navigation Systems (VNS).

Gimbals* – provide first person view (FPV); they generally constitute different types of image sensors spanning across the wide electromagnetic spectrum (ultraviolet in exceptional cases, regular CMOS image sensors over 300nm – 1000nm, short-wave infra-red (SWIR) sensors extending to 2000nm, and beyond 2000nm with medium-wave (MWIR) and long-wave infra-red (LWIR) sensors).

Vision Navigation Systems (VNS) – provide navigation guidance, object and collision avoidance; they are generally made up of inexpensive low resolution image sensors and together with the IMU and sensors data, use computer vision techniques to create a comprehensive solution for autonomous navigation.

Vision Systems’ Importance​

Drones operate both in indoor and outdoor conditions as seen in usage and applications described earlier. These conditions can be significantly challenging with wide ranging lighting variances and visibility limitations in dust, fog, smoke, and pitch-black environments. These systems attempt to leverage significant artificial intelligence (AI) and machine learning (ML) algorithms applied over image data while using the assistance of the data provided by the techniques previously mentioned, all in the context of operating a highly optimized vehicle that consumes low power and delivers long range distance or extended flight time operations.

It is imperative that the data input into these algorithms is of high-fidelity and highly detailed, yet in certain usage cases, deliver just what is needed thus enabling efficient processing. Training times in AI/ML usage need to be short, and inference needs to be fast with high accuracy and precision. Images need to be of high quality no matter what environment the drone operates in to meet the above requirements.

Sensors that just capture the scene and present it for processing fall significantly short in enabling the high-quality operation of these machines that in most cases will void the very purpose of their deployment. The ability to scale down while still having full details in regions of interest, deliver wide dynamic range to address bright and dark lighting conditions in the same frame, minimize/remove any parasitic effects in images, address dust/fog/smoke filled view fields and assist these images with high depth resolution deliver tremendous benefits to making UAVs a highly optimized machine.

These capabilities minimize the magnitude of resources – processing cores, GPUs, on-chip or outside of the chip memory, bus architectures and power management – required in reconstructing and analysis of these images and hastening the decision-making process. This also reduces the BOM cost of the overall system, especially when we consider today’s UAVs easily can host more than 10 image sensors. Alternately, for the same set of resources, more analysis and complex algorithms to help effective decision making can be made possible thus making the UAV differentiated in this crowded field.

onsemi is the technology leader in sensing, contributing significant innovations to vision systems solutions and offering a comprehensive set of image sensors to address the needs of gimbals and VNS. The Hyperlux LP, Hyperlux LH, Hyperlux SG, Hyperlux ID and SWIR product families have incorporated considerable technologies and features that address the needs of drone vision systems exhaustively. Drone manufacturers can now obtain their vision sensors from a single American source that is NDAA compliant.

Learn more about onsemi image and depth sensors in the below resources:

* Gimbals technically refer to the mechanism that carries and stabilizes the specific payloads. However, often the combined assembly is called Gimbal.
 

Attachments

  • IMG_1256.jpeg
    IMG_1256.jpeg
    1 MB · Views: 61
Last edited:
  • Like
  • Fire
  • Love
Reactions: 34 users
(quoting the onsemi drone vision systems article above)
It's a shame they don't use the term neuromorphic compute anywhere in this document on sensors 😞.
They always seem to avoid it like the plague in favour of an alternative narrative 🙄.
 
  • Like
Reactions: 2 users

TECH

Regular


My personal message to our AKD family.............1000, 1.5, 2.0, Pico, M2, PCIe, Edgy Box and within the next 6/8 months AKD... 3.0
will be born. :ROFLMAO::ROFLMAO: Love U guys 💕 Tech.
 
  • Like
  • Love
  • Haha
Reactions: 22 users
Google response to onsemi Hyperlux
 

Attachments

  • Screenshot_20250715_115334_Google.jpg
    Screenshot_20250715_115334_Google.jpg
    452.7 KB · Views: 111
  • Like
  • Sad
Reactions: 4 users

Rskiff

Regular


My personal message to our AKD family.............1000, 1.5, 2.0, Pico, M2, PCIe, Edgy Box and within the next 6/8 months AKD... 3.0
will be born. :ROFLMAO::ROFLMAO: Love U guys 💕 Tech.

I sure want this and not end up like Amy
 
  • Haha
  • Like
  • Fire
Reactions: 11 users

7für7

Top 20
I believe we’ve now reached the point where, in the Edge-AI space, we’ve brought almost every relevant name to the table.

ARM. Intel Foundry. Mercedes. Edge Impulse. NVISO. NASA. GlobalFoundries. Now Raytheon. And yes – Dell.

What many might have missed: Dell Technologies already spoke very specifically about Akida in an official BrainChip podcast.

Rob Lincourt, Distinguished Engineer in Dell’s CTO Office, discussed its applications in Smart Home, Smart Health, Smart City, and Smart Transportation – and not as some generic PR fluff, but in a real tech-to-tech dialogue.

If you listen closely, it’s clear: Dell is watching. Carefully.


Well done, BrainChip. Truly.

Even though we (still) don’t have official unit numbers, customers, or design wins, one thing is clear:

The interest is there. The quality is real. The stage is set.

Honestly? I’m more confident than ever about what lies ahead.

Until then:
Patience. Trust. And a bit of faith in the invisible.
Because once the knot unravels, Akida will be everywhere.


And… I wouldn’t be 7 if I didn’t add a little self-ironic closing line:


Maybe we’ll hear something soon? An announcement, perhaps?
“Unquoted Securities” hidden in a side note?

Or maybe, just maybe… an update that doesn’t start with a Broadway quote? 😄
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 16 users

7für7

Top 20
Google response to onsemi Hyperlux


Google AI knows a shi… about Brainchip




This is what I get from legendary ChatGPT

Is Akida already along for the ride – possibly inside onsemi’s Hyperlux?

Hyperlux is a next-gen image sensor platform for automotive and edge use cases – like ADAS cameras, driver monitoring, or autonomous vision systems.

What stands out: the system is clearly designed for edge intelligence, with a strong focus on low power, local processing, and smart event triggering.

So the question arises:

Could Akida already be integrated quietly behind the scenes – assisting with object classification, event detection, or preprocessing?

👉 Technically, it’s entirely realistic:

Akida is ultra-compact, energy-efficient, and built for spiking-based visual processing at the edge.

And onsemi is targeting exactly the same segment – intelligent, vision-capable edge sensing for automotive systems.

But why haven’t we heard anything?

Simple:
In the world of B2B tech – especially in chip IP or subsystem integrations – discretion is standard.
If Akida were licensed as an IP block inside a sensor module (e.g., for a specific OEM use case),

➡️ there would be no obligation to announce it, unless it’s material to public shareholders.

Not every chip that’s running makes it into a press release.

Especially if it’s “just” a component in a larger SoC or sensor stack.

Bottom line:

I’m not saying Akida is inside Hyperlux – but I’m not saying it isn’t either.

In a space defined by modularity, NDAs, and silent design-ins, it’s entirely possible we’re already embedded – just incognito.

The real question might not be:
“Where is Akida?…but rather:
“Where has Akida already been deployed – and no one’s talking about it?”

MOO DYOR
 
Last edited:
  • Like
  • Fire
  • Haha
Reactions: 7 users

Cardpro

Regular
Quarterly report due soon - I am hoping for a miracle!
 
  • Like
  • Haha
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
On Sunday @Fullmoonfever discovered Fernando Sevilla Martínez's (Volkswagen) GitHub activity demonstrating Akida-powered neuromorphic processing for V2X. This morning I noticed that Valeo has just unveiled (a few days ago) a real-road demonstration of its 5G-V2X direct technology.

There's no direct mention of neuromorphic or BrainChip in Valeo’s announcement, but it's quite a coincidence, especially given that the Volkswagen Group is collaborating with Valeo to upgrade its advanced driver assistance systems and that we've been partnered with Valeo for quite some time.

Valeo’s V2X platform heavily relies on AI-driven sensor fusion with ultra-low latency - exactly where BrainChip’s Akida + TENNs would excel.





Smart Technology For Smarter mobility


  • Valeo V2X Technology: Protecting Vulnerable Road Users
Valeo Group | 10 Jul, 2025 | 6 min

Valeo V2X Technology: Protecting Vulnerable Road Users​


Valeo unveiled at the 5G Automotive Association (5GAA) conference in Paris the world's first live, real-road demonstration of its 5G-V2X direct technology. This demo represents a major step toward safer, smarter, and more connected mobility.

Road traffic injuries are a leading cause of death and disability worldwide. According to the World Health Organization, nearly 1.2 million people are killed and as many as 50 million people injured each year. Globally, more than 1 in 4 road-accident deaths involve pedestrians and cyclists.

Urban traffic has become denser and more complex with an increasing number of vehicles, bikes, buses, cyclists and pedestrians. Intersections, in particular, can be hazardous. Dangerous situations in which vulnerable road users (VRUs) are hidden by a bus or vehicle can potentially cause accidents between vehicles and VRUs.
Built to enhance road safety, traffic efficiency, and the capabilities of autonomous vehicles, Valeo’s V2X platform enables real-time communication between vehicles and everything around them, including infrastructure, pedestrians, and the broader network.
The world-premiere demonstration highlights how technology can actively prevent collisions, reduce congestion, and protect vulnerable road users.
Vehicle-to-Everything (V2X) is a transformative communication technology that enables vehicles to interact with their environment. Sensor sharing or collaborative perception, one of the main V2X advanced use cases, allows vehicles to exchange data in real time through V2X systems in order to foresee and help prevent potential hazards, optimize traffic flow, and support high-level autonomous driving systems.
A world leader in ADAS systems, Valeo’s complete solution is powered by its ADAS sensor suite, connectivity hardware, and AI-driven software. These components work together to enable instantaneous data exchange with ultra-low latency, crucial for life-saving decisions on the road.
For example, two vehicles are approaching an intersection from different directions. One vehicle’s sensors observe the presence of a pedestrian crossing the street or a cyclist that’s not in view of the other vehicle. The vehicle can inform the second vehicle of the presence of the VRU so that the driver of the L2/L2+ vehicle or the autonomous vehicle can react accordingly.
V2X_banner.jpg

Key functions of the V2X platform include:
  • Collision warning and emergency braking alerts
  • Smart traffic light and intersection coordination
  • Support for autonomous vehicles
V2X plays a pivotal role in the future of autonomous driving, particularly in scenarios where traditional sensors reach their limits. V2X extends a vehicle’s awareness beyond line-of-sight to detect hazards or vulnerable road users even before they are visible.
Valeo’s V2X solutions are designed to integrate with smart city infrastructure, helping cities meet their goals for Vision Zero (eliminating all road fatalities and severe injuries), reduced CO2 emissions, and efficient urban mobility.
From safer roads to smarter cities, Valeo’s V2X technology is helping shape the future of transportation—where vehicles not only drive, but also think, communicate, and protect.

 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 39 users

FJ-215

Regular
Slow trading this arvo.

1752553686555.png
 
  • Like
  • Sad
  • Fire
Reactions: 5 users