BRN Discussion Ongoing

That's a serve Pom not a forehand smash volley :) I do know SUMMFING
Well you try and find a gif to suit 😂
 
Not listened to

 

Frangipani

Top 20

Gregor Lenz, until recently CTO of our partner Neurobus (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-456183) and co-author of "Low-power Ship Detection in Satellite Images Using Neuromorphic Hardware" alongside Douglas McLelland (https://arxiv.org/pdf/2406.11319), has joined the London-based startup Paddington Robotics (https://paddington-robotics.com/ - the website doesn't yet have any information other than "Paddington Robotics - Embodied AI in Action"):

View attachment 81384



View attachment 81386



Some further info I was able to find about the London-based startup founded late last year, whose co-founder and CEO is Zehan Wang:

View attachment 81419


https://www.northdata.de/Paddington%20Robotics%20Ltd·,%20London/Companies%20House%2016015385

View attachment 81420 View attachment 81421 View attachment 81422 View attachment 81423

The other day, I came across two recent blog posts on the topic of event cameras, written by Gregor Lenz, who stepped down as CTO of BrainChip's partner Neurobus six months ago to join Paddington Robotics, an "Embodied AI" startup based in London.

Definitely a sobering read for those who have been expecting much faster adoption of event cameras across various industries. However, I personally prefer an honest "what's holding back the technology" assessment by someone with first-hand experience in the field of neuromorphic sensing and computing over one of those numerous (and partly AI-generated) event camera market size trajectory predictions posted by anonymous forum users, who are highly likely just BRN retail shareholders without any real insight into the actual technology and the difficulties that have so far hindered broader mainstream adoption.

While "Event cameras in 2025, Part 2" is very technical in nature, the preceding blog post Part 1 makes for an easier read.
[The bold print in the middle part is not intended, by the way - I keep trying to remove it, but for some reason it won't go away]





Event cameras in 2025, Part 1

August 13, 2025 · 20 min · 4123 words

Earlier this year, I stepped down as CTO of Neurobus and transitioned to a role in the field of robotics in London. Despite that shift, I still believe in the potential of event cameras, especially for edge computing. Their asynchronous data capture model is promising, but the technology isn't quite there yet. In two parts, I want to outline the main markets that I think could drive the adoption of event cameras and also talk about what's currently holding the technology back.

Industry landscape

Fifteen years ago, the market for event cameras barely existed. Today, it's worth around 220 million dollars. That rate of growth is actually in line with how most sensor technologies develop. LiDAR, for example, was originally created in the 1970s for military and aerospace applications. It took decades before it found its way into mainstream products, with broader adoption only starting in the 2010s when autonomous vehicles began to emerge. Time-of-flight sensors were originally explored in the 1980s, but only became widespread after Apple introduced Face ID in 2017.

Event cameras appear to be following the same trajectory. They've been tested across various industries, but none have yet revealed a compelling, large-scale use case that pushes them into the mainstream. When we started Neurobus in 2023, there were already several companies building neuromorphic hardware such as event-based sensors and processors. What was missing was a focus on the software and real-world applications. Camera makers were happy to ship dev kits, but few people were actually close to end users. Understanding when and why an event camera outperforms a traditional RGB sensor requires deep domain knowledge. That's the gap I tried to bridge.

It quickly became clear that finding the right applications is incredibly difficult. Adoption doesn't just depend on technical merit. It needs mature software, supporting hardware, and a well-defined use case. In that sense, event cameras are still early in their journey. Over the past year, I've explored several sectors to understand where they might gain a foothold.


Space

The private space sector has grown quickly in the last two decades, largely thanks to SpaceX. By driving down launch costs to a fraction of what they used to be, the company has made it far easier to get satellites into orbit. Here is a lovely graph showing launch costs over the past decades. Notice the logarithmic y axis! Those launch costs are going to continue to drop as rockets with bigger payloads, such as Starship, are developed.




Space Situational Awareness (SSA) from the ground

With more satellites in orbit, the risk of collisions increases, and that's where space situational awareness, or SSA, comes into play. At the moment, the US maintains a catalogue of orbital objects and shares it freely with the world, but it's unlikely that this will remain free forever. Other countries are starting to build their own tracking capabilities, particularly for objects in Low Earth Orbit (LEO), which spans altitudes up to 2,000 kilometers. SSA is mostly handled from the ground, using powerful RADAR systems. These systems act like virtual fences that detect any object passing through, even those as small as eight centimeters at an altitude of 1,500 kilometers. RADARs are expensive to build and operate, but their range and reliability are unmatched. In ground-based SSA solutions, optical systems play the smaller role of real-time tracking of specific objects. People have built ground-based event camera SSA systems, but it is not clear what advantages they bring over conventional, high-resolution, integrating sensors. There's nothing that I know of up there that is spinning so fast that you need microsecond resolution to capture it.


Space Domain Awareness (SDA) in orbit

As orbit becomes more crowded and militarized, the need to monitor areas in real time is growing, especially regions not visible from existing ground stations (as in, anywhere other than your country and allies). Doing this from space itself offers a significant advantage, but using RADAR in orbit isn't practical due to the power constraints of small satellites. Instead, optical sensors that can quickly pivot to observe specific areas are a better fit. To achieve good coverage, you'd need a large number of these satellites, which means that payloads must be compact and low-cost. This is where event cameras could come in. Their power efficiency makes them ideal for persistent monitoring, especially in a sparse visual environment like space. Since they only capture changes in brightness, pointing them into mostly dark space allows them to exploit sparsity very well. The data they generate is already compressed, reducing the bandwidth needed to transmit observations back to Earth. For low-power surveillance satellites in LEO, that's a significant advantage.


Earth Observation (EO)

In Earth observation, optical sensors sit on satellites that need to orbit as low as possible in order to increase angular resolution / swath. They revolve around the Earth roughly every 90 minutes, capturing the texture-rich surface. Using an event camera for that would just generate enormous amounts of data that is of the wrong kind anyway, because in EO you are interested in multi-spectral bands and high spatial resolution. However, there is a case that might make it worthwhile: when you compensate for the lateral motion of the satellite and fixate the event camera on a specific spot for continuous monitoring. Such systems exist today already (check out this dataset) to monitor airports, city traffic and probably some missile launch sites. Using an event camera for that would reduce processing to a bare minimum and provide high temporal resolution for objects that move. The low data rate would also allow for super low bandwidth live video streaming! Unless we're talking constellations of 10k+ satellites, however, it remains a niche use case for the military to monitor enemy terrain.


Potential applications of event cameras in orbit. 1. Space Domain Awareness (SDA) is about receiving requests from the ground to monitor an object in LEO in real time. 2. Video live streaming Earth observation (EO). Compensating for the satellite's lateral motion, event cameras could monitor sites for ~10 minutes per orbit, depending on the altitude. 3. Star tracking. Lots of redundant background means that we can focus on the signal of interest using little processing.

In-orbit servicing

In-orbit servicing includes approaching and grabbing debris to de-orbit it, or docking with another satellite or spacecraft for refueling or repair. These operations are delicate and often span several hours, typically controlled from the ground. With the number of satellites in orbit continuously increasing, and as many as 10 space stations planned to be built within the next decade, reliable and autonomous docking solutions will become essential. Current state-of-the-art systems already use stereo RGB and LiDAR, but event cameras might offer benefits in challenging lighting conditions or when very fast, reactive maneuvers are needed in emergency situations. I think that in-orbit servicing has many other challenges to solve before event cameras become the fix for the most pressing problem in that area.

Star tracking

Star trackers are a standard subsystem on most satellites. Companies like Sodern sell compact, relatively low-cost units that point upward into the sky to determine a satellite's orientation relative to known star constellations. Today's trackers typically deliver attitude estimates at around 10-30 Hz, consuming 3-7 W depending on the model. For most missions, that's good enough, but there's room for improvement. Since stars appear as sparse, bright points against an almost entirely dark background, they align perfectly with the event camera's strengths. Instead of continuously integrating the whole scene, an event-based tracker could focus only on the few pixels where stars actually appear, cutting down on unnecessary processing. In principle, this allows attitude updates at kilohertz rates while using less compute and bandwidth. Faster updates could improve control loops during high-dynamic maneuvers or enable more precise pointing for small satellites that lack bulky stabilization hardware. From a software perspective, the task remains relatively simple: once the events are clustered into star positions, the rest of the pipeline is the conventional map-matching problem of aligning observations to a star catalogue. No complex machine learning is needed. The main challenge, as with any space payload, lies in making the hardware resilient to radiation and thermal extremes.
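To make the "cluster events into star positions" step concrete, here is a minimal Python sketch (not from the blog post): it accumulates a short window of events into a histogram, keeps pixels with enough activity, and returns connected-component centroids that a downstream map-matcher would align to a star catalogue. The sensor size, threshold, and synthetic events are invented for illustration.

```python
# Minimal sketch: cluster one time slice of events into star centroids,
# the step that precedes catalogue matching. All parameters are hypothetical.
import numpy as np
from scipy import ndimage

def star_centroids(xs, ys, width=320, height=320, min_events=5):
    """Accumulate a window of events and return bright-cluster centroids (row, col)."""
    counts = np.zeros((height, width), dtype=np.int32)
    np.add.at(counts, (ys, xs), 1)               # 2D event histogram
    mask = counts >= min_events                  # keep pixels with enough activity
    labels, n = ndimage.label(mask)              # connected components = candidate stars
    if n == 0:
        return np.empty((0, 2))
    return np.array(ndimage.center_of_mass(counts, labels, range(1, n + 1)))

# Toy usage: two synthetic "stars" plus a little background noise
rng = np.random.default_rng(0)
xs = np.concatenate([rng.integers(100, 103, 200), rng.integers(250, 253, 200),
                     rng.integers(0, 320, 50)])
ys = np.concatenate([rng.integers(50, 53, 200), rng.integers(200, 203, 200),
                     rng.integers(0, 320, 50)])
print(star_centroids(xs, ys))
```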

Revenue generation

In space applications, cost isn't the limiting factor, which would make it a great place to start testing equipment that is not mass produced yet. Space-grade systems already command premium prices, so a $1,000+ event camera is not out of place. Compared to SDA and EO video streaming, which can generate recurring revenue as a service by providing recent and real-time data, in-orbit servicing systems or star trackers are more likely to be sold as one-off hardware solutions, which makes the business case less scalable. In either case, there's a growing need for advanced vision systems that can operate efficiently on the edge in space. Right now, the space market for event cameras is still at an early stage, but the interest is real.

Manufacturing / Science

In industrial vision, specifically in manufacturing and scientific environments, objects move fast and precision matters. These settings are full of high-speed conveyor belts, precise milling machines, and equipment that needs real-time monitoring. On paper, this seems like a great match for event cameras, which excel in capturing rapid motion with minimal latency. But the reality is more complicated. In most factories, computer vision systems are already deeply integrated into broader automation pipelines. Changing a single sensor, even for one with better temporal precision, often isn't worth the disruption. If a factory wants slightly better accuracy, they can usually just upgrade to a 100+ Hz version of their existing system. Need to count bottles flying past on a line? A cheap line-scanning camera will do the trick. Want to monitor vibrations on a machine? A one-dollar vibration sensor is simpler and more reliable.


Some also consider battery-powered monitoring devices for warehouses and other low-maintenance settings, where low-power vision sensors could make sense. But even there, the appeal is limited in my experience. Someone still has to replace or recharge the battery eventually, and most computer vision systems already do a good enough job without needing an event-based solution. That said, there are niche applications where event cameras could shine. High-speed scientific imaging is one example, such as tracking combustion events in engines, analysing lightning, or sorting high-throughput cell flows in cytometry. Today, these tasks often rely on bulky high-speed cameras like a Phantom, which require artificial lighting, heavy cooling systems, and massive data pipelines, often just to record a few seconds of footage.

Event cameras could offer a much more compact and energy-efficient alternative. They don't need high-speed links or huge data buffers, and they can be powered through a USB port and still achieve sub-millisecond latency. I think that event cameras could become a competitor to high-speed cameras in scientific settings, with a focus on small form factor and mobile applications. One challenge and active research area here is reconstructing high-quality video frames from events. The good news is that we're seeing steady progress, and there's even a public leaderboard tracking the latest benchmarks here. However, these methods currently consume an entire 100 W GPU, so they're run offline. One of the biggest hurdles is collecting good ground truth data for what reconstructed frames should look like, which is why most researchers still rely on simulation. But if such reconstruction models get very good, I could imagine a business model where a user uploads the raw events, chooses the desired fps, and gets back videos with up to 10k fps. Pricing is per frame reconstructed, be it 10 Hz for 2 hours or 1 kHz for 1 second.
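As a quick back-of-the-envelope on that per-frame pricing idea (my numbers, not the author's; the price per frame is a placeholder), the two usage patterns quoted above translate into very different frame counts:

```python
# Back-of-the-envelope for per-frame pricing; the price per frame is invented,
# only the frame counts follow from the examples in the text.
def frames(fps, seconds):
    return int(fps * seconds)

price_per_frame = 0.0001  # hypothetical, in dollars
for label, fps, secs in [("10 Hz for 2 hours", 10, 2 * 3600),
                         ("1 kHz for 1 second", 1000, 1)]:
    n = frames(fps, secs)
    print(f"{label}: {n} frames -> ${n * price_per_frame:.2f}")
# 10 Hz for 2 hours: 72000 frames -> $7.20
# 1 kHz for 1 second: 1000 frames -> $0.10
```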


A comparison between a high-speed camera and an event camera. The high-speed camera workflow shows three steps: recording, postprocessing with video compression, and the result as high frame rate RGB video. This path requires handling a large amount of data. The event camera workflow shows recording followed by frame reconstruction, leading to high frame rate greyscale video, while generating much less data overall. The diagram highlights how event cameras provide efficient high-speed imaging compared to traditional high-speed cameras.

Automotive

Event cameras in cars have had several testers, but none have stuck with them so far. While the technical case is strong, the path to adoption is complex, shaped by legacy systems, cost constraints, and the structure of the automotive supply chain. Modern cars already rely on a robust stack of sensors. RGB cameras, LiDAR, RADAR, and ultrasonic sensors all work together to enable functions like adaptive cruise control, lane keeping, emergency braking, and parking assistance. These systems are designed to be redundant and resilient across various weather and lighting conditions. For a new sensor like an event camera to be added, it must address a specific problem that the current setup cannot. In the chart below, I marked the strengths and weaknesses of sensors currently employed in cars. I rated each one on a scale from 1 to 5. Event cameras (orange) have a considerable overlap with RGB cameras (green), apart from the performance in glare or high-contrast scenarios, which is covered well by RADAR (red).


Nevertheless, the unique selling point of combined high temporal resolution and high dynamic range could be a differentiator. For example, detecting fast-moving objects to avoid collisions could become safety-critical. But in practice, these are edge cases, and current systems already perform well enough in most of them. The reality of integration is that automotive development happens within a well-defined supply chain. Original equipment manufacturers (OEMs) like Toyota or Mercedes-Benz rarely integrate sensors themselves. Instead, they depend on Tier 1 suppliers like Bosch or Valeo to deliver complete perception modules. Those Tier 1s work with Tier 2 suppliers who provide the actual components, including sensors and chips.

For event cameras to make it into a production vehicle, they need to be part of a fully validated module offered by a Tier 1 supplier. This includes software, calibration, diagnostics, and integration support. For startups that focus on event camera use cases, this creates a huge barrier, as the margins are already thin. You need a path to integration that matches the way the industry actually builds cars. Companies like NVIDIA are even starting to reshape this landscape. Their Drive Hyperion platform bundles sensors and compute into a single integrated solution, reducing the role of traditional Tier 1 suppliers. Hyperion already supports a carefully selected list of cameras, LiDARs, and RADARs, along with tools for simulation, data generation, and sensor calibration. Event cameras aren't on that list yet. That means research teams inside OEMs have no easy way to test or simulate their output, let alone train models on it.

The way automotive AI systems are being designed has also changed. Instead of having separate modules for tasks like lane detection or pedestrian recognition, modern approaches rely on end-to-end learning. Raw sensor data is fed into a large neural network that directly outputs steering and acceleration commands. This architecture scales better but makes it harder to add a new modality. Adding a sensor like an event camera doesn't just mean collecting or simulating new data. It also means rewriting the training pipeline and handling synchronization with other sensors. Most OEMs are still trying to get good reliability from their existing stack. They're not in a rush to adopt something fundamentally new, especially if it comes without mature tooling.


Cost is another serious constraint. Automotive suppliers operate on tight margins, and every component is scrutinized. For instance, regulators in Europe and elsewhere are mandating automatic emergency braking. On paper, this sounds like a perfect opportunity for event cameras, especially to detect pedestrians at night. But in reality, carmakers are more likely to spend 3 extra dollars to improve their headlights than to introduce a new sensor that complicates the system. In fact, the industry trend is toward reducing the number of sensors. Fewer sensors mean simpler calibration, fewer failure modes, and lower integration overhead. From that perspective, adding an event camera can feel like a step in the wrong direction, unless it is able to replace another modality altogether.

One area where event cameras might gain traction sooner is in the cabin. Driver and passenger monitoring systems are becoming mandatory in many regions. These systems typically use a combination of RGB and infrared cameras to detect gaze direction, drowsiness, and presence. An event camera could potentially replace both sensors, offering better performance in high-contrast lighting conditions, such as when bright headlights shine into the cabin at night. Cabin monitoring systems are often independent of the main driving compute platform, they have faster iteration cycles, and the integration hurdles are lower. Once an event camera is proven in this domain, it could gradually be expanded to support gesture control, seat occupancy, or mood estimation.

Visual light communication (VLC) could become a relevant application in autonomous vehicles. The idea is simple: LEDs that are already in our environment (traffic lights, street lamps, brake lights, even roadside signs) can modulate their intensity at kilohertz rates to broadcast short messages, while a receiver on the vehicle decodes them optically. Event cameras are a particularly good fit for this because they combine microsecond temporal resolution with useful spatial resolution, letting a single sensor both localize the source and decode high-frequency flicker without the rolling-shutter or motion-blur issues that plague standard frame sensors. Recent work from Woven by Toyota is a good snapshot of where this is headed: they released an event-based VLC dataset with synchronized frames, events, and motion capture, and demonstrated LED beacons flickering at 5 kHz encoded via inter-blink intervals. While VLC is not going to be the main driver to integrate event cameras into cars, it's one 'part of the package' application.
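To illustrate the inter-blink-interval idea, here is a toy Python sketch (not the Woven by Toyota method): it assumes ON events from one beacon have already been isolated, and that a hypothetical symbol table maps gap lengths to bits.

```python
# Illustrative only: decode a beacon whose message is encoded in inter-blink
# intervals. The symbol table, tolerance, and timestamps are invented.
import numpy as np

SYMBOLS = {200: "0", 400: "1"}   # hypothetical mapping: interval in µs -> bit

def decode_intervals(timestamps_us, tolerance_us=50):
    """Map inter-event intervals at one beacon's pixel cluster to bits."""
    bits = []
    for dt in np.diff(np.sort(timestamps_us)):
        for interval, bit in SYMBOLS.items():
            if abs(dt - interval) <= tolerance_us:
                bits.append(bit)
                break
    return "".join(bits)

# ON events from a beacon blinking with 200 µs / 400 µs gaps (pattern "0110")
ts = np.array([0, 200, 600, 1000, 1200])
print(decode_intervals(ts))  # -> "0110"
```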

Automotive adoption moves slowly. Getting into a car platform can take five to ten years, and the technical hurdles are only part of the story. To succeed, companies developing event cameras need staying power and, ideally, strategic partnerships with Tier 1 suppliers or compute platform providers. For a small startup, this is a tough road to walk alone. For the moment, in-cabin sensing might be the most realistic starting point.




Defence

Many of the technologies we now take for granted started with defense: GPS, the internet, radar, night vision, even early AI. Defense has always been an early adopter of bleeding-edge tech, not because it's trendy, but because the stakes demand it. Systems need to function in low visibility, track fast-moving targets, and operate independently in environments where there's no GPS, no 5G, and no time to wait for remote instructions. In such cases, autonomy is a requirement and modern military operations are increasingly autonomous. Drone swarms, for example, don't rely on one pilot per unit anymore. A central command issues a mission, and the swarm executes it even deep behind enemy lines. That shift toward onboard intelligence makes the case for sensors that are low-latency, low-power, and can extract meaningful information with minimal compute. That's where event cameras can play a role. Their high temporal resolution and efficiency make them well suited to motion detection and fast reaction loops in the field.



Drone detection based on time surfaces at the European Defence Tech Hackathon

We put this into practice at the European Defence Tech Hackathon in Paris last December. The Ukrainian military had outlined their biggest challenges, and drones topped the list by a mile. Over 1.2 million were deployed in Ukraine last year alone according to its Ministry of Defence, most of them manually piloted First Person View (FPV) drones. They include variants that carry a spool of lightweight optical fibre, often 10 km long, that allows the pilot to control the drone by wire, without radio signals; see the photo below. And Ukraine's target for 2025 is a staggering 4.5 million. Main supply routes are now completely covered in anti-drone nets, and fields close to the frontline are covered with optical fibre. Both sides are racing to automate anti-drone systems. At that hackathon in December, we developed an event-based drone detection system and won first place. That experience made it clear that the demand is real! An enemy drone shut down can mean a soldier's life saved. There's also a pragmatic reason why the defense sector is attractive: volume. Every drone, loitering munition, or autonomous ground vehicle is a potential autonomous system. Event cameras aren't the only option, but they're a good candidate when fast response times are crucial and power budgets are tight.


An FPV drone with an optical fibre spool attached. Photo by Maxym Marusenko/NurPhoto

The European Union has committed €800 billion to defense and technological sovereignty. Whether that funding reaches startups effectively is another question, but the political intent is clear. Europe wants to control more of its military tech stack, and that opens the door to new players with homegrown solutions. Already today we see many new defence startups on the scene, a lot of them focusing on AI and autonomy. Defence comes with a lot of red tape, whether it's access to real data, the reliance on slow government funding, the fact that it can resemble a walled garden, or simply the limited options in terms of exits. But out of all the sectors I've looked into, defense stands out as the most likely place for event cameras to find product-market fit first. There's real demand, shorter adoption cycles, and a willingness to experiment. There are new companies, Optera in Australia and TempoSense (https://tempo-sense.com/) in the US (recent slides with more info), that are experimenting with making event sensors for the defence sector, and Prophesee in Europe now openly presents their work on drone navigation, detection and anti-drone tech. Also Leonardo, the Italian defence company, released a paper experimenting with event cameras for drone detection.

[To be continued in next post due to the limitation of characters per post]
 

Frangipani

Top 20
[Continuation of blog post "Event cameras in 2025, Part 1" by Gregor Lenz]

Wearables

Back in 2021, I explored the use of event cameras for eye tracking. I had conversations with several experts in the field, and their feedback was clear: for most mobile gaze tracking applications, even a simple 20 Hz camera was good enough. In research setups that aim to study microsaccades or other rapid eye movements, the high temporal resolution of event cameras could be useful. But even then, a regular 120 Hz camera might still get the job done.

What I didn't fully appreciate back then was the importance of power consumption in wearable devices. My thinking was centered around AR and VR headsets, which already include high refresh rate displays that consume significant power. In that context, saving a few milliwatts didn't seem that important. But smart glasses are a different story. They need to run for hours or days, and every bit of energy efficiency matters to prolong battery life and allow for slimmer designs. Nowadays spectacles [sic]

Prophesee recently announced a partnership with Tobii, who are a major supplier of eye tracking solutions. Zinn Labs, one of the early adopters of event-based gaze tracking, were acquired in February 2025. These developments suggest that there is traction for the technology, especially in applications where power efficiency and responsiveness are key. According to Tobi Delbruck from ETH Zurich, if spectacles catch on like smartphones, then this will be a true mass production of event vision sensors. That said, the broader question remains whether the smart glasses market will scale any time soon. Event cameras may be a good fit from a technical perspective, but the commercial success of wearables will depend on many other factors beyond just sensor performance.

Prototype by Zinn Labs that includes a GenX320 sensor.


A Note on Robotics

Even though fast sensors should be great for fine-grained, low-latency loop closure in control, this field is dealing with very different challenges at the moment, at least for building Autonomous Mobile Robots or Humanoids. Controlling an arm or a leg using Visual Language Action (VLA) models is incredibly difficult, and neither input frame rate nor dynamic range is the limitation. Even once more performant models become available, you'll have to deal with the same challenges as in the Automotive sector, which is that adding a new modality needs lots of new (simulated) data.

Conclusion

Event cameras have come a long way, but they are still searching for the right entry points into the mainstream. The most promising early markets seem to be in defense, where speed and efficiency are critical for drones and autonomous systems, and in wearables, where power constraints make their efficiency truly valuable. Other sectors like space, automotive, and manufacturing show interesting opportunities, but adoption is likely to remain slower and more niche for now. The trajectory of this technology suggests that with persistence and the right applications, event cameras will carve out their role in the broader sensor landscape.

In Part 2, I will discuss the technological hurdles that event cameras are facing today.







Event cameras in 2025, Part 2

August 20, 2025 · 14 min · 2781 words

In Part 1, I provided a high-level overview of different industry sectors that could potentially see the adoption of event cameras. Apart from the challenge of finding the right application, there are several technological hurdles to clear before event cameras can reach a mass audience.

Sensor Capabilities

Today's most recent event cameras are summarised in the table below.

| Camera Supplier | Sensor | Model Name | Year | Resolution | Dynamic Range (dB) | Max Bandwidth (Mev/s) |
|---|---|---|---|---|---|---|
| iniVation | Gen2 DVS | DAVIS346 | 2017 | 346×260 | ~120 | 12 |
| iniVation | Gen3 DVS | DVXPlorer | 2020 | 640×480 | 90-110 | 165 |
| Prophesee | Sony IMX636 | EVK4 | 2020 | 1280×720 | 120 | 1066 |
| Prophesee | GenX320 | EVK3 | 2023 | 320×320 | 140 | |
| Samsung | Gen4 DVS | DVS-Gen4 | 2020 | 1280×960 | | 1200 |

Insightness was sold to Sony, and CelePixel partnered with Omnivision but hasn't released anything in the past 5 years. Over the past decade, we have seen resolution grow from 128x128 to HD, but that's actually not always good. The last column in the table above describes the number of million events per second, which can easily be reached when the camera is moving fast, such as on a drone. A paper by Gehrig and Scaramuzza suggests that in low light and high speed scenarios, performance of high-res cameras is actually worse than when using fewer, but bigger, pixels, due to high per-pixel event rates that are noisy and cause ghosting artifacts.

In areas such as defence, higher resolution and contrast sensitivity, as well as capturing the short/mid-range infrared spectrum, are going to be desirable, because range is so important. SCD USA made the MIRA 02Y-E available last year; it includes an optional event-based readout that enables tactical forces to detect laser sources. Using the event-based output, it advertises a frame rate of up to 1.2 kHz. In space, the distances to the captured objects are enormous, and therefore high resolution and light sensitivity are of utmost importance.

In short-range applications such as eye tracking for wearables, a GenX320 with lower resolution but high dynamic range and ultra-low-power modes is going to be more interesting. For scientific applications, NovoViz recently announced a new SPAD (single photon avalanche diode) camera using event-based outputs!

One thing is clear: today's binary microsecond spikes are rarely the right format. Much like Intel's Loihi 2 shifted from binary spikes to richer spike payloads because they realised that the communication overhead was too high otherwise, future event cameras could emit multi-bit "micro-frames" or tokenizable spike packets. These would represent short-term local activity and could be directly ingested by ML models, reducing the need for preprocessing altogether. Ideally there's a trade-off between information density and temporal resolution that can be chosen depending on the application.

A key trend is hybrid vision sensors that combine RGB and event frames. At ISSCC 2023, three papers showed new generations of hybrid vision sensors, which output both RGB frames at fixed rates and events in between.

| Sensor | Event output type | Timing & synchronization | Polarity info | Typical max rate |
|---|---|---|---|---|
| Sony 2.97 μm | Binary event frames (two separate ON/OFF maps) | Synchronous, ~580 µs "event frame" period | 2 bits per pixel (positive & negative) | ~1.4 GEvents/s |
| OmniVision 3-wafer | Per-event address-event packets (x, y, t, polarity) | Asynchronous, microsecond-level timestamps | Single-bit polarity per event | Up to 4.6 GEvents/s |
| Sony 1.22 μm, 35.6 MP | Binary event frames with row-skipping & compression | Variable frame sync, up to 10 kfps per RGB frame | 2 bits per pixel (positive & negative) | Up to 4.56 GEvents/s |

The Sony 2.97 μm chip uses aggressive circuit sharing so that four pixels share one comparator and analog front-end. Events are not streamed individually but are batched into binary event frames every ~580 µs, with separate maps for ON and OFF polarity. This design keeps per-event energy extremely low (~57 pJ) and allows the sensor to reach ~1.4 GEvents/s without arbitration delays. Because output is already frame-like, it fits naturally into existing machine learning pipelines that expect regular image-like input at deterministic timing.

The OmniVision 3-wafer is different: a true asynchronous event stream is preserved. A dedicated 1MP event wafer with in-pixel time-to-digital converters stamps each event with microsecond accuracy. Skip-logic and four parallel readout channels give a 4.6 GEvents/s throughput. This is closer to the classic DVS concept, ideal for ultra-fast motion analysis or scientific experiments where every microsecond matters. The integrated image signal processor can fuse the dense 15MP RGB video with the sparse event stream in hardware for applications such as 10 kfps slow-motion videos.

The Sony 1.22 μm hybrid sensor aimed at mobile devices combines a huge 35.6 MP RGB array with a 2 MP event array. Four 1.22 µm photodiodes form each event pixel (4.88 µm pitch). The event side operates in variable-rate event-frame mode, outputting up to 10 kfps inside each RGB frame period. On-chip event-drop filters and compression dynamically reduce data volume while preserving critical motion information for downstream neural networks (e.g. deblurring or video frame interpolation). It is a practical demonstration that event frames and RGB can be tightly synchronized so that a phone SoC can consume both without exotic drivers.


Kodama et al. presented a sensor that outputs variable-rate binary event frames next to RGB.

Guo et al. presented a new generation of hybrid vision sensor that outputs binary events.
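As a quick sanity check on the Sony 2.97 μm figures quoted above, using only the numbers in the text: at roughly 57 pJ per event, a worst-case 1.4 GEvents/s readout costs on the order of 80 mW for the event path alone (a rough upper bound that ignores the rest of the chip).

```python
# Rough upper bound from the quoted figures: energy per event x max event rate.
energy_per_event_j = 57e-12      # ~57 pJ per event
max_event_rate_eps = 1.4e9       # ~1.4 GEvents/s
print(f"{energy_per_event_j * max_event_rate_eps * 1e3:.0f} mW")  # -> 80 mW
```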

I find the trend towards event frames interesting and in line with what most researchers have been feeding their machine learning models anyway. In either case, the event camera sensor has not reached its final form yet. The question is always in what way events should be represented in order to be compatible with modern machine learning methods.


Event Representations

Most common approaches aggregate events into image-like representations such as 2D histograms, voxel grids, or time surfaces. These are then used to fine-tune deep learning models that were pre-trained on RGB images. This leverages the breadth of existing tooling built for images and is compatible with GPU-accelerated training and inference. Moreover, it allows for adaptive frame rates, aggregating only when there's activity and potentially saving on compute. However, this method discards much of the fine temporal structure that makes event cameras valuable in the first place. We still lack a representation for event streams that works well with modern ML architectures and preserves their sparsity. Event streams are a new data modality, just like images, audio, or text, but one for which we haven't yet cracked the "tokenization problem." A single ON or OFF event contains very little semantic information. Unlike a word in a sentence, which can encode a concept, even a dozen events reveal almost nothing about the scene. This makes direct tokenization of events inefficient and ineffective. What we need is a representation that can summarize local spatiotemporal structure into meaningful, higher-level primitives. Something akin to a "visual word" for events.

It's also inherently inefficient: the tensors produced are full of zeros, and latency grows with the size of the memory window. This becomes problematic for real-time applications where a long temporal context is needed but high responsiveness is crucial.
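For readers who haven't worked with these representations, here is a minimal numpy sketch of the two simplest aggregation schemes mentioned above, a per-pixel event histogram and an exponential time surface; array sizes, the decay constant, and the toy events are arbitrary.

```python
# Minimal sketches of two common aggregations: a 2D event histogram and an
# exponential time surface. Events are arrays of x, y, t (polarity ignored).
import numpy as np

def event_histogram(xs, ys, shape=(480, 640)):
    """Count events per pixel over a window -> image-like tensor for a CNN."""
    img = np.zeros(shape, dtype=np.float32)
    np.add.at(img, (ys, xs), 1.0)
    return img

def time_surface(xs, ys, ts, shape=(480, 640), tau=50e-3):
    """Exponentially decayed timestamp of the most recent event at each pixel."""
    last = np.full(shape, -np.inf)
    for x, y, t in zip(xs, ys, ts):
        last[y, x] = t                        # keep the most recent event time
    t_ref = ts[-1]
    surface = np.exp((last - t_ref) / tau)    # recent activity ~1, stale ~0
    surface[np.isinf(last)] = 0.0             # pixels that never fired
    return surface

xs = np.array([10, 10, 20]); ys = np.array([5, 5, 7]); ts = np.array([0.00, 0.01, 0.02])
print(event_histogram(xs, ys)[5, 10], time_surface(xs, ys, ts)[5, 10])
```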

I think that graphs, especially dynamic, sparse graphs, are an interesting abstraction to be explored. Each node could represent a small region of correlated activity in space and time, with edges encoding temporal or spatial relationships. Recent work such as HugNet v2, DAGr, or EvGNN hardware applies Graph Neural Networks (GNNs) to event data. But several challenges remain: to generate such a graph, we need a lot of memory for all those events, and the unpredictable number of incoming events makes computation extremely inefficient. This is where specialized hardware accelerators will need to come in, because dynamically fetching events is expensive. By combining event cameras with efficient "graph processors," we could offload the task of building sparse graphs directly on-chip, producing representations that are ready for downstream learning. Temporally sparse, graph-based outputs could serve as a robust bridge between raw events and modern ML architectures.

If you want to preserve sparsity, you need tokens that mean something. Individual ON/OFF events are too atomic to be useful tokens, so a practical middle ground is a two-stage model: a lightweight, streaming "tokenizer" that clusters local spatiotemporal activity into short-lived micro-features, followed by a stateful temporal model that reasons over those features. The tokenizer can be as simple as centroiding event bursts in a small spatial neighborhood with a short time constant, or as involved as a dynamic graph builder that fuses polarity, age, and motion cues. Either way, the goal is to transform a flood of spikes into a bounded, variable-rate set of tokens with stable meaning. Next let's explore the type of models that work well with event camera data.
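As a toy illustration of that first stage, the sketch below centroids event bursts in a small spatial neighbourhood with a short time constant and emits (centroid, count) tokens once a burst goes quiet; the radius and time thresholds are invented, and a real tokenizer would be considerably smarter.

```python
# Toy streaming "tokenizer": group events into bursts by spatial proximity and
# recency, emit one token per burst when it goes quiet. Thresholds are arbitrary.
from dataclasses import dataclass, field

@dataclass
class Burst:
    xs: list = field(default_factory=list)
    ys: list = field(default_factory=list)
    t_last: float = 0.0

def tokenize(events, radius=4, quiet=2e-3):
    """events: iterable of (x, y, t), with t in seconds and non-decreasing."""
    bursts, tokens = [], []
    for x, y, t in events:
        # close bursts that have gone quiet and turn them into tokens
        for b in [b for b in bursts if t - b.t_last > quiet]:
            tokens.append((sum(b.xs) / len(b.xs), sum(b.ys) / len(b.ys), len(b.xs)))
            bursts.remove(b)
        # attach the event to a nearby active burst, or start a new one
        for b in bursts:
            if abs(x - b.xs[-1]) <= radius and abs(y - b.ys[-1]) <= radius:
                b.xs.append(x); b.ys.append(y); b.t_last = t
                break
        else:
            bursts.append(Burst([x], [y], t))
    # flush the remaining bursts
    tokens += [(sum(b.xs) / len(b.xs), sum(b.ys) / len(b.ys), len(b.xs)) for b in bursts]
    return tokens

print(tokenize([(10, 10, 0.000), (11, 10, 0.001), (50, 60, 0.010), (51, 61, 0.011)]))
# -> [(10.5, 10.0, 2), (50.5, 60.5, 2)]
```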


Machine Learning Models


At their core, event cameras are change detectors, which means that we need memory in our machine learning models to remember where things were before they stopped moving. We can bake memory into the model architecture by using recurrence or attention. For example, Recurrent Vision Transformers and their variants maintain internal state across time and can handle temporally sparse inputs more naturally. These methods preserve temporal continuity, but there's a catch: most of these methods still rely on dense, voxelized inputs. Even with more efficient state-space models replacing LSTMs and BPTT (Backpropagation Through Time), we're still processing a lot of zeros. Training is faster, but inference is still bottlenecked by inefficient representations.

Nowadays larger AI models are being pruned, distilled, and quantised to provide efficient edge models that can generalise well. Even TinyML models are students of a larger model. We have to say goodbye to the idea of training tiny models from scratch for commercial event camera applications, because they won't perform well enough in the real world.

Spiking neural networks (SNNs) are sometimes touted as a natural fit for event data. But in their traditional form, with binary activations and reset mechanisms, leaky integrate-and-fire (LIF) neurons are handcrafted biological abstractions. If we learned anything from machine learning, it's that handcrafted designs are inherently flawed. And neurons are an incredibly complex thing to model, as efforts such as CZI's Virtual Cells and DeepMind's cell simulations show. So let's not get hung up on the artificial neuron model itself, and instead use what works well, because the field is moving incredibly fast.

I'm very optimistic about state space models (SSMs) for event vision. Instead of baking memory into heavy recurrence or dense attention, an SSM treats the scene's latent dynamics as a continuous-time system and then discretizes only for inference. This means a single trained model can adapt to many operating modes: you can run it at different inference rates or even update state event-by-event with variable time steps, without retraining, simply by changing the integration step. That flexibility is a good match for sensors whose activity is unpredictable.
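A small numerical sketch of why that works, under the usual diagonal linear SSM assumption: the continuous-time parameters stay fixed, and only the discretization step changes with each event's time gap. The parameters below are random placeholders, not a trained model.

```python
# Diagonal continuous-time SSM: the same (A, B, C) can be discretized with
# whatever time step each event dictates, so changing the rate needs no retraining.
import numpy as np

rng = np.random.default_rng(1)
n = 8                                   # state dimension
A = -np.abs(rng.normal(size=n))         # stable diagonal continuous-time dynamics
B = rng.normal(size=n)
C = rng.normal(size=n)

def step(state, u, dt):
    """Zero-order-hold discretization of dx/dt = A x + B u for one step of length dt."""
    Ad = np.exp(A * dt)                 # element-wise because A is diagonal
    Bd = (Ad - 1.0) / A * B             # exact ZOH input term for diagonal A
    state = Ad * state + Bd * u
    return state, C @ state

state = np.zeros(n)
# the same model consumes events arriving at irregular intervals...
for u, dt in [(1.0, 0.001), (0.5, 0.010), (2.0, 0.0003)]:
    state, y = step(state, u, dt)
# ...or runs at a fixed 100 Hz, just by changing dt
state, y = step(state, 0.0, 0.01)
print(y)
```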


Processors

Meyer et al. implemented an S4D SSM on Intel's Loihi 2, constraining the state space to be diagonal so that each neuron evolves independently. They mapped these one-dimensional state updates directly to Loihi's programmable neurons and carefully placed layers to reduce inter-core communication, which resulted in much lower latency and energy use than a Jetson GPU in true online processing. I think it's a compelling demonstration that SSMs can be run efficiently on stateful AI accelerator hardware and I'm curious what else is coming out of that.

Some people argue that because event cameras output extremely sparse data, we can save energy by skipping zeros in the input or in intermediate activations. But I don't buy that argument, because while the input might be much sparser than an RGB frame, the bulk of the computation actually happens in intermediate layers and works with higher-level representations, which are hopefully similar for both RGB and event inputs. That means that in AI accelerators we can't exploit spatial event camera sparsity, and inference cost between RGB and event frames is essentially the same. Of course we might get different input frame rates / temporal sparsity, but those can be exploited on GPUs as well.

Keep in mind that on mixed-signal hardware, the rules are different. There's a breadth of new materials being explored, such as memristors and spintronics. The basic rule for analog is: if you need to convert from analog to digital too often, for error correction or because you're storing states or other intermediate values, your efficiency gains go out of the window. Mythic AI had to painfully learn that and almost tanked, and Rain AI also pivoted from its original analog hardware and faces an uncertain future. The brain uses a mixture of analog (graded potentials, dendritic integration) and digital (spikes) signals, and we can replicate this principle in silicon. But since the circuitry is the memory at the same time, it needs an incredible amount of space, and is organised in 3D. That's really costly to do in silicon, and the major challenge is getting the heat out, which is much easier in 2D.

I think that the asynchronous compute principle is key for event cameras, but we need to realise that naïve asynchrony is not constructive. Think about a roundabout, and how it manages the flow of traffic without any traffic lights. When the traffic volume is low, every car is more or less in constant motion, and latency to cross the roundabout is minimal. As the volume of traffic grows, a roundabout becomes inefficient, because the movement of any car depends on the decisions of cars nearby. For high traffic flow, it becomes more efficient to use traffic lights to batch process the traffic for multiple lanes at once, which achieves the highest throughput of cars. The same principle applies for events. When you have few pixels activated, you achieve the lowest latency when you process them as they come in, as in a roundabout. But as the number of events per second gets larger, for example because you're moving the camera on a car or a drone, you need to get out the traffic lights, and start and stop larger batches of events. Ideally the size of the batch depends on the event rate.
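In code, the roundabout-versus-traffic-light policy could be as simple as the toy rule below: per-event processing while the measured event rate is low, and rate-proportional batches once it gets high. The thresholds are arbitrary.

```python
# Toy batching policy: process events one by one at low rates ("roundabout"),
# switch to batches that grow with the event rate at high rates ("traffic lights").
def choose_batch_size(events_per_sec, async_threshold=10_000, batch_window_ms=1.0):
    if events_per_sec < async_threshold:
        return 1                                              # per-event processing
    return max(1, int(events_per_sec * batch_window_ms / 1000))  # ~1 ms worth of events

for rate in [500, 5_000, 50_000, 2_000_000]:
    print(f"{rate:>9} ev/s -> batch size {choose_batch_size(rate)}")
```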

For more info about neuromorphic chips, I refer you to Open Neuromorphic's Hardware Guide.

Conclusion

Here are my main points:

  • Event cameras won't go mainstream until they move away from binary events to richer output formats, whether from the sensor directly or an attached preprocessor.
  • Event cameras follow the trajectory of other sensors that were developed and improved within the context of defence applications.
  • We need an efficient representation that is compatible with modern ML architectures. It might well be event frames in the end.
  • Keep it practical. Biologically-inspired approaches should not distract from deployment-grade ML solutions.

The recipe that scales is: build a token stream that carries meaning, train it with cross-modal supervision and self-supervision that reflects real sensor noise, keep a compact scene memory that is cheap to update, and make computation conditional on activity rather than on a fixed clock.

Binary events don't contain enough information on their own, so they must be aggregated in one form or another. Event sensors might move from binary outputs toward richer encodings at the pixel level, attach a dedicated processor to output richer representations, or simply output what the world already knows well: another form of frames. While many researchers (including me) originally set out to work with binary events directly, I think it is time to swallow a bitter pill and accept that computer vision will depend on frames for the foreseeable future.

My bet is currently on the last of those options, because the simplest solutions tend to win.

Deep learning started out with 32-bit floating point, dense representations, and neuromorphic started out on the other end of the spectrum at binary, extremely sparse representations. They are converging, with neuromorphic realising that binary events are expensive to transmit, and deep learning embracing 4-bit activations and 2:4 sparsity.

Interesting research directions for event cameras today are about dynamic graph representations for efficient tokenization, state space models for efficient inference, and lossy compression for smaller file sizes. To unlock the full potential of event cameras, we need to solve the representation problem to make it compatible with modern deep learning hardware and software, while preserving the extreme sparsity of the data. Also, we shouldn't be too focused on biologically-inspired processing if we want this thing to scale anytime soon. I think that either the sensors must evolve to emit richer, token-friendly outputs, or they must be paired with dedicated pre-processors that produce high-level, potentially graph-based abstractions. Once that happens, event cameras become easy enough to work with to reach the mainstream.

Ultimately, the application dictates the design. Gesture recognition does not need microsecond temporal resolution. Eye tracking doesn't need HD spatial resolution. And sometimes a motion sensor that will wake a standard camera will be the easiest solution.

 

Andy38

The hope of potential generational wealth is real
Nice to see a bit of life back in the old girl... The question on my mind is whether it's been artificially pumped to lure retail in only to have the rug pulled, or was this caused by an entity that knows something we don't? Volume has been massive, but the ASX is funded by retail sheep, who jump on companies whenever there's upwards movement. Let's hope it's the latter and something is announced by the company soon. I've bumped heads with several on here, but I don't think there's a stock on the ASX that has a more loyal shareholder base than this one, and it'd be fantastic to be rewarded for many years of patience.
It's been a while @robsmark but great to see some of the "old timers" back on here and commenting. It's been a rough few years. Maybe still a touch early for the golden goose to lay, but us loyal shareholders will be holding on for dear life until that moment hopefully arises!
 


Frangipani

Top 20
Another highly entertaining eejournal.com article featuring BrainChip by Max Maxfield:



October 9, 2025

Bodacious Buzz on the Brain-Boggling Neuromorphic Brain Chip Battlefront

by Max Maxfield

Hold onto your hippocampus because the latest neuromorphic marvels are firing on all synapses. To ensure we're all tap-dancing to the same skirl of the bagpipes, let's remind ourselves that the term "neuromorphic" is a portmanteau that combines the Greek words "neuro" (relating to nerves or the brain) and "morphic" (relating to form or structure).

Thus, "neuromorphic" literally means "in the form of the brain." In turn, "neuromorphic computing" refers to electronic systems inspired by the human brain's functioning. Instead of processing data step-by-step, like traditional computers, neuromorphic chips attempt to mimic how neurons and synapses communicate, utilizing spikes of electrical activity, massive parallelism, and event-driven operation.

The focus of this column is on hardware accelerator intellectual property (IP) functions, specifically neural processing units (NPUs), that designers can incorporate into their System-on-Chip (SoC) devices. Some SoC developers use third-party NPU IPs, while others develop their own IPs in-house.

I was just chatting with Steve Brightfield, who is CMO at BrainChip. As you may recall, BrainChip's claim to fame is its Akida AI acceleration processor IP, which is inspired by the human brain's cognitive capabilities and energy efficiency. Akida delivers low-power, real-time AI processing at the edge, utilizing neuromorphic principles for applications such as vision, audio, and sensor fusion.

The vast majority of NPU IPs accelerate artificial neural networks (ANNs) using large arrays of multiply-accumulate (MAC) units. These dense matrix-vector operations are energy-hungry because every neuron participates in every computation, and the hardware must move a lot of data between memory and the MAC array.

By comparison, Akida employs a neuromorphic architecture based on spiking neural networks (SNNs). Akida's neurons don't constantly compute weighted sums; instead, they exchange spikes (brief digital pulses) only when their internal "membrane potential" crosses a threshold. This makes Akida event-driven; that is, computation only occurs when new information is available to process.

In contrast to the general-purpose MAC arrays found in conventional NPUs, Akida utilizes synaptic kernels that perform weighted event accumulation upon the arrival of spikes. Each synapse maintains a small local weight and adds its contribution to a neuron's membrane potential when it receives a spike. This achieves the same effect as multiply-accumulate, but in a sparse, asynchronous, and energy-efficient manner that's more akin to a biological brain.
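The paragraph above can be illustrated with a schematic Python sketch; this is my own toy model of event-driven weighted accumulation with a threshold-and-reset neuron, not BrainChip's actual implementation, and all sizes and weights are invented.

```python
# Schematic event-driven accumulation: work happens only when a spike arrives,
# and an output neuron fires when its membrane potential crosses a threshold.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 64, 8
W = rng.normal(scale=0.5, size=(n_in, n_out))    # per-synapse weights
potential = np.zeros(n_out)                      # membrane potentials
THRESHOLD = 1.0

def on_spike(in_idx):
    """Called only when input neuron `in_idx` fires; everything else stays idle."""
    global potential
    potential += W[in_idx]                       # sparse weighted accumulation
    fired = np.where(potential >= THRESHOLD)[0]  # output spikes
    potential[fired] = 0.0                       # reset after firing
    return fired

# Only 3 of 64 inputs are active, so only 3 small updates happen, versus a
# dense layer that would compute all 64 x 8 products every step.
for spike in [3, 17, 3]:
    print(on_spike(spike))
```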


Akida self-contained AI acceleration processor IP (Source: BrainChip)

According to BrainChip's website, the Akida self-contained AI neural processor IP features the following:
  • Scalable fabric of 1 to 128 nodes
  • Each neural node supports 128 MACs
  • Configurable 50K to 130K embedded local SRAM per node
  • DMA for all memory and model operations
  • Multi-layer execution without host CPU
  • Integrate with any Microcontroller or Application Processor
  • Efficient algorithmic mesh

Hang on! I just told you that, "In contrast to the general-purpose MAC arrays found in conventional NPUs, Akida utilizes synaptic kernels…" So, it's a tad embarrassing to see the folks at BrainChip referencing MACs on their website. The thing is that, in the case of the Akida, the term "MAC" is used somewhat loosely, more as an engineering shorthand than as a literal, synchronous multiply-accumulate unit like those found in conventional GPUs and NPUs.

While each Akida neural node contains hardware that can perform multiply-accumulate operations, these operations are event-driven and sparsely activated. When an input spike arrives, only the relevant synapses and neurons in that node perform a small weighted accumulation; there's no continuous clocked matrix multiplication going on in the background.

So, while BrainChip's documentation calls them "MACs," they're actually implemented as neuromorphic synaptic processors that behave like a MAC when a spike fires while remaining idle otherwise. This is how the Akida achieves orders-of-magnitude lower power consumption than conventional NPUs, despite performing similar mathematical operations in principle.

Another way to think about this is that a conventional MAC array crunches numbers continuously, with every neuron participating in every cycle. By comparison, an Akida node's neuromorphic synapses sit dormant, only springing into action when a spike arrives, performing their math locally, and then quieting down again. If I were waxing poetical, I might be tempted to say something pithy at this juncture, like "more firefly than furnace," but I'm not, so I won't.

But wait, there's more, because the Akida processor IP uses sparsity to focus on the most important data, inherently avoiding unnecessary computation and saving energy at every step. Meanwhile, BrainChip's neural network model, known as Temporal Event-based Neural Networks (TENNs), builds on a state-space model architecture to track events over time, rather than sampling at fixed intervals, thereby skipping periods of no change to conserve energy and memory. Together, these little scamps deliver unmatched efficiency for real-time AI.


Akida neural processor + TENNs models = awesome (Source: BrainChip)

The name of the game here is "sparse." We're talking sparse data (streaming inputs are converted to events at the hardware level, reducing the volume of data by up to 10x before processing begins), sparse weights (unnecessary weights are pruned and compressed, reducing model size and compute demand by up to 10x), and sparse activations (only essential activation functions pass data to the next layers, cutting downstream computation by up to 10x).

Since traditional CNNs activate every neural layer at every timestep, they can consume watts of power to process full data streams, even when nothing is changing. By comparison, the fact that the Akida processes only meaningful information enables real-time streaming AI that runs continuously on milliwatts of power, making it possible to deploy always-on intelligence in wearables, sensors, and other battery-powered devices.

Of course, nothing is easy ("If it were easy, everyone would be doing it," as they say). A significant challenge for people who wish to utilize neuromorphic computing is that spiking networks differ from conventional neural networks. This is why the folks at BrainChip provide a CNN-to-SNN converter. This means developers can start out with a conventional CNN (which they may already have) and then convert it to an SNN to run on an Akida.

As usual, there are more layers to this onion than you might first suppose. Consider, for example, BrainChip's collaboration with Prophesee. This is one of those rare cases where two technologies fit together as if they'd been waiting for each other all along.

Prophesee's event-based cameras don't capture conventional frames at fixed intervals; instead, each pixel generates a spike whenever it detects a change in light intensity. In other words, the output is already neuromorphic in nature: a continuous stream of sparse, asynchronous events ("spikes") rather than dense video frames.

That makes it the perfect companion for BrainChip's Akida processor, which is itself a spiking neural network. While traditional cameras must be converted into spiking form to feed an SNN, and while Prophesee must normally "de-spike" its output to feed a conventional convolutional network, Akida and Prophesee can connect directly, spike to spike, neuron to neuron, with no format gymnastics or power-hungry frame buffering in between.

This native spike-based synergy pays off handsomely in power and latency. As BrainChip's engineers put it, "We're working in kilobits per second instead of megabits per second." Because the Prophesee sensor transmits information only when something changes, and Akida computes only when spikes arrive, the overall system consumes mere milliwatts, compared to the tens of milliwatts required by conventional vision systems.

That difference may not matter in a smartphone, but itโ€™s mission-critical for AR/VR glasses, where the battery is a tenth or even a twentieth the size of a phoneโ€™s. By eliminating the need to convert between frames and spikesโ€”and avoiding the energy cost of frame storage, buffering, and transmissionโ€”BrainChip and Prophesee have effectively built a neuromorphic end-to-end vision pipeline that mirrors how biological eyes and brains actually work: always on, always responsive, yet sipping power rather than guzzling it.

As another example, I recently heard that BrainChip and HaiLa Technologies have partnered to show what happens when brain-inspired computing meets ultra-efficient wireless connectivity. Theyโ€™ve created a demonstration that pairs BrainChipโ€™s Akida neuromorphic processor with HaiLaโ€™s BSC2000 backscatter RFIC, a Wiโ€“Fiโ€“compatible chip that communicates by reflecting existing radio signals rather than generating its own (I plan to devote a future column to this technology). The result is a self-contained edge-AI platform that can perform continuous sensing, anomaly detection, and condition monitoring while sipping mere microwatts of powerโ€”small enough to run a connected sensor for its entire lifetime on a single coin-cell battery.

This collaboration highlights a new class of intelligent, battery-free edge devices, where sensing, processing, and communication are all optimized for power efficiency. Akidaโ€™s event-driven architecture processes only the spikes that matter, while HaiLaโ€™s passive backscatter link eliminates most of the radioโ€™s energy cost. Together they enable always-on, locally intelligent IoT nodes ideal for medical, environmental, and infrastructure monitoringโ€”places where replacing batteries is expensive, impractical, or downright impossible. In short, BrainChip and HaiLa are sketching the blueprint for the next wave of ultra-low-power edge AI systems that think before they speak, and that do both with astonishing efficiency.

Sad to relate, none of the above was what I wanted to talk to you about (stop groaningโ€”it was worth reading). What I originally set out to tell you is the newly introduced Akida Cloud (imagine a roll of drums and a fanfare of trombones).
The existing Akida 1, which has been extremely well received by the market, supports 4-, 2-, and 1-bit weights and activations. The next-generation Akida 2, which is expected to be available to developers in the very near future, will support 8-, 4-, and 1-bit weights and activations. Also, the Akida 2 will support spatio-temporal and temporal event-based neural networks.

For years, BrainChipโ€™s biggest hurdle in courting developers wasnโ€™t its neuromorphic siliconโ€”it was logistics. Demonstrating the Akida architecture meant physically shipping bulky FPGA-based boxes to customers, powering them up on-site, and juggling loan periods. With the launch of the Akida Cloud, that bottleneck disappears.

Engineers can now log in, spin up a virtual instance of the existing Akida 1 running on an actual Akida 1, or the forthcoming Akida 2 running on an FPGA, and run their own neural workloads directly in the browser. Models can be trained, loaded, executed, and benchmarked in real timeโ€”no shipping crates, NDAs, or lab setups required.

Akida Cloud represents more than a convenience upgrade; itโ€™s a strategic move to democratize access to neuromorphic technology. By making their latest architectures available online, the chaps and chapesses at BrainChip are lowering the barrier to entry for researchers, startups, and OEMs who want to experiment with event-based AI but lack specialized hardware.

Users can compare the behavior of Akida 1 and Akida 2 side by side, prototype models, and gather performance data before committing to silicon. For BrainChip, the cloud platform also serves as a rapid feedback loopโ€”turning every connected engineer into an early tester and accelerating SNN adoption across the edge AI ecosystem.

And there you have itโ€”brains in the cloud, spikes on the wire, and AI that thinks before it blinks. If this is what the neuromorphic future looks like, I say โ€œbring it onโ€ (just as soon as my poor old hippocampus cools down). But itโ€™s not all about me (it should be, but itโ€™s not). So, what do you think about all of this?
 
  • Fire
  • Like
  • Love
Reactions: 7 users

charles2

Regular
Another highly entertaining eejournal.com article featuring BrainChip by Max Maxfield:



October 9, 2025

Bodacious Buzz on the Brain-Boggling Neuromorphic Brain Chip Battlefront

by Max Maxfield

Hold onto your hippocampus because the latest neuromorphic marvels are firing on all synapses. To ensure we're all tap-dancing to the same skirl of the bagpipes, let's remind ourselves that the term "neuromorphic" is a portmanteau that combines the Greek words "neuro" (relating to nerves or the brain) and "morphic" (relating to form or structure).

Thus, "neuromorphic" literally means "in the form of the brain." In turn, "neuromorphic computing" refers to electronic systems inspired by the human brain's functioning. Instead of processing data step-by-step, like traditional computers, neuromorphic chips attempt to mimic how neurons and synapses communicate: utilizing spikes of electrical activity, massive parallelism, and event-driven operation.

The focus of this column is on hardware accelerator intellectual property (IP) functions, specifically neural processing units (NPUs), that designers can incorporate into their System-on-Chip (SoC) devices. Some SoC developers use third-party NPU IPs, while others develop their own IPs in-house.

I was just chatting with Steve Brightfield, who is CMO at BrainChip. As you may recall, BrainChip's claim to fame is its Akida AI acceleration processor IP, which is inspired by the human brain's cognitive capabilities and energy efficiency. Akida delivers low-power, real-time AI processing at the edge, utilizing neuromorphic principles for applications such as vision, audio, and sensor fusion.

The vast majority of NPU IPs accelerate artificial neural networks (ANNs) using large arrays of multiply-accumulate (MAC) units. These dense matrix-vector operations are energy-hungry because every neuron participates in every computation, and the hardware must move a lot of data between memory and the MAC array.

By comparison, Akida employs a neuromorphic architecture based on spiking neural networks (SNNs). Akida's neurons don't constantly compute weighted sums; instead, they exchange spikes (brief digital pulses) only when their internal "membrane potential" crosses a threshold. This makes Akida event-driven; that is, computation only occurs when new information is available to process.

In contrast to the general-purpose MAC arrays found in conventional NPUs, Akida utilizes synaptic kernels that perform weighted event accumulation upon the arrival of spikes. Each synapse maintains a small local weight and adds its contribution to a neuron's membrane potential when it receives a spike. This achieves the same effect as multiply-accumulate, but in a sparse, asynchronous, and energy-efficient manner that's more akin to a biological brain.
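To make that mechanism concrete, here is a minimal, self-contained Python sketch of event-driven accumulation as described above. It is an illustration written for this post, not BrainChip's implementation, and the weights, threshold, and spike stream are toy values.

```python
import numpy as np

# Toy illustration of event-driven synaptic accumulation (not BrainChip's code):
# each neuron integrates weighted contributions only when an input spike arrives,
# and emits its own spike once its membrane potential crosses a threshold.

rng = np.random.default_rng(0)

NUM_INPUTS, NUM_NEURONS = 64, 8
weights = rng.normal(0.0, 0.5, size=(NUM_INPUTS, NUM_NEURONS))  # local synaptic weights
membrane = np.zeros(NUM_NEURONS)                                 # membrane potentials
THRESHOLD = 2.0

# A sparse input stream: (timestep, input_index) pairs. Most inputs stay silent.
input_spikes = [(0, 3), (0, 17), (2, 3), (5, 42), (5, 17), (9, 8)]

work_done = 0
for t, idx in input_spikes:
    # Only the synapses attached to the spiking input do any work this timestep.
    membrane += weights[idx]          # weighted accumulation triggered by one event
    work_done += NUM_NEURONS          # NUM_NEURONS adds, not NUM_INPUTS * NUM_NEURONS
    fired = membrane >= THRESHOLD
    if fired.any():
        print(f"t={t}: neurons {np.flatnonzero(fired).tolist()} spike")
        membrane[fired] = 0.0         # reset after firing

print(f"accumulate operations performed: {work_done}")
print(f"dense equivalent per timestep: {NUM_INPUTS * NUM_NEURONS}")
```

The point is simply that the amount of work scales with the number of input events rather than with the size of the layer.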


Akida self-contained AI acceleration processor IP (Source: BrainChip)

According to BrainChip's website, the Akida self-contained AI neural processor IP features the following (a quick back-of-the-envelope sketch of how these figures scale follows the list):
  • Scalable fabric of 1 to 128 nodes
  • Each neural node supports 128 MACs
  • Configurable 50K to 130K embedded local SRAM per node
  • DMA for all memory and model operations
  • Multi-layer execution without host CPU
  • Integrate with any Microcontroller or Application Processor
  • Efficient algorithmic mesh
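Just to get a feel for how those published parameters scale, here is a small, hypothetical Python sketch; the class name and derived totals are mine for illustration, while the per-node figures come straight from the list above.

```python
from dataclasses import dataclass

@dataclass
class AkidaFabricConfig:
    """Hypothetical helper for reasoning about the published Akida IP parameters."""
    nodes: int                 # scalable fabric of 1 to 128 nodes
    macs_per_node: int = 128   # each neural node supports 128 MACs
    sram_per_node_k: int = 100 # configurable 50K to 130K local SRAM per node

    def totals(self) -> dict:
        assert 1 <= self.nodes <= 128, "fabric is specified as 1 to 128 nodes"
        return {
            "total_macs": self.nodes * self.macs_per_node,
            "total_sram_k": self.nodes * self.sram_per_node_k,
        }

# Example: a mid-sized 32-node configuration.
print(AkidaFabricConfig(nodes=32, sram_per_node_k=130).totals())
# {'total_macs': 4096, 'total_sram_k': 4160}
```

The only takeaway is that a 128-MAC node replicated across the fabric gives a simple, linear way to size compute and memory.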

Hang on! I just told you that, "In contrast to the general-purpose MAC arrays found in conventional NPUs, Akida utilizes synaptic kernels..." So, it's a tad embarrassing to see the folks at BrainChip referencing MACs on their website. The thing is that, in the case of the Akida, the term "MAC" is used somewhat loosely, more as an engineering shorthand than as a literal, synchronous multiply-accumulate unit like those found in conventional GPUs and NPUs.

While each Akida neural node contains hardware that can perform multiply-accumulate operations, these operations are event-driven and sparsely activated. When an input spike arrives, only the relevant synapses and neurons in that node perform a small weighted accumulation; there's no continuous clocked matrix multiplication going on in the background.

So, while BrainChip's documentation calls them "MACs," they're actually implemented as neuromorphic synaptic processors that behave like a MAC when a spike fires while remaining idle otherwise. This is how the Akida achieves orders-of-magnitude lower power consumption than conventional NPUs, despite performing similar mathematical operations in principle.

Another way to think about this is that a conventional MAC array crunches numbers continuously, with every neuron participating in every cycle. By comparison, an Akida node's neuromorphic synapses sit dormant, only springing into action when a spike arrives, performing their math locally, and then quieting down again. If I were waxing poetical, I might be tempted to say something pithy at this juncture, like "more firefly than furnace," but I'm not, so I won't.

But wait, there's more, because the Akida processor IP uses sparsity to focus on the most important data, inherently avoiding unnecessary computation and saving energy at every step. Meanwhile, BrainChip's neural network model, known as Temporal Event-based Neural Networks (TENNs), builds on a state-space model architecture to track events over time, rather than sampling at fixed intervals, thereby skipping periods of no change to conserve energy and memory. Together, these little scamps deliver unmatched efficiency for real-time AI.


Akida neural processor + TENNs models = awesome (Source: BrainChip)

The name of the game here is "sparse." We're talking sparse data (streaming inputs are converted to events at the hardware level, reducing the volume of data by up to 10x before processing begins), sparse weights (unnecessary weights are pruned and compressed, reducing model size and compute demand by up to 10x), and sparse activations (only essential activation functions pass data to the next layers, cutting downstream computation by up to 10x).
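To get a feel for how those three "up to 10x" factors would compound in the best case, here is a back-of-the-envelope sketch; the baseline operation count is an arbitrary illustrative figure, and in practice the three reductions overlap and will not multiply this cleanly.

```python
# Back-of-the-envelope: how the three "up to 10x" sparsity reductions quoted
# above would compound for a hypothetical workload. Purely illustrative numbers.
baseline_ops = 1_000_000_000   # dense operations for some imaginary workload

data_reduction = 10            # event-encoded inputs ("up to 10x")
weight_reduction = 10          # pruned/compressed weights ("up to 10x")
activation_reduction = 10      # sparse activations ("up to 10x")

effective_ops = baseline_ops / (data_reduction * weight_reduction * activation_reduction)
print(f"effective operations: {effective_ops:,.0f}")   # 1,000,000 in the best case
```

The "up to" matters: real layers rarely hit all three ceilings at once, but even partial sparsity compounds quickly.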

Since traditional CNNs activate every neural layer at every timestep, they can consume watts of power to process full data streams, even when nothing is changing. By comparison, because the Akida processes only meaningful information, it enables real-time streaming AI that runs continuously on milliwatts of power, making it possible to deploy always-on intelligence in wearables, sensors, and other battery-powered devices.

Of course, nothing is easy ("If it were easy, everyone would be doing it," as they say). A significant challenge for people who wish to utilize neuromorphic computing is that spiking networks differ from conventional neural networks. This is why the folks at BrainChip provide a CNN-to-SNN converter. This means developers can start out with a conventional CNN (which they may already have) and then convert it to an SNN to run on an Akida.
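For the curious, the conversion flow as I understand it from BrainChip's public MetaTF documentation looks roughly like the sketch below. Module names, function signatures, and the model file are assumptions from memory and may not match the version you install, so treat this as an outline rather than copy-paste code.

```python
# Rough outline of the Keras-CNN-to-Akida flow (based on my reading of BrainChip's
# public MetaTF docs; names and signatures may differ in your installed version).
import tensorflow as tf
from quantizeml.models import quantize, QuantizationParams   # quantization toolkit
from cnn2snn import convert                                   # CNN-to-SNN converter

# 1. Start from a conventional, already-trained Keras CNN (hypothetical file).
cnn = tf.keras.models.load_model("my_trained_cnn.h5")

# 2. Quantize weights and activations to the low bit widths Akida expects.
qparams = QuantizationParams(weight_bits=4, activation_bits=4)
quantized_cnn = quantize(cnn, qparams=qparams)

# 3. Convert the quantized CNN into a spiking Akida model.
akida_model = convert(quantized_cnn)
akida_model.summary()
```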

As usual, there are more layers to this onion than you might first suppose. Consider, for example, BrainChip's collaboration with Prophesee. This is one of those rare cases where two technologies fit together as if they'd been waiting for each other all along.

Prophesee's event-based cameras don't capture conventional frames at fixed intervals; instead, each pixel generates a spike whenever it detects a change in light intensity. In other words, the output is already neuromorphic in nature: a continuous stream of sparse, asynchronous events ("spikes") rather than dense video frames.
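For readers who have never handled event-camera output, it is essentially a stream of (timestamp, x, y, polarity) tuples. The sketch below fabricates a handful of such events (invented values) just to show how little data a mostly static scene produces compared with a conventional frame.

```python
import numpy as np

# A fabricated event stream: each row is (timestamp_us, x, y, polarity).
# A pixel only contributes a row when its log-intensity changes; static
# regions of the scene generate nothing at all.
events = np.array([
    (1000, 120,  45, 1),
    (1004, 121,  45, 1),
    (1012, 121,  46, 0),
    (2050, 300, 200, 1),
], dtype=[("t", "u8"), ("x", "u2"), ("y", "u2"), ("p", "u1")])

print(f"{len(events)} events, {events.nbytes} bytes")
# Compare with one 640x480 8-bit frame, which is 307,200 bytes whether or
# not anything in the scene changed.
print(640 * 480, "bytes per conventional frame")
```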

That makes it the perfect companion for BrainChip's Akida processor, which is itself a spiking neural network processor. While the output of traditional cameras must be converted into spiking form to feed an SNN, and while Prophesee must normally "de-spike" its output to feed a conventional convolutional network, Akida and Prophesee can connect directly, spike to spike and neuron to neuron, with no format gymnastics or power-hungry frame buffering in between.

This native spike-based synergy pays off handsomely in power and latency. As BrainChip's engineers put it, "We're working in kilobits per second instead of megabits per second." Because the Prophesee sensor transmits information only when something changes, and Akida computes only when spikes arrive, the overall system consumes mere milliwatts, compared to the tens of milliwatts required by conventional vision systems.
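A quick back-of-the-envelope calculation shows why the "kilobits versus megabits" framing is plausible. The resolution, frame rate, event rate, and bytes-per-event below are my own illustrative assumptions, not figures from BrainChip or Prophesee.

```python
# Illustrative bandwidth comparison (assumed numbers, not vendor figures).

# Conventional camera: 640x480, 8-bit mono, 30 frames per second.
frame_bits = 640 * 480 * 8
frame_rate = 30
frame_bandwidth_mbps = frame_bits * frame_rate / 1e6
print(f"frame-based: ~{frame_bandwidth_mbps:.1f} Mbit/s")      # ~73.7 Mbit/s

# Event camera watching a mostly static scene: assume 5,000 events/s,
# with each event encoded in roughly 8 bytes.
events_per_s = 5_000
bits_per_event = 8 * 8
event_bandwidth_kbps = events_per_s * bits_per_event / 1e3
print(f"event-based: ~{event_bandwidth_kbps:.0f} kbit/s")      # ~320 kbit/s
```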

That difference may not matter in a smartphone, but it's mission-critical for AR/VR glasses, where the battery is a tenth or even a twentieth the size of a phone's. By eliminating the need to convert between frames and spikes, and by avoiding the energy cost of frame storage, buffering, and transmission, BrainChip and Prophesee have effectively built a neuromorphic end-to-end vision pipeline that mirrors how biological eyes and brains actually work: always on, always responsive, yet sipping power rather than guzzling it.

As another example, I recently heard that BrainChip and HaiLa Technologies have partnered to show what happens when brain-inspired computing meets ultra-efficient wireless connectivity. They've created a demonstration that pairs BrainChip's Akida neuromorphic processor with HaiLa's BSC2000 backscatter RFIC, a Wi-Fi-compatible chip that communicates by reflecting existing radio signals rather than generating its own (I plan to devote a future column to this technology). The result is a self-contained edge-AI platform that can perform continuous sensing, anomaly detection, and condition monitoring while sipping mere microwatts of power: little enough for a connected sensor to run for its entire lifetime on a single coin-cell battery.
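A rough energy budget shows why a coin cell can plausibly carry a microwatt-class device for years. The cell capacity and average power figures below are generic assumptions for illustration, not measurements from the BrainChip/HaiLa demonstration.

```python
# Rough coin-cell lifetime estimate (assumed values, ignoring self-discharge,
# temperature effects, and peak-current limits).
capacity_mah = 220          # typical CR2032-class coin cell
voltage = 3.0               # nominal cell voltage
energy_j = capacity_mah / 1000 * 3600 * voltage   # ~2,376 joules

for avg_power_uw in (10, 50, 100):
    seconds = energy_j / (avg_power_uw * 1e-6)
    years = seconds / (3600 * 24 * 365)
    print(f"{avg_power_uw:>4} uW average draw -> ~{years:.1f} years")
# roughly 7.5 years at 10 uW, 1.5 years at 50 uW, 0.8 years at 100 uW
```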

This collaboration highlights a new class of intelligent, battery-free edge devices, where sensing, processing, and communication are all optimized for power efficiency. Akida's event-driven architecture processes only the spikes that matter, while HaiLa's passive backscatter link eliminates most of the radio's energy cost. Together they enable always-on, locally intelligent IoT nodes ideal for medical, environmental, and infrastructure monitoring: places where replacing batteries is expensive, impractical, or downright impossible. In short, BrainChip and HaiLa are sketching the blueprint for the next wave of ultra-low-power edge AI systems that think before they speak, and that do both with astonishing efficiency.

Sad to relate, none of the above was what I wanted to talk to you about (stop groaning; it was worth reading). What I originally set out to tell you about is the newly introduced Akida Cloud (imagine a roll of drums and a fanfare of trombones).

The existing Akida 1, which has been extremely well received by the market, supports 4-, 2-, and 1-bit weights and activations. The next-generation Akida 2, which is expected to be available to developers in the very near future, will support 8-, 4-, and 1-bit weights and activations. Also, the Akida 2 will support spatio-temporal and temporal event-based neural networks.
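To put those bit widths in perspective, here is a trivial bit of arithmetic for a hypothetical two-million-parameter model; the parameter count is an assumption chosen purely for illustration.

```python
# Weight storage for a hypothetical 2-million-parameter model at the bit
# widths mentioned above. The parameter count is illustrative only.
PARAMS = 2_000_000

for bits in (8, 4, 2, 1):
    kib = PARAMS * bits / 8 / 1024
    print(f"{bits}-bit weights: ~{kib:,.0f} KiB")
# 8-bit ~1,953 KiB; 4-bit ~977 KiB; 2-bit ~488 KiB; 1-bit ~244 KiB
```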

For years, BrainChip's biggest hurdle in courting developers wasn't its neuromorphic silicon; it was logistics. Demonstrating the Akida architecture meant physically shipping bulky FPGA-based boxes to customers, powering them up on-site, and juggling loan periods. With the launch of the Akida Cloud, that bottleneck disappears.

Engineers can now log in, spin up an instance of the existing Akida 1 (running on actual Akida 1 silicon) or of the forthcoming Akida 2 (running on an FPGA), and run their own neural workloads directly in the browser. Models can be trained, loaded, executed, and benchmarked in real time, with no shipping crates, NDAs, or lab setups required.

Akida Cloud represents more than a convenience upgrade; it's a strategic move to democratize access to neuromorphic technology. By making their latest architectures available online, the chaps and chapesses at BrainChip are lowering the barrier to entry for researchers, startups, and OEMs who want to experiment with event-based AI but lack specialized hardware.

Users can compare the behavior of Akida 1 and Akida 2 side by side, prototype models, and gather performance data before committing to silicon. For BrainChip, the cloud platform also serves as a rapid feedback loop, turning every connected engineer into an early tester and accelerating SNN adoption across the edge AI ecosystem.

And there you have it: brains in the cloud, spikes on the wire, and AI that thinks before it blinks. If this is what the neuromorphic future looks like, I say "bring it on" (just as soon as my poor old hippocampus cools down). But it's not all about me (it should be, but it's not). So, what do you think about all of this?

There you have it: Accessible and in Plain English. Even the boss will understand!

Awareness of BrainChip should accelerate by magnitudes. And fast.

Damn good KoolAid to start the day.
 
  • Like
  • Haha
Reactions: 5 users

Frangipani

Top 20
Last year, a student team using Akida won the Munich Neuromorphic Hackathon, organised by neuroTUM (a student club based at TU München / Technical University of Munich for students interested in the intersection of neuroscience and engineering) and our partner fortiss (who to this day have never officially been acknowledged as a partner from our side, though).

Will Akida again help one of the teams to win this year's challenge?!
The 2025 Munich Neuromorphic Hackathon will take place from 7-12 November.

"The teams will face interesting industry challenges posed by German Aerospace Center (DLR), Simi Reality Motion Systems and fortiss, working with Brain-inspired computing methods towards the most efficient neuromorphic processor."

Simi Reality Motion Systems (part of the ZF Group) has been collaborating with fortiss on several projects, such as SpikingBody ("Neuromorphic AI meets tennis. Real-time action recognition implemented on Loihi 2") and EMMANÜELA (AR/VR).


View attachment 91444



View attachment 91452 View attachment 91453 View attachment 91454

View attachment 91445 View attachment 91446



View attachment 91447

View attachment 91448



View attachment 91450 View attachment 91451

Our partner fortiss (although the partnership has so far only been acknowledged by them, not by BrainChip, cf. https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-454015) is now also promoting the Munich Neuromorphic Hackathon 2025 (7-8, 10-12 November) on LinkedIn:





More on Simi Reality Motion Systems in my tagged post.

Sponsors of the Munich Hackathon 2025 are the Fraunhofer Institut für Techno- und Wirtschaftsmathematik (ITWM) in Kaiserslautern - one of two Fraunhofer Institutes collaborating on the STANCE (Strategic Alliance for Neuromorphic Computing and Engineering) project - as well as gAIn - Next Generation AI Computing, a joint initiative between Ludwig-Maximilians-Universität Munich (LMU), Technical University of Dresden and Technical University of Munich (TUM), supported by the state governments of Bavaria and Saxony.









Which reminded me of this September 2024 post, when I spotted a Fraunhofer ITWM researcher liking a BrainChip post:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-435802











 