Pom down under
Top 20
Well you try and find a gif to suit
That's a serve Pom not a forehand smash volley
I do know SUMMFING

I'm very happy to have joined Paddington Robotics as part of an ambitious team! We're focused on applications to solve real-world problems from day 1 and we're moving fast. In between we have breaks playing foosball so really what's not to like? If you want to build with us and help us grow a... | Gregor Lenz (www.linkedin.com)
Gregor Lenz, until recently CTO of our partner Neurobus (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-456183) and co-author of "Low-power Ship Detection in Satellite Images Using Neuromorphic Hardware" alongside Douglas McLelland (https://arxiv.org/pdf/2406.11319), has joined the London-based startup Paddington Robotics (https://paddington-robotics.com/ - the website doesn't yet have any information other than "Paddington Robotics - Embodied AI in Action"):
Paddington Robotics | LinkedIn
Paddington Robotics | 494 followers on LinkedIn. Solving AI + general purpose robotics in the real world | Bridging the gap between the digital world and the physical world, between product and research - solving AI + Robotics, whilst solving real world problems. (www.linkedin.com)
Some further info I was able to find about the London-based startup founded late last year, whose co-founder and CEO is Zehan Wang:
https://www.northdata.de/Paddington%20Robotics%20Ltd¡,%20London/Companies%20House%2016015385
Camera Supplier | Sensor | Model Name | Year | Resolution | Dynamic Range (dB) | Max Bandwidth (MEvents/s)
---|---|---|---|---|---|---
iniVation | Gen2 DVS | DAVIS346 | 2017 | 346×260 | ~120 | 12
iniVation | Gen3 DVS | DVXPlorer | 2020 | 640×480 | 90-110 | 165
Prophesee | Sony IMX636 | EVK4 | 2020 | 1280×720 | 120 | 1066
Prophesee | GenX320 | EVK3 | 2023 | 320×320 | 140 | 
Samsung | Gen4 DVS | DVS-Gen4 | 2020 | 1280×960 | | 1200
Sensor | Event output type | Timing & synchronization | Polarity info | Typical max rate
---|---|---|---|---
Sony 2.97 µm | Binary event frames (two separate ON/OFF maps) | Synchronous, ~580 µs "event frame" period | 2 bits per pixel (positive & negative) | ~1.4 GEvents/s
OmniVision 3-wafer | Per-event address-event packets (x, y, t, polarity) | Asynchronous, microsecond-level timestamps | Single-bit polarity per event | Up to 4.6 GEvents/s
Sony 1.22 µm, 35.6 MP | Binary event frames with row-skipping & compression | Variable frame sync, up to 10 kfps per RGB frame | 2 bits per pixel (positive & negative) | Up to 4.56 GEvents/s
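For readers less familiar with the two output styles compared above, here is a minimal sketch in Python (my own toy code, not any vendor's SDK; the ~580 µs frame period is taken from the table, everything else is assumed) contrasting per-event address-event packets with accumulating the same events into binary ON/OFF event frames:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    """One address-event packet: pixel coordinates, timestamp (µs), polarity."""
    x: int
    y: int
    t: int          # microseconds
    polarity: int   # +1 = brightness increase (ON), -1 = decrease (OFF)

def events_to_frames(events, width, height, frame_period_us=580):
    """Accumulate asynchronous events into synchronous binary ON/OFF maps,
    roughly how a frame-based event readout (the ~580 µs "event frame" period
    in the table) would present the same data."""
    if not events:
        return []
    t0 = events[0].t
    frames = {}
    for ev in events:
        idx = (ev.t - t0) // frame_period_us
        if idx not in frames:
            # two 1-bit planes per frame: an ON map and an OFF map
            frames[idx] = np.zeros((2, height, width), dtype=np.uint8)
        plane = 0 if ev.polarity > 0 else 1
        frames[idx][plane, ev.y, ev.x] = 1
    return [frames[i] for i in sorted(frames)]

# Example: three sparse events, two of which land in the same 580 µs window
evs = [Event(10, 5, 0, +1), Event(11, 5, 100, -1), Event(10, 6, 900, +1)]
for i, f in enumerate(events_to_frames(evs, width=640, height=480)):
    print(f"frame {i}: {int(f[0].sum())} ON events, {int(f[1].sum())} OFF events")
```

The point is simply that the asynchronous packet stream carries per-event timing at microsecond resolution, whereas the binary-frame representation trades that away for a fixed readout cadence.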
It's been a while @robsmark but great to see some of the "old timers" back on here and commenting. It's been a rough few years. Maybe still a touch early for the golden goose to lay, but us loyal shareholders will be holding on for dear life until that moment hopefully arises!

Nice to see a bit of life back in the old girl... The question on my mind is whether it's been artificially pumped to lure retail in only to have the rug pulled, or was this caused by an entity that knows something we don't? Volume has been massive, but the ASX is funded by retail sheep, who jump on companies whenever there's upwards movement. Let's hope it's the latter and something is announced by the company soon. I've bumped heads with several on here, but I don't think there's a stock on the ASX with a more loyal shareholder base than this one, and it'd be fantastic to be rewarded for many years of patience.
There you have it: Accessible and in Plain English. Even the boss will understand!

Another highly entertaining eejournal.com article featuring BrainChip by Max Maxfield:
Bodacious Buzz on the Brain-Boggling Neuromorphic Brain Chip Battlefront
Hold onto your hippocampus because the latest neuromorphic marvels are firing on all synapses. To ensure we're all tap-dancing to the same skirl of the bagpipes, let's remind ourselves that the ter... (www.eejournal.com)
October 9, 2025
Bodacious Buzz on the Brain-Boggling Neuromorphic Brain Chip Battlefront
by Max Maxfield
Hold onto your hippocampus because the latest neuromorphic marvels are firing on all synapses. To ensure we're all tap-dancing to the same skirl of the bagpipes, let's remind ourselves that the term "neuromorphic" is a portmanteau that combines the Greek words "neuro" (relating to nerves or the brain) and "morphic" (relating to form or structure).
Thus, "neuromorphic" literally means "in the form of the brain." In turn, "neuromorphic computing" refers to electronic systems inspired by the human brain's functioning. Instead of processing data step-by-step, like traditional computers, neuromorphic chips attempt to mimic how neurons and synapses communicate, utilizing spikes of electrical activity, massive parallelism, and event-driven operation.
The focus of this column is on hardware accelerator intellectual property (IP) functions, specifically neural processing units (NPUs), that designers can incorporate into their System-on-Chip (SoC) devices. Some SoC developers use third-party NPU IPs, while others develop their own IPs in-house.
I was just chatting with Steve Brightfield, who is CMO at BrainChip. As you may recall, BrainChip's claim to fame is its Akida AI acceleration processor IP, which is inspired by the human brain's cognitive capabilities and energy efficiency. Akida delivers low-power, real-time AI processing at the edge, utilizing neuromorphic principles for applications such as vision, audio, and sensor fusion.
The vast majority of NPU IPs accelerate artificial neural networks (ANNs) using large arrays of multiply-accumulate (MAC) units. These dense matrix-vector operations are energy-hungry because every neuron participates in every computation, and the hardware must move a lot of data between memory and the MAC array.
By comparison, Akida employs a neuromorphic architecture based on spiking neural networks (SNNs). Akida's neurons don't constantly compute weighted sums; instead, they exchange spikes (brief digital pulses) only when their internal "membrane potential" crosses a threshold. This makes Akida event-driven; that is, computation only occurs when new information is available to process.
In contrast to the general-purpose MAC arrays found in conventional NPUs, Akida utilizes synaptic kernels that perform weighted event accumulation upon the arrival of spikes. Each synapse maintains a small local weight and adds its contribution to a neuron's membrane potential when it receives a spike. This achieves the same effect as multiply-accumulate, but in a sparse, asynchronous, and energy-efficient manner that's more akin to a biological brain.
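To make the contrast between dense MAC arrays and event-driven synaptic accumulation concrete, here is a minimal sketch (purely illustrative; the shapes, threshold, and reset behaviour are my own assumptions, not BrainChip's implementation) of an integrate-and-fire layer that only touches the synapses attached to inputs that actually spiked:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=(64, 32))  # 64 inputs, 32 neurons, one weight per synapse
threshold = 1.0

def dense_layer(x):
    """Conventional NPU style: every neuron multiply-accumulates every input,
    every step, regardless of how many entries of x are zero (64*32 MACs)."""
    return x @ weights

def event_driven_step(spike_indices, membrane):
    """Event-driven style: only synapses attached to inputs that spiked add their
    weight to the neurons' membrane potentials; neurons crossing the threshold
    emit an output spike and reset."""
    for i in spike_indices:          # work scales with the number of input spikes
        membrane += weights[i]       # weighted accumulation; no multiply needed for binary spikes
    fired = np.flatnonzero(membrane >= threshold)
    membrane[fired] = 0.0            # reset the neurons that fired
    return fired, membrane

# Sparse input: only 3 of 64 channels spike this timestep
active = [3, 17, 42]
x = np.zeros(64)
x[active] = 1.0

# Before thresholding, the event-driven accumulation equals the dense MAC result
acc = np.zeros(32)
for i in active:
    acc += weights[i]
print("matches dense result:", np.allclose(acc, dense_layer(x)))

fired, _ = event_driven_step(active, np.zeros(32))
print("neurons that fired:", fired)
print(f"dense MACs: {64 * 32}, event-driven accumulations: {len(active) * 32}")
```

With only three of 64 inputs active, the event-driven path does roughly 5% of the work of the dense path, which is the intuition behind the power figures quoted later in the article.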
Akida self-contained AI acceleration processor IP (Source: BrainChip)
According to BrainChip's website, the Akida self-contained AI neural processor IP features the following:
- Scalable fabric of 1 to 128 nodes
- Each neural node supports 128 MACs
- Configurable 50K to 130K embedded local SRAM per node
- DMA for all memory and model operations
- Multi-layer execution without host CPU
- Integrate with any Microcontroller or Application Processor
- Efficient algorithmic mesh
Hang on! I just told you that, "In contrast to the general-purpose MAC arrays found in conventional NPUs, Akida utilizes synaptic kernels..." So, it's a tad embarrassing to see the folks at BrainChip referencing MACs on their website. The thing is that, in the case of the Akida, the term "MAC" is used somewhat loosely, more as an engineering shorthand than as a literal, synchronous multiply-accumulate unit like those found in conventional GPUs and NPUs.
While each Akida neural node contains hardware that can perform multiply-accumulate operations, these operations are event-driven and sparsely activated. When an input spike arrives, only the relevant synapses and neurons in that node perform a small weighted accumulation; there's no continuous clocked matrix multiplication going on in the background.
So, while BrainChip's documentation calls them "MACs," they're actually implemented as neuromorphic synaptic processors that behave like a MAC when a spike fires while remaining idle otherwise. This is how the Akida achieves orders-of-magnitude lower power consumption than conventional NPUs, despite performing similar mathematical operations in principle.
Another way to think about this is that a conventional MAC array crunches numbers continuously, with every neuron participating in every cycle. By comparison, an Akida node's neuromorphic synapses sit dormant, only springing into action when a spike arrives, performing their math locally, and then quieting down again. If I were waxing poetical, I might be tempted to say something pithy at this juncture, like "more firefly than furnace," but I'm not, so I won't.
But wait, there's more, because the Akida processor IP uses sparsity to focus on the most important data, inherently avoiding unnecessary computation and saving energy at every step. Meanwhile, BrainChip's neural network model, known as Temporal Event-based Neural Networks (TENNs), builds on a state-space model architecture to track events over time, rather than sampling at fixed intervals, thereby skipping periods of no change to conserve energy and memory. Together, these little scamps deliver unmatched efficiency for real-time AI.
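The TENNs description above hinges on a state-space model that only advances when something happens. As a toy illustration of that "skip the quiet periods" idea (my own sketch, not BrainChip's TENNs formulation), a linear state update can be jumped across an idle gap in closed form rather than iterated timestep by timestep:

```python
# Toy scalar state-space model x_{t+1} = a*x_t + b*u_t, advanced only when
# an input event arrives (u_t = 0 at all other timesteps).
a, b = 0.95, 0.5

def update_on_events(events):
    """events: list of (timestep, value) pairs in time order. Between events the
    input is zero, so the state simply decays by a**gap; we jump across each idle
    gap with one multiplication instead of looping over every quiet timestep."""
    x, last_t, trace = 0.0, 0, []
    for t, u in events:
        x = (a ** (t - last_t)) * x + b * u   # closed-form skip over the gap, then apply the event
        trace.append((t, round(x, 4)))
        last_t = t
    return trace

# Three events spread over ten timesteps; nothing is computed for the quiet steps in between.
print(update_on_events([(0, 1.0), (3, 0.5), (10, 2.0)]))
```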
Akida neural processor + TENNs models = awesome (Source: BrainChip)
The name of the game here is "sparse." We're talking sparse data (streaming inputs are converted to events at the hardware level, reducing the volume of data by up to 10x before processing begins), sparse weights (unnecessary weights are pruned and compressed, reducing model size and compute demand by up to 10x), and sparse activations (only essential activation functions pass data to the next layers, cutting downstream computation by up to 10x).
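As a rough back-of-the-envelope on how those three "up to 10x" factors compound (the 10x figures are BrainChip's stated upper bounds, not measurements of mine), and on what weight and activation sparsity mean in practice:

```python
import numpy as np

# BrainChip's stated upper bounds for each stage (illustrative only)
data_x, weights_x, activations_x = 10, 10, 10
print("combined upper-bound reduction:", data_x * weights_x * activations_x, "x")  # 1000x

# Toy view of the weight- and activation-sparsity parts on one 256x256 dense layer
rng = np.random.default_rng(1)
w = rng.normal(size=(256, 256))
w[np.abs(w) < 1.0] = 0.0                     # prune small weights (weight sparsity)
act = np.maximum(rng.normal(size=256), 0.0)  # ReLU-style activations: many exact zeros

# output_j = sum_i act_i * w[i, j]; only rows with a nonzero activation, and only
# their nonzero weights, contribute anything to the result.
needed = np.count_nonzero(w[np.flatnonzero(act), :])
print(f"dense MACs: {w.size}, MACs that actually matter: {needed} "
      f"({w.size / needed:.1f}x fewer)")
```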
Since traditional CNNs activate every neural layer at every timestep, they can consume watts of power to process full data streams, even when nothing is changing. By comparison, the fact that the Akida processes only meaningful information enables real-time streaming AI that runs continuously on milliwatts of power, making it possible to deploy always-on intelligence in wearables, sensors, and other battery-powered devices.
Of course, nothing is easy ("If it were easy, everyone would be doing it," as they say). A significant challenge for people who wish to utilize neuromorphic computing is that spiking networks differ from conventional neural networks. This is why the folks at BrainChip provide a CNN-to-SNN converter. This means developers can start out with a conventional CNN (which they may already have) and then convert it to an SNN to run on an Akida.
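The converter workflow described above usually follows the same broad shape regardless of toolchain: start from a trained float CNN, quantize it to the bit-widths the hardware supports, then convert the quantized graph into its event-driven equivalent. The sketch below is illustrative only; quantize_model and convert_to_snn are hypothetical placeholders of mine, not BrainChip's actual MetaTF entry points, so consult their documentation for the real API:

```python
# Illustrative pipeline only. quantize_model / convert_to_snn are hypothetical
# placeholders for whatever the target toolchain actually provides; they are
# NOT BrainChip API calls.
import tensorflow as tf

# A small conventional CNN of the kind a developer may already have trained.
inputs = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu")(inputs)
x = tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
float_cnn = tf.keras.Model(inputs, outputs)
# ... train float_cnn on your dataset as usual ...

# Step 1 (hypothetical): quantize weights/activations to the widths the target
# hardware supports, e.g. 4-bit for Akida 1 or 8-bit for Akida 2.
# quantized = quantize_model(float_cnn, weight_bits=4, activation_bits=4)

# Step 2 (hypothetical): convert the quantized CNN into its event-driven/spiking
# equivalent, then re-check accuracy on a validation set before deploying.
# snn = convert_to_snn(quantized)
# snn.evaluate(x_val, y_val)
```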
As usual, there are more layers to this onion than you might first suppose. Consider, for example, BrainChip's collaboration with Prophesee. This is one of those rare cases where two technologies fit together as if they'd been waiting for each other all along.
Prophesee's event-based cameras don't capture conventional frames at fixed intervals; instead, each pixel generates a spike whenever it detects a change in light intensity. In other words, the output is already neuromorphic in nature: a continuous stream of sparse, asynchronous events ("spikes") rather than dense video frames.
That makes it the perfect companion for BrainChip's Akida processor, which is itself a spiking neural network. While the output of traditional cameras must be converted into spiking form to feed an SNN, and while Prophesee must normally "de-spike" its output to feed a conventional convolutional network, Akida and Prophesee can connect directly, spike to spike, neuron to neuron, with no format gymnastics or power-hungry frame buffering in between.
This native spike-based synergy pays off handsomely in power and latency. As BrainChip's engineers put it, "We're working in kilobits per second instead of megabits per second." Because the Prophesee sensor transmits information only when something changes, and Akida computes only when spikes arrive, the overall system consumes mere milliwatts, compared to the tens of milliwatts required by conventional vision systems.
That difference may not matter in a smartphone, but it's mission-critical for AR/VR glasses, where the battery is a tenth or even a twentieth the size of a phone's. By eliminating the need to convert between frames and spikes, and avoiding the energy cost of frame storage, buffering, and transmission, BrainChip and Prophesee have effectively built a neuromorphic end-to-end vision pipeline that mirrors how biological eyes and brains actually work: always on, always responsive, yet sipping power rather than guzzling it.
As another example, I recently heard that BrainChip and HaiLa Technologies have partnered to show what happens when brain-inspired computing meets ultra-efficient wireless connectivity. They've created a demonstration that pairs BrainChip's Akida neuromorphic processor with HaiLa's BSC2000 backscatter RFIC, a Wi-Fi-compatible chip that communicates by reflecting existing radio signals rather than generating its own (I plan to devote a future column to this technology). The result is a self-contained edge-AI platform that can perform continuous sensing, anomaly detection, and condition monitoring while sipping mere microwatts of power, small enough to run a connected sensor for its entire lifetime on a single coin-cell battery.
This collaboration highlights a new class of intelligent, battery-free edge devices, where sensing, processing, and communication are all optimized for power efficiency. Akida's event-driven architecture processes only the spikes that matter, while HaiLa's passive backscatter link eliminates most of the radio's energy cost. Together they enable always-on, locally intelligent IoT nodes ideal for medical, environmental, and infrastructure monitoring: places where replacing batteries is expensive, impractical, or downright impossible. In short, BrainChip and HaiLa are sketching the blueprint for the next wave of ultra-low-power edge AI systems that think before they speak, and that do both with astonishing efficiency.
Sad to relate, none of the above was what I wanted to talk to you about (stop groaning; it was worth reading). What I originally set out to tell you about is the newly introduced Akida Cloud (imagine a roll of drums and a fanfare of trombones).
The existing Akida 1, which has been extremely well received by the market, supports 4-, 2-, and 1-bit weights and activations. The next-generation Akida 2, which is expected to be available to developers in the very near future, will support 8-, 4-, and 1-bit weights and activations. Also, the Akida 2 will support spatio-temporal and temporal event-based neural networks.
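To put those bit-widths in perspective, here is a small illustrative calculation (my own numbers and a generic uniform-quantization scheme, not BrainChip's) of what storing a layer's weights at 8, 4, 2, or 1 bit means for memory footprint and rounding error:

```python
import numpy as np

def quantize(w: np.ndarray, bits: int):
    """Uniform symmetric quantization of float weights to signed `bits`-bit integers."""
    if bits == 1:
        # 1-bit weights are typically just the sign, scaled by the mean magnitude
        return np.sign(w).astype(np.int8), float(np.mean(np.abs(w)))
    qmax = 2 ** (bits - 1) - 1                  # 127 for 8-bit, 7 for 4-bit, 1 for 2-bit
    scale = float(np.max(np.abs(w))) / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(128, 128)).astype(np.float32)  # one made-up layer

for bits in (8, 4, 2, 1):   # per the article: Akida 2 tops out at 8-bit, Akida 1 at 4-bit
    q, scale = quantize(w, bits)
    err = float(np.mean(np.abs(w - q * scale)))
    kib = w.size * bits / 8 / 1024
    print(f"{bits}-bit weights: {kib:5.1f} KiB for this layer, mean abs rounding error {err:.4f}")
```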
For years, BrainChip's biggest hurdle in courting developers wasn't its neuromorphic silicon; it was logistics. Demonstrating the Akida architecture meant physically shipping bulky FPGA-based boxes to customers, powering them up on-site, and juggling loan periods. With the launch of the Akida Cloud, that bottleneck disappears.
Engineers can now log in, spin up an instance of the existing Akida 1 running on actual Akida 1 silicon, or the forthcoming Akida 2 running on an FPGA, and run their own neural workloads directly in the browser. Models can be trained, loaded, executed, and benchmarked in real time, with no shipping crates, NDAs, or lab setups required.
Akida Cloud represents more than a convenience upgrade; it's a strategic move to democratize access to neuromorphic technology. By making their latest architectures available online, the chaps and chapesses at BrainChip are lowering the barrier to entry for researchers, startups, and OEMs who want to experiment with event-based AI but lack specialized hardware.
Users can compare the behavior of Akida 1 and Akida 2 side by side, prototype models, and gather performance data before committing to silicon. For BrainChip, the cloud platform also serves as a rapid feedback loop, turning every connected engineer into an early tester and accelerating SNN adoption across the edge AI ecosystem.
And there you have it: brains in the cloud, spikes on the wire, and AI that thinks before it blinks. If this is what the neuromorphic future looks like, I say "bring it on" (just as soon as my poor old hippocampus cools down). But it's not all about me (it should be, but it's not). So, what do you think about all of this?
Last year, a student team using Akida won the Munich Neuromorphic Hackathon, organised by neuroTUM (a student club based at TU München / Technical University of Munich for students interested in the intersection of neuroscience and engineering) and our partner fortiss (who to this day have never officially been acknowledged as a partner from our side, though).
Will Akida again help one of the teams to win this year's challenge?!
The 2025 Munich Neuromorphic Hackathon will take place from 7-12 November.
"The teams will face interesting industry challenges posed by German Aerospace Center (DLR), Simi Reality Motion Systems and fortiss, working with Brain-inspired computing methods towards the most efficient neuromorphic processor."
Simi Reality Motion Systems (part of the ZF Group) has been collaborating with fortiss on several projects, such as SpikingBody ("Neuromorphic AI meets tennis. Real-time action recognition implemented on Loihi 2") and EMMANĂELA (AR/VR).
Applications open: Munich Neuromorphic Hackathon | neuroTUM
Applications open: Munich Neuromorphic Hackathon. For the third continuous year, we are excited to announce that, in collaboration with fortiss, the Munich Neuromorphic Hackathon 2025 will take place between 7-11th of November, with a break on the 9th. The teams will face interesting... (www.linkedin.com)
Neuromorphic Hackathon | neuroTUM
Join the revolution in brain-inspired computing at the Neuromorphic Hackathon. Build the future of AI with neuromorphic technologies. (neurotum.github.io)
Here you go with the Mercedes label and all! I think he smashed it in! Gotta love the Mercedes logo!