Pom down under
Top 20
Well you try and find a gif to suit!
That's a serve, Pom, not a forehand smash volley. I do know SUMMFING

Gregor Lenz on LinkedIn: "I'm very happy to have joined Paddington Robotics as part of an ambitious team! We're focused on applications to solve real-world problems from day 1 and we're moving fast. In between we have breaks playing foosball so really what's not to like? If you want to build with us and help us grow a..." (www.linkedin.com)
Gregor Lenz, until recently CTO of our partner Neurobus (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-456183) and co-author of "Low-power Ship Detection in Satellite Images Using Neuromorphic Hardware" alongside Douglas McLelland (https://arxiv.org/pdf/2406.11319), has joined the London-based startup Paddington Robotics (https://paddington-robotics.com/ - the website doesn't yet have any information other than "Paddington Robotics - Embodied AI in Action"):
View attachment 81384
Paddington Robotics | LinkedIn
Paddington Robotics | 494 followers on LinkedIn. Solving AI + general purpose robotics in the real world | Bridging the gap between the digital world and the physical world, between product and research - solving AI + Robotics, whilst solving real world problems. (www.linkedin.com)
View attachment 81386
Some further info I was able to find about the London-based startup founded late last year, whose co-founder and CEO is Zehan Wang:
View attachment 81419
https://www.northdata.de/Paddington%20Robotics%20LtdĀ·,%20London/Companies%20House%2016015385
View attachment 81420 View attachment 81421 View attachment 81422 View attachment 81423
| Camera Supplier | Sensor | Model Name | Year | Resolution | Dynamic Range (dB) | Max Bandwidth (MEvents/s) |
|---|---|---|---|---|---|---|
| iniVation | Gen2 DVS | DAVIS346 | 2017 | 346×260 | ~120 | 12 |
| iniVation | Gen3 DVS | DVXPlorer | 2020 | 640×480 | 90-110 | 165 |
| Prophesee | Sony IMX636 | EVK4 | 2020 | 1280×720 | 120 | 1066 |
| Prophesee | GenX320 | EVK3 | 2023 | 320×320 | 140 | |
| Samsung | Gen4 DVS | DVS-Gen4 | 2020 | 1280×960 | | 1200 |
| Sensor | Event output type | Timing & synchronization | Polarity info | Typical max rate |
|---|---|---|---|---|
| Sony 2.97 μm | Binary event frames (two separate ON/OFF maps) | Synchronous, ~580 µs "event frame" period | 2 bits per pixel (positive & negative) | ~1.4 GEvents/s |
| OmniVision 3-wafer | Per-event address-event packets (x, y, t, polarity) | Asynchronous, microsecond-level timestamps | Single-bit polarity per event | Up to 4.6 GEvents/s |
| Sony 1.22 μm, 35.6 MP | Binary event frames with row-skipping & compression | Variable frame sync, up to 10 kfps per RGB frame | 2 bits per pixel (positive & negative) | Up to 4.56 GEvents/s |
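To make the difference between the two output styles in the table concrete, here is a minimal, library-free Python sketch (the Event layout, field names, and 580 µs period are illustrative assumptions drawn from the table, not any vendor's SDK) that takes a per-event address-event stream and accumulates it into binary ON/OFF event frames:

```python
from dataclasses import dataclass
from typing import List

import numpy as np

@dataclass
class Event:
    """One address-event packet: pixel coordinates, timestamp (µs), polarity (+1/-1)."""
    x: int
    y: int
    t_us: int
    polarity: int

def events_to_binary_frames(events: List[Event], width: int, height: int,
                            frame_period_us: int = 580) -> List[np.ndarray]:
    """Accumulate an asynchronous event stream into synchronous binary ON/OFF maps,
    roughly the 'binary event frame' style of readout. Returns one (2, H, W) array
    per frame period: channel 0 holds ON events, channel 1 holds OFF events."""
    if not events:
        return []
    frames, t0 = [], events[0].t_us
    current = np.zeros((2, height, width), dtype=np.uint8)
    for ev in events:
        while ev.t_us - t0 >= frame_period_us:      # close out elapsed frame periods
            frames.append(current)
            current = np.zeros((2, height, width), dtype=np.uint8)
            t0 += frame_period_us
        channel = 0 if ev.polarity > 0 else 1
        current[channel, ev.y, ev.x] = 1             # binary: one bit per polarity per pixel
    frames.append(current)
    return frames

# Example: three events on a tiny 4x4 sensor spanning two ~580 µs frame periods.
stream = [Event(0, 1, 0, +1), Event(2, 3, 100, -1), Event(1, 1, 700, +1)]
print(len(events_to_binary_frames(stream, width=4, height=4)))   # -> 2
```

The per-event representation keeps microsecond timing for every individual event, whereas the accumulated frames trade that timing resolution for a fixed, synchronous readout.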
It's been a while @robsmark but great to see some of the "old timers" back on here and commenting. It's been a rough few years. Maybe still a touch early for the golden goose to lay, but us loyal shareholders will be holding on for dear life until that moment hopefully arises!
Nice to see a bit of life back in the old girl... The question on my mind is whether it's been artificially pumped to lure retail in only to have the rug pulled, or whether this was caused by an entity that knows something we don't. Volume has been massive, but the ASX is funded by retail sheep who jump on companies whenever there's upward movement. Let's hope it's the latter and something is announced by the company soon. I've bumped heads with several on here, but I don't think there's a stock on the ASX with a more loyal shareholder base than this one, and it'd be fantastic to be rewarded for many years of patience.
Well you try and find a gif to suit!
There you have it: Accessible and in Plain English. Even the boss will understand!
Another highly entertaining eejournal.com article featuring BrainChip by Max Maxfield:
Bodacious Buzz on the Brain-Boggling Neuromorphic Brain Chip Battlefront
Hold onto your hippocampus because the latest neuromorphic marvels are firing on all synapses. To ensure we're all tap-dancing to the same skirl of the bagpipes, let's remind ourselves that the ter... (www.eejournal.com)
October 9, 2025
Bodacious Buzz on the Brain-Boggling Neuromorphic Brain Chip Battlefront
by Max Maxfield
Hold onto your hippocampus because the latest neuromorphic marvels are firing on all synapses. To ensure we're all tap-dancing to the same skirl of the bagpipes, let's remind ourselves that the term "neuromorphic" is a portmanteau that combines the Greek words "neuro" (relating to nerves or the brain) and "morphic" (relating to form or structure).
Thus, "neuromorphic" literally means "in the form of the brain." In turn, "neuromorphic computing" refers to electronic systems inspired by the human brain's functioning. Instead of processing data step-by-step, like traditional computers, neuromorphic chips attempt to mimic how neurons and synapses communicate: utilizing spikes of electrical activity, massive parallelism, and event-driven operation.
The focus of this column is on hardware accelerator intellectual property (IP) functions, specifically neural processing units (NPUs), that designers can incorporate into their System-on-Chip (SoC) devices. Some SoC developers use third-party NPU IPs, while others develop their own IPs in-house.
I was just chatting with Steve Brightfield, who is CMO at BrainChip. As you may recall, BrainChip's claim to fame is its Akida AI acceleration processor IP, which is inspired by the human brain's cognitive capabilities and energy efficiency. Akida delivers low-power, real-time AI processing at the edge, utilizing neuromorphic principles for applications such as vision, audio, and sensor fusion.
The vast majority of NPU IPs accelerate artificial neural networks (ANNs) using large arrays of multiply-accumulate (MAC) units. These dense matrix-vector operations are energy-hungry because every neuron participates in every computation, and the hardware must move a lot of data between memory and the MAC array.
By comparison, Akida employs a neuromorphic architecture based on spiking neural networks (SNNs). Akida's neurons don't constantly compute weighted sums; instead, they exchange spikes (brief digital pulses) only when their internal "membrane potential" crosses a threshold. This makes Akida event-driven; that is, computation only occurs when new information is available to process.
In contrast to the general-purpose MAC arrays found in conventional NPUs, Akida utilizes synaptic kernels that perform weighted event accumulation upon the arrival of spikes. Each synapse maintains a small local weight and adds its contribution to a neuron's membrane potential when it receives a spike. This achieves the same effect as multiply-accumulate, but in a sparse, asynchronous, and energy-efficient manner that's more akin to a biological brain.
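To see what "weighted event accumulation upon the arrival of spikes" means in practice, here is a purely conceptual Python sketch of an event-driven integrate-and-fire layer. It illustrates the general principle described above; it is not BrainChip's actual Akida microarchitecture.

```python
from typing import List

import numpy as np

class EventDrivenLayer:
    """Toy spiking layer: each incoming spike triggers a sparse weighted accumulation
    into neuron membrane potentials, and a neuron emits an output spike (and resets)
    only when its potential crosses a threshold. Conceptual sketch only."""

    def __init__(self, n_inputs: int, n_neurons: int, threshold: float = 1.0, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(0.0, 0.3, size=(n_inputs, n_neurons))  # synaptic weights
        self.potential = np.zeros(n_neurons)                             # membrane potentials
        self.threshold = threshold

    def on_spike(self, input_index: int) -> List[int]:
        """Handle one input spike: only the synapses fanning out from that input do any
        work, so no spike means no computation (the event-driven property)."""
        self.potential += self.weights[input_index]            # sparse accumulate (MAC-like)
        fired = np.flatnonzero(self.potential >= self.threshold)
        self.potential[fired] = 0.0                            # reset neurons that fired
        return fired.tolist()                                  # indices of output spikes

layer = EventDrivenLayer(n_inputs=64, n_neurons=16)
for spike in [3, 17, 3, 42, 3]:                  # a sparse input spike train
    out = layer.on_spike(spike)
    if out:
        print("output spikes from neurons:", out)
```

Note that between input spikes the layer does nothing at all, which is the behaviour the article contrasts with a continuously clocked MAC array.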
Akida self-contained AI acceleration processor IP (Source: BrainChip)
According to BrainChip's website, the Akida self-contained AI neural processor IP features the following:
- Scalable fabric of 1 to 128 nodes
- Each neural node supports 128 MACs
- Configurable 50K to 130K embedded local SRAM per node
- DMA for all memory and model operations
- Multi-layer execution without host CPU
- Integrate with any Microcontroller or Application Processor
- Efficient algorithmic mesh
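To put the scalability figures in that list into perspective, a quick back-of-the-envelope tally for a maximally configured fabric (assuming the "50K to 130K" SRAM figure means 50-130 KB per node, which the list itself leaves ambiguous):

```python
# Back-of-the-envelope totals for a maximally scaled Akida fabric, using only the
# figures quoted in the feature list above (SRAM assumed to be 50-130 KB per node).
nodes = 128                       # top end of the 1-128 node scalable fabric
macs_per_node = 128               # "each neural node supports 128 MACs"
sram_per_node_kb = (50, 130)      # "configurable 50K to 130K embedded local SRAM per node"

total_mac_units = nodes * macs_per_node
total_sram_kb = (nodes * sram_per_node_kb[0], nodes * sram_per_node_kb[1])

print(f"MAC-equivalent units in a 128-node fabric: {total_mac_units}")          # 16384
print(f"Aggregate local SRAM: {total_sram_kb[0]}-{total_sram_kb[1]} KB "
      f"(~{total_sram_kb[0]/1024:.2f}-{total_sram_kb[1]/1024:.2f} MB)")
```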
Hang on! I just told you that, "In contrast to the general-purpose MAC arrays found in conventional NPUs, Akida utilizes synaptic kernels..." So, it's a tad embarrassing to see the folks at BrainChip referencing MACs on their website. The thing is that, in the case of the Akida, the term "MAC" is used somewhat loosely, more as an engineering shorthand than as a literal, synchronous multiply-accumulate unit like those found in conventional GPUs and NPUs.
While each Akida neural node contains hardware that can perform multiply-accumulate operations, these operations are event-driven and sparsely activated. When an input spike arrives, only the relevant synapses and neurons in that node perform a small weighted accumulation; there's no continuous clocked matrix multiplication going on in the background.
So, while BrainChip's documentation calls them "MACs," they're actually implemented as neuromorphic synaptic processors that behave like a MAC when a spike fires while remaining idle otherwise. This is how the Akida achieves orders-of-magnitude lower power consumption than conventional NPUs, despite performing similar mathematical operations in principle.
Another way to think about this is that a conventional MAC array crunches numbers continuously, with every neuron participating in every cycle. By comparison, an Akida node's neuromorphic synapses sit dormant, only springing into action when a spike arrives, performing their math locally, and then quieting down again. If I were waxing poetical, I might be tempted to say something pithy at this juncture, like "more firefly than furnace," but I'm not, so I won't.
But wait, there's more, because the Akida processor IP uses sparsity to focus on the most important data, inherently avoiding unnecessary computation and saving energy at every step. Meanwhile, BrainChip's neural network model, known as Temporal Event-based Neural Networks (TENNs), builds on a state-space model architecture to track events over time, rather than sampling at fixed intervals, thereby skipping periods of no change to conserve energy and memory. Together, these little scamps deliver unmatched efficiency for real-time AI.
Akida neural processor + TENNs models = awesome (Source: BrainChip)
The name of the game here is "sparse." We're talking sparse data (streaming inputs are converted to events at the hardware level, reducing the volume of data by up to 10x before processing begins), sparse weights (unnecessary weights are pruned and compressed, reducing model size and compute demand by up to 10x), and sparse activations (only essential activation functions pass data to the next layers, cutting downstream computation by up to 10x).
Since traditional CNNs activate every neural layer at every timestep, they can consume watts of power to process full data streams, even when nothing is changing. By comparison, the fact that the Akida processes only meaningful information enables real-time streaming AI that runs continuously on milliwatts of power, making it possible to deploy always-on intelligence in wearables, sensors, and other battery-powered devices.
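Taking the three "up to 10x" figures at face value, the savings multiply rather than add, which is where the watts-to-milliwatts gap comes from. A trivial sanity check (all factors are the article's stated upper bounds, not measurements):

```python
# Compounding the "up to 10x" sparsity factors quoted above; these are the
# article's stated upper bounds, not measured values.
sparse_data = 10          # streaming input converted to events
sparse_weights = 10       # pruned and compressed weights
sparse_activations = 10   # only essential activations propagate

best_case_reduction = sparse_data * sparse_weights * sparse_activations
print(f"Best-case combined reduction in work: {best_case_reduction}x")   # 1000x

# A ~1000x best-case reduction is how a workload that costs watts on a frame-based
# CNN pipeline can plausibly land in the milliwatt range on an event-driven one.
```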
Of course, nothing is easy ("If it were easy, everyone would be doing it," as they say). A significant challenge for people who wish to utilize neuromorphic computing is that spiking networks differ from conventional neural networks. This is why the folks at BrainChip provide a CNN-to-SNN converter. This means developers can start out with a conventional CNN (which they may already have) and then convert it to an SNN to run on an Akida.
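BrainChip's own converter (part of its MetaTF tooling) is what you would use in practice; the sketch below deliberately avoids that API and is only a generic, library-free illustration of the underlying idea of turning a conventional activation map into a sparse spike train (simple rate coding), so the shape of the conversion problem is clear:

```python
import numpy as np

def rate_code(activations: np.ndarray, n_steps: int = 16, seed: int = 0) -> np.ndarray:
    """Convert a conventional CNN activation map (values in [0, 1]) into a binary
    spike train over n_steps timesteps: higher activation -> more spikes.
    Generic rate-coding illustration only; BrainChip's CNN-to-SNN converter works
    differently and is the tool to use for real Akida deployments."""
    rng = np.random.default_rng(seed)
    probs = np.clip(activations, 0.0, 1.0)
    # One Bernoulli draw per timestep per activation value.
    return (rng.random((n_steps, *probs.shape)) < probs).astype(np.uint8)

acts = np.array([[0.05, 0.9],
                 [0.0,  0.4]])        # toy 2x2 activation map
spikes = rate_code(acts)
print("spike counts per unit over 16 steps:\n", spikes.sum(axis=0))
# Strong activations fire often, near-zero activations barely fire at all,
# which is exactly the sparsity an event-driven processor can exploit.
```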
As usual, there are more layers to this onion than you might first suppose. Consider, for example, BrainChip's collaboration with Prophesee. This is one of those rare cases where two technologies fit together as if they'd been waiting for each other all along.
Prophesee's event-based cameras don't capture conventional frames at fixed intervals; instead, each pixel generates a spike whenever it detects a change in light intensity. In other words, the output is already neuromorphic in nature: a continuous stream of sparse, asynchronous events ("spikes") rather than dense video frames.
That makes it the perfect companion for BrainChip's Akida processor, which is itself a spiking neural network. While traditional cameras must be converted into spiking form to feed an SNN, and while Prophesee must normally "de-spike" its output to feed a conventional convolutional network, Akida and Prophesee can connect directly, spike to spike, neuron to neuron, with no format gymnastics or power-hungry frame buffering in between.
This native spike-based synergy pays off handsomely in power and latency. As BrainChip's engineers put it, "We're working in kilobits per second instead of megabits per second." Because the Prophesee sensor transmits information only when something changes, and Akida computes only when spikes arrive, the overall system consumes mere milliwatts, compared to the tens of milliwatts required by conventional vision systems.
That difference may not matter in a smartphone, but it's mission-critical for AR/VR glasses, where the battery is a tenth or even a twentieth the size of a phone's. By eliminating the need to convert between frames and spikes, and avoiding the energy cost of frame storage, buffering, and transmission, BrainChip and Prophesee have effectively built a neuromorphic end-to-end vision pipeline that mirrors how biological eyes and brains actually work: always on, always responsive, yet sipping power rather than guzzling it.
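The "kilobits per second instead of megabits per second" claim is easy to sanity-check with rough numbers (the frame format and event rate below are illustrative assumptions, not figures from BrainChip or Prophesee):

```python
# Rough data-rate comparison: frame-based camera vs. event-based sensor.
# All input numbers are illustrative assumptions, not vendor measurements.

# Frame-based: a modest 640x480 grayscale stream at 30 fps, 8 bits per pixel.
frame_bits_per_s = 640 * 480 * 8 * 30
print(f"Frame camera: {frame_bits_per_s / 1e6:.1f} Mbit/s")        # ~73.7 Mbit/s

# Event-based: a fairly quiet scene producing ~10k events/s, ~40 bits per event
# (x, y, timestamp, polarity packed together).
event_bits_per_s = 10_000 * 40
print(f"Event sensor: {event_bits_per_s / 1e3:.1f} kbit/s")         # ~400 kbit/s

print(f"Ratio: ~{frame_bits_per_s / event_bits_per_s:.0f}x less data to move")
```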
As another example, I recently heard that BrainChip and HaiLa Technologies have partnered to show what happens when brain-inspired computing meets ultra-efficient wireless connectivity. They've created a demonstration that pairs BrainChip's Akida neuromorphic processor with HaiLa's BSC2000 backscatter RFIC, a Wi-Fi-compatible chip that communicates by reflecting existing radio signals rather than generating its own (I plan to devote a future column to this technology). The result is a self-contained edge-AI platform that can perform continuous sensing, anomaly detection, and condition monitoring while sipping mere microwatts of power: small enough to run a connected sensor for its entire lifetime on a single coin-cell battery.
This collaboration highlights a new class of intelligent, battery-free edge devices, where sensing, processing, and communication are all optimized for power efficiency. Akida's event-driven architecture processes only the spikes that matter, while HaiLa's passive backscatter link eliminates most of the radio's energy cost. Together they enable always-on, locally intelligent IoT nodes ideal for medical, environmental, and infrastructure monitoring: places where replacing batteries is expensive, impractical, or downright impossible. In short, BrainChip and HaiLa are sketching the blueprint for the next wave of ultra-low-power edge AI systems that think before they speak, and that do both with astonishing efficiency.
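The coin-cell-lifetime claim is likewise a matter of simple arithmetic; here is an order-of-magnitude check (the cell capacity and average draw are assumptions chosen only to show the scale, not HaiLa or BrainChip figures, and self-discharge is ignored):

```python
# Order-of-magnitude battery-life check for a microwatt-class sensor node.
# Capacity and current draw below are assumptions, not HaiLa/BrainChip figures.
cell_capacity_mah = 220        # typical CR2032 coin cell
cell_voltage = 3.0             # volts
avg_power_uw = 10              # assumed average draw of the whole node, in microwatts

energy_mwh = cell_capacity_mah * cell_voltage            # 660 mWh available
hours = energy_mwh / (avg_power_uw / 1000.0)             # convert µW -> mW
years = hours / (24 * 365)
print(f"~{hours:,.0f} hours, i.e. roughly {years:.1f} years on one coin cell")
```

At a genuinely microwatt-level average draw, the cell's shelf life becomes the limiting factor rather than the electronics, which is the point of the demonstration.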
Sad to relate, none of the above was what I wanted to talk to you about (stop groaning, it was worth reading). What I originally set out to tell you about is the newly introduced Akida Cloud (imagine a roll of drums and a fanfare of trombones).
The existing Akida 1, which has been extremely well received by the market, supports 4-, 2-, and 1-bit weights and activations. The next-generation Akida 2, which is expected to be available to developers in the very near future, will support 8-, 4-, and 1-bit weights and activations. Also, the Akida 2 will support spatio-temporal and temporal event-based neural networks.
For years, BrainChip's biggest hurdle in courting developers wasn't its neuromorphic silicon; it was logistics. Demonstrating the Akida architecture meant physically shipping bulky FPGA-based boxes to customers, powering them up on-site, and juggling loan periods. With the launch of the Akida Cloud, that bottleneck disappears.
Engineers can now log in, spin up a virtual instance of the existing Akida 1 running on an actual Akida 1, or the forthcoming Akida 2 running on an FPGA, and run their own neural workloads directly in the browser. Models can be trained, loaded, executed, and benchmarked in real time, with no shipping crates, NDAs, or lab setups required.
Akida Cloud represents more than a convenience upgrade; it's a strategic move to democratize access to neuromorphic technology. By making their latest architectures available online, the chaps and chapesses at BrainChip are lowering the barrier to entry for researchers, startups, and OEMs who want to experiment with event-based AI but lack specialized hardware.
Users can compare the behavior of Akida 1 and Akida 2 side by side, prototype models, and gather performance data before committing to silicon. For BrainChip, the cloud platform also serves as a rapid feedback loop, turning every connected engineer into an early tester and accelerating SNN adoption across the edge AI ecosystem.
And there you have it: brains in the cloud, spikes on the wire, and AI that thinks before it blinks. If this is what the neuromorphic future looks like, I say "bring it on" (just as soon as my poor old hippocampus cools down). But it's not all about me (it should be, but it's not). So, what do you think about all of this?
Last year, a student team using Akida won the Munich Neuromorphic Hackathon, organised by neuroTUM (a student club based at TU München / Technical University of Munich for students interested in the intersection of neuroscience and engineering) and our partner fortiss (who to this day have never officially been acknowledged as a partner from our side, though).
Will Akida again help one of the teams to win this yearās challenge?!
The 2025 Munich Neuromorphic Hackathon will take place from 7-12 November.
"The teams will face interesting industry challenges posed by German Aerospace Center (DLR), Simi Reality Motion Systems and fortiss, working with Brain-inspired computing methods towards the most efficient neuromorphic processor."
Simi Reality Motion Systems (part of the ZF Group) has been collaborating with fortiss on several projects, such as SpikingBody ("Neuromorphic AI meets tennis. Real-time action recognition implemented on Loihi 2") and EMMANĆELA (AR/VR).
Applications open: Munich Neuromorphic Hackathon | neuroTUM
For the third continuous year, we are excited to announce that, in collaboration with fortiss, the Munich Neuromorphic Hackathon 2025 will take place between 7-11th of November, with a break on the 9th. The teams will face interesting... (www.linkedin.com)
View attachment 91444
Neuromorphic Hackathon | neuroTUM
Join the revolution in brain-inspired computing at the Neuromorphic Hackathon. Build the future of AI with neuromorphic technologies. (neurotum.github.io)
View attachment 91452 View attachment 91453 View attachment 91454
View attachment 91445 View attachment 91446
View attachment 91447
View attachment 91448
View attachment 91450 View attachment 91451
View attachment 91858
Here you go with the Mercedes label and all! I think he smashed it in! Gotta love the Mercedes logo!
Neuromorphic Brain Chip in NVIDIA Jetson | AI Cowboys + UT San Antonio Rewire Edge AI
The AI Cowboys fused neuromorphic computing with NVIDIA Jetson at UT San Antonio. Cutting AI energy use by 90%. Explore spiking neural networks, case studies, market growth, and why this breakthrough could reshape sustainability, defense, and accessibility. (www.theaicowboys.com)
View attachment 91673
From the link you provided.
Quote:
Case Study: Jetson + Akida vs. Conventional AI
At the National Security Collaboration Center inside UT San Antonio's San Pedro I building, the AI Cowboys tested a hybrid system combining the NVIDIA Jetson Orin AGX with a BrainChip Akida PCIe Board.
Interestingly, 3 of the 6 guys from AI Cowboys are retired Air Force, so I wonder if our new vice president of sales is connected?
- Traditional Jetson-only workload: Continuous anomaly detection on video streams at 120 watts.
- Jetson + Akida neuromorphic hybrid: The same workload, the same accuracyāat just 9 watts.
It is certainly interesting and could benefit BRN.
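For what it's worth, the two power figures quoted above translate directly into the "~90% energy cut" headline from the article; simple arithmetic on the numbers as given:

```python
# Simple arithmetic on the two power figures quoted in the case study above.
jetson_only_watts = 120.0
jetson_plus_akida_watts = 9.0

reduction = 1.0 - jetson_plus_akida_watts / jetson_only_watts
print(f"Power reduction: {reduction:.1%}")                               # 92.5%

# Per day of continuous anomaly detection on a single node:
kwh_saved_per_day = (jetson_only_watts - jetson_plus_akida_watts) * 24 / 1000
print(f"Energy saved per node per day: {kwh_saved_per_day:.2f} kWh")      # ~2.66 kWh
```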
About Us
Texas's top AI startup company delivering innovative AI, machine learning, EduTech, and quantum solutions for government, research, and startups. (www.theaicowboys.com)
Ann would be nice
After the squeeze!