BRN Discussion Ongoing

Tothemoon24

Top 20



Traditional LiDAR systems depend on the cloud, creating delays that limit how fast machines can respond. Akida PointNet++ brings AI processing directly to the edge, reducing latency by up to 80% and operating on just a few milliwatts of power.

This means faster perception, safer navigation, and smarter automation for vehicles, drones, robots, and intelligent infrastructure.

With Akida’s efficient on-chip design, devices can interpret complex 3D environments instantly without relying on cloud connectivity.

Learn how BrainChip’s Akida PointNet++ is shaping the future of spatial AI.
👉 https://lnkd.in/gzbyf9Kv
IMG_1590.jpeg

 

Attachments

  • IMG_1589.jpeg (71.5 KB)
Last edited:
  • Like
  • Fire
  • Love
Reactions: 64 users

7für7

Top 20



Traditional LiDAR systems depend on the cloud, creating delays that limit how fast machines can respond. Akida PointNet++ brings AI processing directly to the edge, reducing latency by up to 80% and operating on just a few milliwatts of power.

This means faster perception, safer navigation, and smarter automation for vehicles, drones, robots, and intelligent infrastructure.

With Akida’s efficient on-chip design, devices can interpret complex 3D environments instantly without relying on cloud connectivity.

Learn how BrainChip’s Akida PointNet++ is shaping the future of spatial AI.
👉 https://lnkd.in/gzbyf9Kv
View attachment 91807


Thanks for the post! I was also about to post the same and then deleted it…

I just have one question… When I post something, I usually write one or two sentences about what it’s about and then add the link.

I’ve noticed that you often also take a screenshot of the post’s header with the BrainChip logo, copy and paste the text separately, and then also include the link — all good, of course — but why the screenshot of the header with the BrainChip logo?
 
  • Like
  • Thinking
Reactions: 3 users

7für7

Top 20
Sorry if already posted

“BrainChip experts host an interactive webinar showcasing the benefits of executing AI/ML inference models on BrainChip’s Akida 2 remotely using Akida Cloud.”


 
  • Like
  • Fire
  • Love
Reactions: 14 users

Taproot

Regular
 
  • Like
  • Fire
  • Love
Reactions: 8 users

7für7

Top 20
It’s been a while since we last talked about gaps… is… is there any gap to… is there any gap to close?
Just Asking GIF
 
  • Haha
Reactions: 5 users

Mccabe84

Regular
Fingers crossed we get a short squeeze as they might not be the ones buying 🤞
Screenshot_20251008_104017_Samsung Internet.jpg
 
  • Like
  • Fire
Reactions: 15 users

equanimous

Norse clairvoyant shapeshifter goddess
In response to the speeding ticket yesterday, BrainChip has decided to go again…

image (5).jpg
 
  • Haha
  • Like
  • Love
Reactions: 38 users

7für7

Top 20
Here is the LinkedIn post regarding Akida…LiDAR

 
  • Like
  • Fire
  • Love
Reactions: 10 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
The partnership between Arduino and BrainChip was announced in October 2022


Just to clarify, BrainChip and Arduino have not formally announced a partnership to date. However, there is a connection between BrainChip and Arduino via the Edge Impulse ecosystem.
  • In May 2022, BrainChip and Edge Impulse officially announced a partnership to accelerate AI/ML deployments on BrainChip’s Akida platform.
  • In October 2022, Arduino and Edge Impulse announced a white-label partnership, integrating Edge Impulse Studio directly into Arduino’s online tools.
In a post dated 21 January 2024 (see below), Fact Finder highlighted a project by Christopher Mendez (PRO Content Creator at Arduino) that used BrainChip’s AKD1000 with Edge Impulse to control a TV, air conditioner, or light simply by pointing at them. Using the Akida Development Kit and a Logitech BRIO 4K Webcam, the model achieved excellent accuracy and ultra-low power consumption. Mendez concluded:

"This project leverages the Brainchip Akida Neuromorphic Hardware Accelerator to propose an innovative solution to home automation. It can be optimized to work as a daily used gadget that may be at everyone's house in the near future.”

Fast-forward to today and Qualcomm has now acquired both Arduino (this week) and Edge Impulse (earlier this year), bringing them under the same umbrella.

I believe this creates a very interesting pathway: BrainChip could integrate into the Arduino ecosystem through Edge Impulse’s existing Akida support, via Qualcomm’s new edge AI stack. Or Qualcomm may encourage developer cross-pollination among its new assets (Arduino + Edge Impulse + Snapdragon Spaces, etc.).

So while there’s still no direct BrainChip / Arduino partnership, the ecosystem connections have become much closer and potentially far more strategic.

At least that's what I'm hoping.🤞




Link here #74,284
Screenshot 2025-10-08 at 9.56.18 am.png
 
  • Like
  • Love
  • Fire
Reactions: 38 users

7für7

Top 20
In response to the speeding ticket yesterday, BrainChip has decided to go again…

View attachment 91809

Let’s see where we end up on Friday… Monday at the latest… I think the shorters will take every cent no matter what… I don’t trust that sh…
 

Bravo

If ARM was an arm, BRN would be its biceps💪!



Traditional LiDAR systems depend on the cloud, creating delays that limit how fast machines can respond. Akida PointNet++ brings AI processing directly to the edge, reducing latency by up to 80% and operating on just a few milliwatts of power.

This means faster perception, safer navigation, and smarter automation for vehicles, drones, robots, and intelligent infrastructure.

With Akida’s efficient on-chip design, devices can interpret complex 3D environments instantly without relying on cloud connectivity.

Learn how BrainChip’s Akida PointNet++ is shaping the future of spatial AI.
👉 https://lnkd.in/gzbyf9Kv
View attachment 91807



Quite the coincidence if you ask me.

I was just thinking that BrainChip’s new Akida PointNet++ may be the missing piece in the puzzle behind Qualcomm’s acquisition of Arduino, accelerating its push into robotics and intelligent automation, as discussed a little earlier.

I don't believe that Qualcomm currently has any true equivalent to Akida PointNet++, especially when it comes to real-time, neuromorphic 3D point-cloud processing at the edge, which is a critical capability for next-generation robotics, drones, and autonomous systems.

By processing LiDAR and 3D spatial data directly at the edge, Akida PointNet++ delivers sub-10 ms spatial awareness while drawing only milliwatts of power, potentially giving Qualcomm exactly what its robotics roadmap needs - faster perception, longer battery life, and smarter autonomy.

Combined with Arduino + Edge Impulse (Qualcomm), this could finally democratize neuromorphic robotics, with Akida quietly powering the spatial intelligence beneath it all.

Here's hoping! 🙏
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 35 users



Traditional LiDAR systems depend on the cloud, creating delays that limit how fast machines can respond. Akida PointNet++ brings AI processing directly to the edge, reducing latency by up to 80% and operating on just a few milliwatts of power.

This means faster perception, safer navigation, and smarter automation for vehicles, drones, robots, and intelligent infrastructure.

With Akida’s efficient on-chip design, devices can interpret complex 3D environments instantly without relying on cloud connectivity.

Learn how BrainChip’s Akida PointNet++ is shaping the future of spatial AI.
👉 https://lnkd.in/gzbyf9Kv
View attachment 91807
AI Overview

Over 1.6 million LiDAR units were shipped globally in 2024, a figure that is expected to quadruple by 2030, driven primarily by increasing adoption in passenger cars. The market's massive growth is a clear indicator of the increasing demand for LiDAR technology across various sectors, including autonomous vehicles and advanced driver-assistance systems (ADAS).
 
  • Like
  • Fire
  • Love
Reactions: 14 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Nice to see the ol' share price continuing to head in the right direction!

Hot diggity dog!



200w.gif
 
  • Haha
  • Like
  • Fire
Reactions: 20 users
  • Haha
  • Wow
Reactions: 9 users

miaeffect

Oat latte lover
  • Haha
  • Fire
  • Wow
Reactions: 10 users
From the other site

Here are some benchmarks comparing PointNet++ running on Akida 2 vs NVIDIA Jetson Xavier NX and Orin NX for real-time LiDAR classification at the edge.

1. Benchmark Sources

  • Akida PointNet++: BrainChip LiDAR Point Cloud Model brochure (Oct 2025).
  • Jetson Xavier NX / Orin NX: Derived from public PyTorch PointNet++ benchmarks on ModelNet40 and KITTI, using TensorRT-optimized inference (batch = 1).
    • Xavier NX: 384 CUDA cores, 21 TOPS INT8
    • Orin NX: 1024 CUDA cores, 100 TOPS INT8
  • Power figures are measured in NVIDIA’s MaxN mode, which reflects real deployment on drones / robotics platforms.

2. PointNet++ Performance Comparison

| Metric | Akida 2 | Jetson Xavier NX | Jetson Orin NX |
|---|---|---|---|
| FPS (ModelNet40) | 183 FPS | 65–85 FPS (FP16/INT8) | 110–135 FPS (FP16/INT8) |
| Latency / Frame | 5–6 ms | 12–15 ms | 7–9 ms |
| Power | 50 mW | 10–15 W | 15–25 W |
| Energy / Inference | 0.28 mJ | ~150–200 mJ | ~200–300 mJ |
| Model Size | 1.07 MB | ~10–12 MB | ~10–12 MB |
| Accuracy (ModelNet40) | 81.6 % (4-bit QAT) | 89–90 % (FP32 baseline) | 89–90 % (FP32 baseline) |
| Deployment Mode | Always-on, ultra-low power | Embedded GPU (fan/heat dissipation required) | High-end embedded GPU |

3. What This Shows

  1. Throughput
    • Akida 2 actually outperforms Xavier and Orin NX on raw FPS, despite being neuromorphic and consuming three orders of magnitude less power.
    • Orin NX gets closer but still lags slightly at similar batch sizes.
  2. Power & Energy
    • Akida’s ~50 mW is in a completely different regime than the 10–25 W Jetson modules.
    • That’s ~500×–1000× lower energy per inference, which is decisive for always-on payloads (e.g. drones, satellites, smart infrastructure).
  3. Accuracy Trade-off
    • Akida’s quantized model (4-bit QAT) loses ~8 % absolute accuracy vs FP32, but this is expected and often acceptable for edge classification, especially if upstream sensor fusion provides redundancy.
  4. Form Factor / Thermal
    • Jetsons need active cooling and steady power supply — not trivial on space or micro-UAV platforms.
    • Akida can operate fanless, battery-powered, or solar-powered.
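As a back-of-the-envelope check on the ~500×–1000× energy claim: energy per inference is roughly power draw × per-frame latency. A minimal sketch, assuming the midpoints of the power and latency ranges quoted above (my own interpolation, not official BrainChip or NVIDIA figures):

```python
# Rough energy-per-inference check: energy (mJ) = power (W) x latency (ms),
# since 1 W x 1 ms = 1 mJ. Midpoint values are taken from the ranges
# in the comparison table above.

def energy_mj(power_w: float, latency_ms: float) -> float:
    """Energy per inference in millijoules."""
    return power_w * latency_ms

akida = energy_mj(0.05, 5.6)     # 50 mW, ~5-6 ms -> ~0.28 mJ
xavier = energy_mj(12.5, 13.5)   # midpoint of 10-15 W and 12-15 ms
orin = energy_mj(20.0, 8.0)      # midpoint of 15-25 W and 7-9 ms

print(f"Akida 2:   {akida:.2f} mJ")
print(f"Xavier NX: {xavier:.0f} mJ ({xavier / akida:.0f}x Akida)")
print(f"Orin NX:   {orin:.0f} mJ ({orin / akida:.0f}x Akida)")
```

Both Jetson midpoints land in the 500×–1000× band, so the quoted ratio is internally consistent with the table.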

4. Strategic Takeaway

| Use Case | Best Fit |
|---|---|
| Battery-powered always-on LiDAR classification (e.g., satellite autonomy, drones, infrastructure nodes) | Akida 2: ultra-low power, high FPS, compact |
| Onboard AI co-processor with larger perception stack (e.g., autonomous cars, ground robots) | Jetson Orin NX: higher model flexibility, better FP32 accuracy, but power-hungry |
| Mixed sensor fusion payloads with strict SWaP (e.g., ESA cubesats, tactical drones) | Akida as front-end classifier + Jetson/FPGA for downstream fusion or planning |

Summary Table

| Feature | Akida 2 | Jetson Xavier NX | Jetson Orin NX |
|---|---|---|---|
| FPS | 183 | 65–85 | 110–135 |
| Power | 0.05 W | 10–15 W | 15–25 W |
| Energy/Inference | 0.28 mJ | 150–200 mJ | 200–300 mJ |
| Accuracy | 81.6 % | ≈90 % | ≈90 % |
| Edge Suitability | Always-on | Thermally constrained | High-end only |

Bottom line:

For PointNet++ at the edge, Akida 2 outperforms Jetson Xavier NX and Orin NX on raw FPS, power, and energy efficiency, with a modest accuracy gap from quantization that can be narrowed through improved training and model updates. This is why BrainChip is targeting Akida PointNet++ for autonomous drones, satellites, and infrastructure nodes — it's built for tiny, always-on LiDAR intelligence rather than general AI workloads.

*GPT5
 
  • Like
  • Love
  • Fire
Reactions: 25 users

Tothemoon24

Top 20
Stocks are rising & so am I 🍌


IMG_1594.jpeg

The Hidden Tech in Your Next Wearable: Why Spiking is the New Speed and Efficiency in Hardware.

Most of the AI buzz focuses on massive models in cloud data centers. But the real revolution in low-power intelligence is happening at the very edge: through Neuromorphic Computing.

The challenge for devices like advanced health trackers, industrial sensors, or autonomous drone navigation is simple: continuous processing with crippling power constraints. Running traditional Deep Learning on these devices drains the battery almost instantly.

The solution is brilliant—and brain-inspired.

Enter the Spiking Neural Network (SNN):

Unlike a conventional chip that processes data in large, power-hungry blocks, neuromorphic chips use Spiking Neural Networks (SNNs). These networks only fire (process) a signal when the input data exceeds a certain threshold—just like a biological neuron. This is known as event-based processing.
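The threshold-based firing described above can be sketched with a toy leaky integrate-and-fire (LIF) neuron, the basic building block of most SNNs. This is a hypothetical minimal model for illustration only, not Akida's actual implementation; the threshold and leak values are arbitrary:

```python
# Toy leaky integrate-and-fire (LIF) neuron illustrating event-based
# processing: the neuron integrates input current into a leaky membrane
# potential and only emits a spike (an "event") when the potential
# crosses the threshold. Silent time steps cost no downstream compute.

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Return a 0/1 spike train for a sequence of input currents."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate with leak
        if potential >= threshold:
            spikes.append(1)   # event: spike fires downstream
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)   # no event, no work triggered
    return spikes

# Sparse input: mostly quiet, with two bursts of activity
print(lif_neuron([0.0, 0.3, 0.0, 0.9, 0.8, 0.0, 0.0, 1.2]))
# -> [0, 0, 0, 1, 0, 0, 0, 1]
```

Note how only 2 of the 8 time steps produce spikes: sparse inputs yield sparse events, which is exactly where the energy savings of event-based processing come from.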

Why this is a game-changer for niche industries:

1. Extreme Efficiency: A 100x to 1000x improvement in energy efficiency compared to standard GPUs for certain tasks. Imagine a sensor that lasts for years, not days.
2. Instant Reaction: Processing occurs in real-time, right where the data is created (Edge AI), making it perfect for safety-critical systems like autonomous vehicles or real-time medical monitoring where latency can't be tolerated.
3. Adaptive Learning: The SNN architecture inherently supports "one-shot" or continuous on-device learning, adapting to its environment without massive retraining cycles.

If you are a Product Manager, Hardware Engineer, or Investor building solutions that require high-speed, continuous processing under severe power budgets, you need to be integrating this ecosystem now.

What niche application (outside of health) is perfectly primed for a move to neuromorphic processing? Share your 'killer app' idea below! 👇

#NeuromorphicComputing #EdgeAI #HardwareInnovation #IoT #SNN #FutureofTech #DeepLearning

IMG_1595.jpeg

 
  • Like
  • Fire
  • Haha
Reactions: 15 users

7für7

Top 20
@Esq.111 is your parrot still alive?

Parrot Birds With Arms GIF
 
  • Haha
  • Like
Reactions: 4 users