BRN Discussion Ongoing

manny100

Top 20
Politicians come and go, but it's only tech advances that really cause or allow change. That is why we are in the 'box' seat.
It's just patience from here.
... and that is why I always say that our tech is our value safety net.
 
  • Like
  • Fire
  • Love
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
A competitor's product with analog neurons.
Perhaps someone could make a comparison to Akida.



ChatGPT's Response


Here’s a concise, apples-to-apples comparison of Neuronova’s H1 vs BrainChip’s Akida (Gen-2) based on what’s publicly available.

What each one is (at a glance)

  • Neuronova H1 — a fully-analog neuromorphic processor for spiking neural networks (SNNs) aimed at ultra-low-power edge sensing; the company positions it as “doing more with less,” with a roadmap toward sensor-in-package integration. Public materials emphasise analog in-memory/time-domain compute and extremely low power (trade press cites <1 µW for always-on sensing use cases). (Neuronova; Tech.eu; NeuromorphicCore.ai)
  • BrainChip Akida (Gen-2) — a fully-digital neuromorphic processor IP (licensable block or FPGA dev platform) designed for event-based inference and on-chip/edge learning; supports CNN→SNN conversion and BrainChip’s TENN (Temporal Event-based Neural Networks) models, with model cards and documented HW limits. (BrainChip; doc.brainchipinc.com)

Side-by-side

| Dimension | Neuronova H1 | BrainChip Akida Gen-2 |
|---|---|---|
| Core compute style | Fully analog SNN compute at the sensor/edge | Fully digital event-domain neural processor IP |
| Target use | Ultra-low-power always-on sensing; roadmap to sensor-level integration | Edge AI inference and on-device learning (incremental/edge learning) for wearables, IoT, automotive, space, etc. |
| Power narrative | “Do more with less”; media reporting around sub-µW always-on claims (for tiny tasks) | Milliwatts-class continuous AI; event sparsity to cut energy; supports edge learning without cloud round-trips (Tech.eu; PoliHub) |
| Sensors | Messaging about “towards full sensor integration” (front-ends that adapt to drift/non-ideality) (Neuronova) | Works with conventional frame sensors and event cameras; model zoo & conversion tools; not sensor-hardwired |
| Software/tooling | Public details limited (no open model zoo/docs on H1 at time of writing) | Published product brief, capability limits, example docs, and Akida 2 model cards (incl. 8-bit/4-bit quantisation) (BrainChip; doc.brainchipinc.com) |
| Learning | (Not detailed publicly) | Supports on-chip/edge learning; TENNs for temporal efficiency; incremental learning flows (BrainChip) |
| Business model | Early-stage startup; pre-seed funded, building chips | IP/licensing + dev platforms; 2nd-gen IP available (docs, model zoo) (PoliHub) |


BrainChip – partners & collaborators

Neuronova – partners & collaborators

  • On Neuronova’s site and recent profiles/funding notes, no Arm (or similar large-ecosystem) partnership is listed publicly. Pages focus on the H1 analog neuromorphic chip and funding news; no partner roster is shown.

Practical takeaways

  • If you need provable tools, docs, and near-term deployability: Akida Gen-2 is more mature and documented today (model zoo, conversion pipelines, HW limits), with a defined IP path for partners. (BrainChip)
  • If your top priority is extreme standby/always-on power at the sensor itself: Neuronova’s H1 vision (analog SNN + sensor-in-package) is compelling for microwatt-class sentry tasks, but public specs are thin; you’ll likely need NDAs/eval hardware to validate claims and fit. (Neuronova; Tech.eu)
  • Architectural trade-off: analog neuromorphics can be exceptionally efficient but may face calibration/variability and tooling challenges; digital neuromorphics (Akida) are somewhat higher power but typically easier to integrate, reproduce, and scale across foundries and toolchains. (This analog-vs-digital tooling friction is a theme in SNN literature.) (MDPI)

Where the info is thin (so you know what to ask vendors)

  • Neuronova H1: exact MAC/synapse counts, memory topology, input modalities, supported learning rules, compiler/SDK maturity, I/O (SPI/PCIe), latency/accuracy vs standard benchmarks — not detailed on the public site. (Neuronova)
  • Akida Gen-2: plenty of docs exist, but real-world PPA depends on the mapped model and operating point; check the model cards and HW-limit page for your network shapes and quantisation plans. (BrainChip; doc.brainchipinc.com)

Bottom line

  • Akida Gen-2 is the safer pick if you want a documented, digital, licensable neuromorphic platform with existing tools and examples today. (BrainChip)
  • Neuronova H1 looks aimed at ultra-low-power sensing with a bold analog approach and sensor-integration roadmap, but you’ll need direct vendor engagement to validate specs and tooling readiness beyond the website/press. (Neuronova)
 
  • Like
  • Fire
  • Love
Reactions: 24 users

keyeat

Regular
1760419746836.png


Hungry Spongebob Squarepants GIF by nounish ⌐◨-◨
 
  • Sad
  • Fire
  • Haha
Reactions: 9 users

genyl

Regular
Trying to close ya shorts

View attachment 91948
Is this a popular guy in here? Insulting people who are sceptical and are showing reasonable criticism towards our beloved company painchip? No, I'm not a shorter. I've been invested here since 2020 and I'm tired of waiting. I allow myself to be sceptical since there is reason for it. It might not be good technically, but keep showing your true colors. Clown
 
  • Like
  • Fire
Reactions: 5 users

7für7

Top 20
American Housewife Slap GIF by ABC Network
 
  • Haha
  • Wow
Reactions: 3 users

Frangipani

Top 20
Yet another BRN employee has left…

Our former ML Solutions Architect Dhvani Kothari is now an AI Solutions Architect with Contextual AI:


13E2826E-6FCD-471B-BD88-227F7278100E.jpeg



CD2CCC7E-E925-463D-8D18-5A31DF45000C.jpeg




9D6547B7-3C33-4014-B2EB-D95A8317B9ED.jpeg
 
Last edited:
  • Sad
  • Like
  • Love
Reactions: 9 users

Guzzi62

Regular
Is this a popular guy in here? Insulting people who are sceptical and are showing reasonable criticism towards our beloved company painchip? No, I'm not a shorter. I've been invested here since 2020 and I'm tired of waiting. I allow myself to be sceptical since there is reason for it. It might not be good technically, but keep showing your true colors. Clown
That can be debated; he tries hard to be funny, often insulting people in the process.

Personally, I don't care. It's not his fault he is a clown and treats the forum like he is in the pub after work.

You have as much right to be here as everybody else, so just keep on posting and ignore the posters you don't like; you can put them on ignore and not see their posts anymore.

I think most here are tired of waiting. If they don't deliver before the next AGM, they are done for and can sell the company as far as I am concerned.
It's not acceptable that they keep on printing shares to keep the lights on, hollowing out shareholders' part of the company.
Sean said that he expects orders of 9 million bucks this year; the window is closing, but I will give him time before I get harsher in my criticism.
 
  • Like
Reactions: 4 users

Frangipani

Top 20
As much as I like your research Frangi, please show the same courtesy as you would expect from others on this forum. Namely, I asked you a question a number of weeks ago, but you chose not to respond. Now, for someone who likes to comment on other posters, for good or bad, can you give me and others your view on how you personally think BrainChip is tracking towards a successful outcome? Going by the amount of time you spend showcasing facts, you must have formed an opinion one way or another.

Be assured, Australians only sue if you are caught out lying... appreciate an honest response this time!

Best regards....Texta 🍷🍷

Hi @TECH,

so you asked me for an honest response to your question of how I “personally think BrainChip is tracking towards a successful outcome”?

Voilà: Well, I believe the successful outcome that all of us genuine shareholders hope for will ultimately depend on an interplay of multiple factors, but primarily on the capability of our management to convert the apparently super-high customer interest (if we are to believe our CEO) into more actual sales.

My BrainChip journey over the past 3.5 years has convinced me that the tech is real and has been and continues to be validated by researchers in both academia and industry all over the world. At the same time, though, it has taught me that the adoption of neuromorphic technology is taking much longer than anticipated.

The amount of time I voluntarily dedicate to this forum and the content I contribute should give you a hint that I’m optimistic in principle, yet not uncritical of our company. I haven’t sold a single one of my BRN shares - on the contrary, I’ve been accumulating ever since I first bought in. However, I would definitely not go as far as to claim that BrainChip had reached its tipping point and that I personally no longer see any risk, like FF did back in March 2024. Just recall the financial predicament our partner Prophesee found themselves in about a year ago - and they were much further advanced on their commercialisation path than we currently are.

There’s a limit to what my and anyone else’s sleuthing and dot-joining can tell us about a future “successful outcome”.
After all, we are not privy to what’s going on behind closed doors: whether there are currently any prospective customers about to sign on the dotted line to generate much-needed revenue for our company, how many NDAs are currently in place, and how many of those will eventually translate into a deal beyond initial evaluation.

None of us without insider knowledge can predict whether our company’s roadmap will go ahead as planned (which would be awesome), how our non-neuromorphic and neuromorphic competitors in the Edge AI space will develop over the next few years and how many downselections we will or we won’t make because of them, which employees will leave us and why, what exceptional new talents we will be able to attract, how global politics will play out and how world markets will react, what effect certain decisions taken by our management will have on the BRN share price, whether or not BrainChip will be acquired one day etc.

All these and other hard- or impossible-to-predict factors may play a role to a greater or lesser extent. Excellent technology alone does not guarantee a “successful outcome” in a commercial sense.

Yes, the groundwork has been laid, the ecosystem is being built, and the validation by others is highly encouraging. But “successful” also means that more deals generating meaningful revenue need to follow at some point. And so we wait. Understandably, patience is wearing thin for many long-term holders, especially those whose age and/or health realistically means they cannot afford to wait for much longer, if they themselves - and not their children and grandchildren, if they even have any - would like to enjoy the juicy fruit they have been hoping to harvest.

Yet some frustrated posters blame BRN management alone and overlook the fact that it was also fellow forum users, here and elsewhere, who have been creating unrealistic expectations and hype over the years, for which the company cannot be made solely responsible. My BrainChip journey has also been an intriguing psychology class about the world of echo chambers.

Maybe that’s not exactly the response you wanted to hear from me, but I’m afraid I can’t answer your question in a more honest way than that.


Now, before you initially asked me your question the other day, I had actually asked you to please provide proof to support your claim that there are Accenture patents that do not refer to Loihi as a “neuromorphic research chip” at some point in the patent.

So since it was you who brought up the topic of courtesy, please now also have the courtesy to respond to me on that matter. I won’t frame it the same way as you did with me, though, by claiming you chose not to respond; instead, I’ll give you the benefit of the doubt that it may have merely slipped your mind.

Best regards
Frangipani ☕☕
 
  • Like
  • Love
  • Fire
Reactions: 25 users

Learning

Learning to the Top 🕵‍♂️
Screenshot_20251014_193312_LinkedIn.jpg

Screenshot_20251014_193328_LinkedIn.jpg



Learning
 
  • Like
  • Wow
  • Fire
Reactions: 20 users

Rach2512

Regular
View attachment 91961
View attachment 91962


Learning


Thanks for sharing @Learning.

Does that say Lidar System?

Screenshot_20251014_164419_Samsung Internet.jpg

Screenshot_20251014_164413_Samsung Internet.jpg
 
  • Like
  • Fire
  • Wow
Reactions: 14 users

Frangipani

Top 20
Looks like BrainChip’s coming to this event

We’re thrilled to reveal the incredible line up of panel speakers coming to Semiconductor Australia 2025.

Three feature panels will bring together some of the most influential voices across defence, aerospace, chips, investment, and quantum to tackle the big issues shaping Australia’s sovereign capability and strategic future.

Our feature panel ‘Leading from the Front: tech sovereignty, strategic deterrence, and building Australia’s innovation arsenal’, will examine local success stories and Australia's rising sovereign capability in defence and dual-use technologies.

As global tensions reshape supply chains and strategic alliances, the panel will ask the hard questions: how do we leverage sovereign technologies to strengthen deterrence, attract strategic capital, and integrate cutting-edge innovations into defence and industry?

Moderated by Bell Potter Securities’s James Berman, the discussion will explore how Australia can lead from the front in building an innovation arsenal that keeps pace with the world’s fastest-moving competitors.

The panel includes Defence Council’s Dr Jens Goennemann, DroneShield Angus Bean, Hypersonix Inc. Matt A. Hill, Visionary Machines Sonny Foster, and Wrays | Intellectual Property Paula Adamson.

Yup, BrainChip is indeed coming to Semiconductor Australia, although not participating in the above panel discussion “Leading from the Front” with DroneShield.

Instead, Sean Hehir will be representing BrainChip in Session Two, “Semiconductors Power the Universe”, alongside the CEOs of Weebit Nano (according to whom our two companies had been in discussions “on multiple fronts” at the time of a March 2025 Retail Investor Briefing), 2D Generation and Archer Materials.


89B09A47-3CE2-464C-987A-BCF0E1E8CC23.jpeg


AC2DA016-9A34-43AB-B703-78A576745C82.jpeg
 
  • Like
  • Thinking
Reactions: 6 users

Diogenese

Top 20
View attachment 91961
View attachment 91962


Learning

https://www.bing.com/images/search?...tomobile&cw=1006&ch=704&ajaxhist=0&ajaxserp=0

1760432596052.png




The original Jonckheere Aerodynamic Coupe was built in Belgium by Henri Jonckheere and his son Joseph Jonckheere in 1935. Starting from a 1925 Rolls-Royce Phantom I, they added streamlined coachwork with twin sunroofs.

1760432698435.png
 
  • Like
  • Love
  • Fire
Reactions: 10 users
View attachment 91961
View attachment 91962


Learning


1760433748169.png


1760434335011.png



Wow — BrainChip’s Akida is featured in the new Mercedes show car! I’m not entirely sure whether the Level 4 automated driving system with LiDAR is powered by Akida, but it seems highly likely. Otherwise, BrainChip probably wouldn’t have suddenly published that LiDAR point cloud.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 17 users

manny100

Top 20
Yet another BRN employee has left…

Our former ML Solutions Architect Dhvani Kothari is now an AI Solutions Architect with Contextual AI:


View attachment 91958


View attachment 91959



View attachment 91960
Taking a positive view, there are reportedly an increasing number of neuromorphic start-ups, which we are far more advanced than. They will 'head-hunt' BrainChip staff for sure. It's an indication that the industry is growing.
 
  • Like
  • Love
Reactions: 5 users
In Level 4 (L4) automated driving, a LiDAR (Light Detection and Ranging) system is one of the most critical sensors enabling the vehicle to perceive and understand its surroundings without human intervention. Here's what it does in detail:


🧠 1. 3D Environment Mapping

LiDAR emits laser pulses (typically millions per second) and measures how long they take to bounce back from surrounding objects.

  • This creates a high-resolution 3D “point cloud” — a map of the car’s environment in real time.
  • It provides exact distance, size, and shape information of obstacles, road edges, buildings, pedestrians, and vehicles — far more accurate than a camera.
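
To ground the ranging arithmetic, here is a minimal Python sketch (illustrative values and function names only, not any vendor's code) of how one return becomes a 3D point: one-way distance is c·Δt/2, and the beam's azimuth/elevation convert that range into Cartesian coordinates.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def return_to_point(dt_s, azimuth_rad, elevation_rad):
    """Turn one LiDAR return (round-trip time + beam angles) into an (x, y, z) point."""
    r = C * dt_s / 2.0  # one-way distance: the light travelled out and back
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A pulse that echoes back after ~66.7 ns corresponds to an object ~10 m ahead:
print(return_to_point(66.7e-9, 0.0, 0.0))  # -> roughly (10.0, 0.0, 0.0)
```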

👀 2. Object Detection and Tracking

  • The system can identify and classify nearby objects (cars, bikes, humans, road barriers, etc.).
  • It continuously tracks the position and speed of moving objects, helping predict their behavior (e.g., a pedestrian crossing the road).
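
As a toy illustration of the tracking step, a hypothetical constant-velocity predictor (production trackers wrap this idea in a Kalman filter to handle noise):

```python
def predict_next(p_prev, p_curr, dt):
    """Constant-velocity extrapolation: estimate velocity from two sightings,
    then predict the position one time step ahead."""
    velocity = [(c - p) / dt for p, c in zip(p_prev, p_curr)]
    return [c + v * dt for c, v in zip(p_curr, velocity)]

# Pedestrian seen at x = 2.0 m, then x = 2.6 m half a second later:
print(predict_next([2.0, 0.0], [2.6, 0.0], 0.5))  # -> [3.2, 0.0] expected next
```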

🧭 3. Localization

LiDAR helps the vehicle know exactly where it is on a map with centimeter-level accuracy.

  • By comparing the current LiDAR point cloud to a high-definition (HD) map, the vehicle can determine its precise position and orientation, even where GPS is unreliable (like tunnels or urban canyons).
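
A minimal sketch of the geometry behind that map matching, assuming the scan points have already been paired with their nearest map points; this is the classic Kabsch/SVD step that ICP-style localisers iterate:

```python
import numpy as np

def kabsch_align(scan_pts, map_pts):
    """One alignment step: find rotation R and translation t that best map
    scan_pts onto already-matched map_pts (both N x 3 arrays). Full ICP
    re-matches nearest neighbours and repeats until convergence."""
    c_scan, c_map = scan_pts.mean(axis=0), map_pts.mean(axis=0)
    H = (scan_pts - c_scan).T @ (map_pts - c_map)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_map - R @ c_scan
    return R, t  # the pose correction that localises the vehicle on the map
```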

⚙️ 4. Sensor Fusion

L4 systems combine LiDAR data with inputs from:

  • Cameras (for color, traffic signs, lane markings)
  • Radar (for speed and distance in poor weather)
  • Ultrasonic sensors (for close-range detection)
This sensor fusion gives the car a complete and robust understanding of its environment, crucial for safe autonomous decisions.
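
One simple fusion principle, sketched with hypothetical numbers: inverse-variance weighting, where the fused estimate leans toward whichever sensor is currently less noisy (real stacks use Kalman-style filters over many states):

```python
def fuse_ranges(lidar_range, lidar_var, radar_range, radar_var):
    """Inverse-variance weighting: the fused estimate trusts the less noisy
    sensor more, and is more certain than either sensor alone."""
    w_l, w_r = 1.0 / lidar_var, 1.0 / radar_var
    fused = (w_l * lidar_range + w_r * radar_range) / (w_l + w_r)
    return fused, 1.0 / (w_l + w_r)  # (estimate, fused variance)

# Clear weather: LiDAR very precise (sigma 1 cm), radar coarser (sigma 0.5 m):
print(fuse_ranges(25.02, 0.01**2, 24.60, 0.50**2))  # stays close to 25.02
```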


🚗 5. Path Planning and Obstacle Avoidance

With the detailed 3D model of its surroundings:

  • The car’s AI can plan safe paths, calculate maneuvers, and avoid collisions in real time.
  • In L4 autonomy, the LiDAR data supports redundant decision-making, meaning the vehicle can still drive safely even if other sensors fail.
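
A deliberately crude sketch of the obstacle-avoidance check (hypothetical corridor dimensions; real planners score entire candidate trajectories):

```python
def path_is_clear(points, half_width=1.5, lookahead=30.0):
    """Corridor test in the vehicle frame (x forward, y left, z up): any
    LiDAR return inside the box ahead means the planner must brake or swerve."""
    return not any(0.0 < x < lookahead and abs(y) < half_width
                   for x, y, _z in points)

cloud = [(12.0, 0.4, 0.3), (8.0, 5.0, 0.1)]  # one return sits in our lane
print(path_is_clear(cloud))  # -> False: obstacle 12 m ahead, brake/replan
```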

⚡ 6. Redundancy and Safety

L4 autonomous vehicles rely on multiple LiDAR units (e.g., 360° roof-mounted plus corner LiDARs) to ensure complete coverage and fail-safe operation.
This redundancy is vital for hands-off, eyes-off driving — a key requirement for Level 4 autonomy.


Summary Table

| Function | Description | Why It Matters for L4 |
|---|---|---|
| 3D Mapping | Builds real-time environment model | Enables perception in all lighting conditions |
| Object Detection | Identifies vehicles, pedestrians, etc. | Critical for collision avoidance |
| Localization | Determines exact position | Keeps the vehicle correctly aligned with maps |
| Sensor Fusion | Combines LiDAR with radar/cameras | Increases reliability and accuracy |
| Path Planning | Guides safe navigation | Enables full autonomy without driver input |
| Redundancy | Backup sensing system | Ensures safety in system failures |

 
  • Fire
  • Love
  • Like
Reactions: 3 users

🚘 LiDAR + BrainChip Akida (L4 Mercedes) vs Tesla Full Vision Autonomous Driving


LiDAR + BrainChip Akida (L4 Mercedes)


Pros:


  • ✅ True 3D Perception: LiDAR provides centimeter-accurate distance and shape data in real time.
  • ✅ Superior Depth and Object Recognition: Does not rely on estimating depth from 2D images.
  • ✅ All-Weather and Night Operation: Works reliably in low light, fog, rain, or glare.
  • ✅ Hardware Redundancy: Combines LiDAR, radar, and cameras for safety-critical backup.
  • ✅ Ultra-Low Latency Processing: Akida’s neuromorphic chip processes LiDAR point-cloud events with microsecond-scale latency.
  • ✅ Energy-Efficient: Consumes only a few hundred milliwatts — ideal for edge AI.
  • ✅ On-Device Learning: Can adapt locally without needing cloud retraining.
  • ✅ Regulatory Path to L4: Meets ISO 26262 (ASIL-D) safety requirements, enabling certified driverless modes.
  • ✅ Scalable Edge Autonomy: Enables distributed intelligence across vehicle sensors.

Cons:


  • ❌ Higher Hardware Cost: LiDAR sensors and neuromorphic chips add expense.
  • ❌ Complex Integration: Requires sophisticated sensor-fusion and calibration.
  • ❌ Bulkier Setup: Multiple LiDAR units increase weight and design complexity.
  • ❌ Limited Data Scale: Fewer vehicles collecting real-world data compared with Tesla’s fleet.



Tesla Full Vision System (FSD)


Pros:


  • ✅ Low Hardware Cost: Uses inexpensive cameras only.
  • ✅ Sleek Design: No LiDAR domes — simple aesthetics.
  • ✅ Fleet Learning Advantage: Millions of cars continuously send data for model improvement.
  • ✅ Rapid Software Iteration: Updates can be rolled out over-the-air easily.
  • ✅ Excellent Visual Pattern Recognition: Leverages massive datasets for visual reasoning.

Cons:


  • ❌ 2D-Only Perception: Estimates depth; less reliable in 3D spatial mapping.
  • ❌ Poor Performance in Bad Weather or Night: Camera vision degrades with fog, rain, or glare.
  • ❌ High Compute Load: Requires large GPU processing power and high energy.
  • ❌ No True Sensor Redundancy: Fails if cameras are blinded or obstructed.
  • ❌ No On-Device Learning: Models must be retrained centrally.
  • ❌ Regulatory Limitations: Currently approved only for L2–L3 assist, not L4 autonomy.



🧩 Overall Comparison


| Category | LiDAR + Akida (L4 Mercedes) | Tesla FSD (Vision-Only) | Which Is More Advanced |
|---|---|---|---|
| Depth Perception | Direct 3D laser distance | 2D image estimation | ✅ LiDAR + Akida |
| Weather Reliability | Works in fog, rain, night | Vision fails in poor conditions | ✅ LiDAR + Akida |
| Processing Efficiency | Neuromorphic low-power chip | GPU-heavy, high energy | ✅ LiDAR + Akida |
| Learning Adaptation | On-device continual learning | Central server training only | ✅ LiDAR + Akida |
| Safety Redundancy | Multi-sensor failover | Vision only | ✅ LiDAR + Akida |
| Regulatory Readiness | Meets ASIL-D standards | L2/L3 only (FSD beta) | ✅ LiDAR + Akida |
| Cost and Simplicity | High cost / complex setup | Low cost / simple hardware | ✅ Tesla FSD |
| Fleet Data Scale | Limited deployment | Massive global fleet | ✅ Tesla FSD |



🏁 Verdict


LiDAR + BrainChip Akida (Mercedes L4) is more advanced in terms of:


  • Real-time 3D perception
  • Safety redundancy
  • Regulatory compliance
  • Energy efficiency
  • Neuromorphic edge AI

Tesla Vision-Only FSD is more scalable and cost-efficient, ideal for rapid consumer deployment,
but not yet capable of certified Level 4 autonomy due to its limited sensor redundancy and environmental reliability.


Personal Opinion

In my view, the LiDAR + BrainChip Akida approach represents a major leap forward in autonomous driving.
Its event-based neuromorphic processing allows it to handle LiDAR point clouds and sensor data much faster and far more efficiently than Tesla’s purely vision-based Full Self-Driving system.

While Tesla’s system relies on heavy GPU computation and large centralized training pipelines, Akida’s spiking neural network architecture processes events as they happen, using a fraction of the power and latency.
This results in real-time situational awareness, low energy consumption, and higher reliability, especially in complex or unpredictable environments where vision alone struggles.
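
As a toy illustration of that event-based principle (plain NumPy, not BrainChip's actual SDK): keep only the entries that changed since the last frame, so compute scales with scene activity rather than sensor resolution.

```python
import numpy as np

def to_events(prev_frame, frame, threshold=0.1):
    """Event-style sparsification: emit (row, col, polarity) only where the
    signal changed by more than `threshold`."""
    diff = frame.astype(float) - prev_frame.astype(float)
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[rows, cols]).astype(int)
    return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))

prev = np.zeros((4, 4))
curr = prev.copy()
curr[1, 2] = 1.0  # one pixel brightens
print(to_events(prev, curr))  # -> [(1, 2, 1)]: one event instead of 16 pixels
```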

In short, this new approach processes information more like a human brain than a camera, giving it a decisive edge in achieving true Level 4 autonomy.
 
  • Like
  • Love
  • Fire
Reactions: 12 users

Boab

I wish I could paint like Vincent
View attachment 91961
View attachment 91962


Learning
Everything old is new again, although this one is an Alfa Romeo.
1760437178622.png
 
  • Like
  • Love
Reactions: 6 users

Boab

I wish I could paint like Vincent
  • Like
Reactions: 2 users

Learning

Learning to the Top 🕵‍♂️
Last edited:
  • Fire
  • Like
Reactions: 4 users