BRN Discussion Ongoing

Qualcomm
 

Attachments

  • Screenshot_20250628_164107_LinkedIn.jpg
    Screenshot_20250628_164107_LinkedIn.jpg
    244.3 KB · Views: 202
  • Sad
  • Like
  • Haha
Reactions: 3 users
  • Fire
Reactions: 1 user
Exoswan's thoughts from a few days ago.




Top Neuromorphic Computing Stocks 2025: Pure-Play Watchlist​

Last modified: June 24, 2025


BrainChip Holdings (ASX: BRN)​

HQ: Australia; Pure-play in digital spiking neural processors for edge AI.

BrainChip is a small Australian chip company with one big idea: put a brain-like computer at the sensor itself. Its Akida processor is a fully digital, event-driven neural network that only “fires” when an input changes, so tasks such as keyword spotting, vibration analysis or security vision can run for months on a coin-cell battery. Earlier this year, the firm released a second-generation core that adds vision-transformer layers and temporal event networks and began shipping the design on a simple M.2 card so engineers can test it in any PC.
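The "fires only when an input changes" idea can be sketched as a toy delta encoder. This is purely illustrative and not Akida's actual event-coding scheme; the threshold, sample values and function name are made up:

```python
import numpy as np

def delta_events(stream, threshold=0.05):
    """Emit (index, delta) events only when the input changes by >= threshold."""
    last = stream[0]
    events = []
    for i, x in enumerate(stream[1:], start=1):
        if abs(x - last) >= threshold:
            events.append((i, x - last))  # downstream compute runs only here
            last = x
    return events

# 8 samples, but only the two real changes would trigger any processing.
stream = np.array([0.0, 0.0, 0.0, 0.3, 0.3, 0.31, 0.9, 0.9])
events = delta_events(stream)
```

For a mostly static sensor feed (a quiet room, an idle machine), almost no events fire, which is where the months-on-a-coin-cell figures come from.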

The bullish view is direct. Akida already ships in plug-and-play modules and a growing set of microcontroller boards; each volume design win could turn into a royalty stream backed by the MetaTF software tool-chain. Because power efficiency is measured in microwatts the chip competes where GPUs never fit—hearing aids, industrial sensors, drones. Patents and first-mover status give BrainChip room to run if neuromorphic demand finally arrives.

Yet the risks are plain. Annual revenue is still tiny and R&D burn exceeds sales, so dilution is the default funding source. Bigger semiconductor houses can cross-subsidize their own spiking-network blocks and bundle them for free. The technology also forces customers to rethink algorithms; without a larger field team, design cycles can drag for years. Buying the stock therefore means betting that 2025–2027 will bring visible product launches, not just more press releases. If that catalyst materializes, the company’s tiny market cap could re-rate sharply upward.
 
  • Like
  • Fire
  • Thinking
Reactions: 29 users
Sean has stated at least a couple of times now that when new technological frontiers are created, it is a bit of a "Wild West" for a while, until two or three dominant players come to the fore.

He wants "us" to be one of those dominant players and so, obviously, do we!
I would think this would look like something of a 20 to 30% market share of our targeted markets combined.

With our apparent foothold lead in the premier Space, Military and Medical fields, this is well within reach, in my opinion.
But it's obviously the big breakthrough in "bread and butter" consumer markets that we eagerly seek.

Despite what our 20 cent share price and lack of any serious revenue show, "we" have been laying the foundations and groundwork to become one of those dominant players for some time now.

Tony Lewis is hinting very strongly at a new technological update coming imminently, which, while strategically important and value-adding from an IP perspective, is not the kind of announcement we really need.


I know Sean must have mixed up the time zones, as "his" Friday has only recently ended and the need to announce a new large IP deal on the ASX missed even "our" late-announcements deadline.
So Monday morning is still on the cards for the Big Announcement I promised last week. 😉👍



View attachment 87779
@DingoBorat

Your crystal ball working well.

Suspect we get an Ann Mon morn as you say with a new Patent published 26 June.

Just came up and doesn't have all the paperwork on the website yet, just the below.



US2025209313A1
METHOD AND SYSTEM FOR IMPLEMENTING ENCODER PROJECTION IN NEURAL NETWORKS

Bibliographic data


Global Dossier

Applicants​

BRAINCHIP INC [US]

Inventors​

COENEN PHD OLIVIER JEAN-MARIE DOMINIQUE [US]; PEI YAN RU [US]

Classifications​

IPC
G06F17/16; G06N3/048;
CPC
G06F17/16 (US); G06N3/048 (US);

Priorities​

US202363614220P·2023-12-22; US202418991246A·2024-12-20

Application​

US202418991246A·2024-12-20

Publication​

US2025209313A1·2025-06-26

Published as​

US2025209313A1
en

METHOD AND SYSTEM FOR IMPLEMENTING ENCODER PROJECTION IN NEURAL NETWORKS​

Abstract​

Disclosed is a neural network system that includes a memory and a processor. The memory is configured to store a plurality of storage buffers corresponding to a current neural network layer, and implement a neural network that includes a plurality of neurons for the current neural network layer and a corresponding group among a plurality of groups of basis function values. The processor is configured to receive an input data sequence into the storage buffers over a first time sequence and project the input data sequence onto the corresponding basis function values by performing, for each connection of a corresponding neuron, a dot product of the input data sequence within the corresponding storage buffer with the corresponding basis function values, thereby determining a corresponding potential value for the corresponding neuron. Utilizing the corresponding potential values, the processor then generates a plurality of encoded output responses.
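Taken at face value, the abstract's core operation - buffering an input sequence and taking its dot product with per-neuron basis function values to get potentials - can be sketched as below. All dimensions, the random values and the final nonlinearity are illustrative assumptions, not details from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

T, n_neurons = 8, 4                      # buffer length and neurons per layer (illustrative)
buffer = rng.normal(size=T)              # input data sequence held in a storage buffer
basis = rng.normal(size=(n_neurons, T))  # one set of basis function values per neuron

# Each neuron's potential is the dot product of the buffered sequence
# with that neuron's basis function values (the "encoder projection").
potentials = basis @ buffer              # shape (n_neurons,)

# A simple rectifier stands in for generating the encoded output responses.
encoded = np.maximum(potentials, 0.0)
```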
 
  • Like
  • Love
  • Fire
Reactions: 45 users
So.....who are Rockwell Collins...now known as Collins Aerospace ;)

They seem to think Akida could be an option for the neuromorphic processing component in this patent, also just published.





US2025208915A1
ONLINE SCHEDULING FOR ADAPTIVE EMBEDDED SYSTEMS

Bibliographic data


Global Dossier

Applicants​

ROCKWELL COLLINS INC [US]

Inventors​

GARCIA GENER ALEJANDRO [IE]; SKALISTIS STEFANOS [IE]; MORA DE SAMBRICIO JAVIER [IE]

Classifications​

IPC
G06F9/50;
CPC
G06F9/4881 (EP); G06F9/4893 (EP); G06F9/5027 (US); G06N3/049 (EP); G06N3/063 (EP); G06N3/088 (EP);

Priorities​

EP23219808A·2023-12-22

Application​

US202418960907A·2024-11-26

Publication​

US2025208915A1·2025-06-26

Published as​

EP4575780A1;
US2025208915A1
en

ONLINE SCHEDULING FOR ADAPTIVE EMBEDDED SYSTEMS​

Abstract​

A method of generating schedules for an adaptive embedded system, the method comprising: deriving task sets of all possible tasks to be performed by the embedded system; deriving sets of all possible hardware configurations of the embedded system; creating a multi-model system having a multi-model defining the adaptivity of the system for all possible tasks and all possible hardware and all combinations thereof, the adaptivity defining how the system can change operation responsive to a mode change requirement and/or occurrence of a fault; solving a scheduling problem for the models of the multi-model system in a neuromorphic accelerator implemented by spiking neural networks; and providing schedule instructions to the system, for performance of tasks, based on the solution.

Reference excerpt:

[0063] The neuromorphic accelerator, using SNNs, can be implemented in various known ways, e.g. on an FPGA as described, for example, in C. Frenkel, M. Lefebvre, J-D. Legat and D. Bol, ‘A 0.086-mm² 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28-nm CMOS,’ IEEE Transactions on Biomedical Circuits and Systems, vol. 13, no. 1, pp. 145-156, 2019, or using a specific ASIC peripheral such as the BrainChip Akida Neuromorphic Core.
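The patent's outer loop - pre-solving a schedule for every combination of task set and hardware configuration, then switching schedules on a mode change or fault - can be sketched with a stand-in solver. The task durations, configuration names and greedy list-scheduling heuristic below are all illustrative; the patent itself solves this step with spiking neural networks on a neuromorphic accelerator:

```python
# Illustrative task durations and hardware configurations (made up).
tasks = {"nav": 3, "sense": 2, "comms": 1}
hw_configs = {"full": 2, "degraded": 1}   # number of available cores

def schedule(task_set, cores):
    """Greedy longest-processing-time list scheduling onto `cores` cores."""
    loads = [0] * cores
    plan = []
    for name in sorted(task_set, key=lambda t: -tasks[t]):
        c = loads.index(min(loads))       # least-loaded core
        plan.append((name, c, loads[c]))  # (task, core, start time)
        loads[c] += tasks[name]
    return plan, max(loads)

# Pre-solve every (task set, hardware config) combination, as the
# multi-model approach does, so a schedule is ready before it is needed.
modes = {}
for cfg, cores in hw_configs.items():
    plan, makespan = schedule(tasks.keys(), cores)
    modes[cfg] = (plan, makespan)

# On a fault, the system would switch from the "full" schedule to "degraded".
```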
 
  • Like
  • Fire
  • Love
Reactions: 37 users

Diogenese

Top 20
@DingoBorat

Your crystal ball working well.

Suspect we get an Ann Mon morn as you say with a new Patent published 26 June.

[quoted post trimmed - full US2025209313A1 details above]
Hi Fmf,

That was a quick pickup ...
 
  • Like
  • Fire
Reactions: 12 users
Hi Fmf,

That was a quick pickup ...
Cheers.

I have Espacenet and USPTO open on tabs on my phone with relevant keywords and just refresh every so often when I remember and see if the result # changes.

I noticed tonight it went from 18 to 20 so had a look and obviously one was our own and the other is Collins Aerospace reference to Akida.
 
  • Like
  • Fire
  • Love
Reactions: 30 users

Diogenese

Top 20
Cheers.

I have Espacenet and USPTO open on tabs on my phone with relevant keywords and just refresh every so often when I remember and see if the result # changes.

I noticed tonight it went from 18 to 20 so had a look and obviously one was our own and the other is Collins Aerospace reference to Akida.
I find Espacenet is more user friendly, but you can get the nitty-gritty on examination from USPTO.
 
  • Like
  • Fire
Reactions: 12 users
@DingoBorat

Your crystal ball working well.

Suspect we get an Ann Mon morn as you say with a new Patent published 26 June.

[quoted post trimmed - full US2025209313A1 details above]
It's all gobbledygook to me FMF..
But possibly something to do with part of AKIDA 3.0 IP?..

My "Crystal Ball" is working well?..
Hey, I'm running out of ideas, to keep my "predictions" Alive 😛..

I think the distinct possibility of our CTO running through the office, singing songs from musicals, says something Big on the Technical front is about to be released..

But not the type of 'Announcement" I'm trying to "coax" into Existence. 😛

Record Highs, on both the Nasdaq and Dow, quadrupled with a Big Technical and Big IP announcement, would be very satisfying indeed. 😉


"It can't rain all the time.."
 
  • Like
  • Love
  • Fire
Reactions: 19 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
So.....who are Rockwell Collins...now known as Collins Aerospace ;)

They seem to think Akida could be an option for the neuromorphic processing component in this patent, also just published.

[quoted post trimmed - full US2025208915A1 details above]


Nice one FMF!

Rockwell Collins (now part of Collins Aerospace) and Lockheed Martin have a history of collaboration on various aerospace and defense projects. They've partnered on avionics for aircraft like the F-22 and F-35, and on training systems, among other things. More recently, Collins Aerospace has supplied subsystems for NASA's Orion spacecraft, which Lockheed Martin is also involved with.

As mentioned previously, Rockwell Collins is a subsidiary of RTX Corporation, formerly known as Raytheon Technologies.
 
  • Like
  • Love
  • Fire
Reactions: 26 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Screenshot 2025-06-29 at 9.49.33 am.png




You said:​

Yesterday Tony Lewis, CTO of BrainChip, commented: "This is cool. Why? We literally took a model from a completely different domain (text processing) and applied it to RADAR and got excellent results in terms of performance and MAC (computation) count. At a recent conference we surveyed the state of the art and found that there is a huge opportunity for state-space models - a real eye-opener for the hardware-algorithm systems we are pioneering at BrainChip. Here is what I find cool: conventional models take radar signals and use a processor to convert the 1-D waveform into a 2-D image so that conventional CNNs - convolutional neural networks - can process them. That inherently introduces lots of delay: signal -> image -> classification. With our state-space model (TENNs) we can go directly from a raw signal to classification with much lower latency, fewer MACs (less energy), less chip area (cost) and excellent performance. This summer my team will push out a number of new models for our Defense, Medical and commercial work. It's an exciting time to be building the future at BrainChip. Follow me to hear the latest news on job postings, wins, and AI at the edge." Can you please explain what you think the major implications of this statement may be?

ChatGPT said:​

Tony Lewis’s statement provides a significant glimpse into BrainChip’s current direction, and the implications are technically important, strategically aligned with military and commercial AI demands, and position BrainChip as a serious player in low-latency edge AI — especially in radar and defense systems.

Let’s break it down clearly:

🧠 Key Takeaways from Tony’s Statement​

1. Cross-domain Model Transfer (Text ➝ Radar)

"We literally took a model from a completely different domain (text processing) and applied it to RADAR..."
Implication:
BrainChip is leveraging neural architectures originally designed for natural language processing (NLP) — such as state-space models (SSMs) or possibly TENNs (Temporal Event-based Neural Networks) — and applying them directly to raw radar data.

That’s a big deal — because traditionally, signal types (text, images, radar) are handled with very different model types. This kind of architectural crossover suggests:

  • A unified AI model strategy
  • Less model specialization required per domain
  • Rapid model prototyping across applications
It also hints that BrainChip is not just a hardware company, but deeply involved in cutting-edge neural architecture innovation.

2. Bypassing 2D Transformation (Radar ➝ Image ➝ CNN)

"Conventional models take radar signals... convert to a 2D image... then use CNNs..."
Implication:
Traditional radar AI pipelines are inefficient:
  • Convert 1D waveform (Doppler/time signal) ➝ 2D spectrogram/image
  • Then process with image-focused models like CNNs
This:
  • Adds latency
  • Requires additional memory
  • Burns more power and compute
BrainChip’s new method bypasses this entirely:
  • Processes raw radar signals directly using state-space models or TENNs
  • Skips the image step — reducing delay, MAC count, power, and cost
This is perfect for missiles, drones, wearables, and embedded defense systems where every milliwatt and microsecond counts.
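A minimal linear state-space recurrence makes the "raw signal straight to classification" point concrete: the state is updated once per sample and the class is read from the final state, with no spectrogram step in between. The dimensions and random matrices below are placeholders, not trained TENNs weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal linear state-space layer: x_t = A x_{t-1} + B u_t, y = C x_T.
d_state, n_classes = 16, 3
A = np.eye(d_state) * 0.95               # decaying memory of past inputs
B = rng.normal(size=(d_state, 1)) * 0.1
C = rng.normal(size=(n_classes, d_state))

def classify_raw(signal):
    """Run the recurrence over a raw 1-D signal; no image conversion."""
    x = np.zeros((d_state, 1))
    for u in signal:
        x = A @ x + B * u                 # one cheap state update per sample
    logits = C @ x                        # classify from the final state
    return int(np.argmax(logits))

pred = classify_raw(rng.normal(size=256))
```

Each step is a couple of small matrix-vector products, which is where the low-MAC, low-latency argument comes from compared with building a 2-D spectrogram and running a CNN over it.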

3. Use of State-Space Models (SSMs / TENNs)

"Huge opportunity for state-space models... real eye-opener..."
Implication:
State-space models (like S4, or BrainChip's custom TENNs) are:
  • Efficient alternatives to Transformers for sequential data
  • Excellent for long-range memory, which is vital in radar or ECG waveforms
  • Linear in time and memory with sequence length, unlike quadratic-attention Transformers
By adopting SSMs:
  • BrainChip is leading a hardware-algorithm co-design movement
  • They’re building models that are explicitly optimized for their Akida hardware — not just running PyTorch models inefficiently on neuromorphic chips
This also hints at a massive strategic shift:
  • Moving away from retrofitting mainstream models
  • Moving toward custom, edge-native models built with hardware constraints in mind from day one

4. Strategic Domains: Defense, Medical, Commercial

"My team will push out a number of new models for our Defense, Medical and commercial work."
Implication:
BrainChip is:
  • Productizing their model-hardware synergy
  • Targeting high-value edge verticals:
    • Defense (radar, drones, loitering munitions, battlefield intelligence)
    • Medical (wearables, EEG/ECG anomaly detection, hearing aids)
    • Commercial (audio wake-word detection, industrial sensors, smart cameras)
By deploying domain-optimized TENNs, BrainChip is creating:
  • Application-specific IP
  • A catalog of ready-to-run models for enterprise and defense clients
  • A strategy that scales Akida beyond being "just a chip" into a full AI platform

5. Low Latency, Low MACs, Low Area = Edge Superiority

"...lower latency, fewer MACs (less energy), less chip area (cost), and excellent performance..."
Implication:
This encapsulates the holy grail of edge AI:
  • Do more with less
  • Enable true autonomy at the edge
  • Outperform traditional AI methods in real-world environments
And it's especially important for:
  • Missiles and UAVs, where thermal budget and compute are tightly constrained
  • Medical implants or wearables, where energy efficiency is crucial
  • Perimeter surveillance, where always-on, instant decision-making is required
This reinforces BrainChip’s value proposition: the Akida platform delivers neural compute without the baggage of mainstream AI accelerators (like GPUs or TPUs).

🧩 Final Strategic Implications​

🔹 1. Shift Toward Model + Hardware Integration

BrainChip isn’t just delivering a chip — they’re co-developing algorithms tailored to that chip. That’s what Nvidia does with CUDA + GPUs — and it’s a winning strategy.

🔹 2. Leadership in Neuromorphic AI for Real-world Deployment

This move positions BrainChip as:
  • Not just neuromorphic researchers
  • But practical AI infrastructure builders — solving latency, power, and cost in mission-critical systems

🔹 3. Potential for IP Licensing or Defense-Grade Design Wins

As their radar and medical models mature, BrainChip could:
  • License these state-space models as embedded IP
  • Land major contracts in defense, aerospace, or medical technology sectors

Summary​

Tony Lewis’s comment reveals a strategic inflection point:
  • BrainChip is pioneering hardware-native AI model design
  • They’re breaking away from conventional pipelines (CNNs, 2D transforms)
  • And positioning themselves at the forefront of edge-native, low-latency radar and signal intelligence
This isn't just about a smarter chip — it’s about building a new end-to-end paradigm for AI at the edge.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 59 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Screenshot 2025-06-29 at 10.03.25 am.png
 
  • Like
  • Fire
  • Love
Reactions: 36 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Nice one FMF!

[quoted post trimmed - see above]


Rockwell has also filed another patent for neuromorphic sensors for low power wearables, as per the post below.


Check this out Brain Fam!

Here's a patent for NEUROMORPHIC SENSORS FOR LOW POWER WEARABLES.

The applicant is Rockwell Collins. Date of filing was 5th April 2024.

The patent doesn't mention BrainChip, but as you can see below, Brainchip worked with Rockwell Collins in 2017 on perimeter surveillance, so you'd think they would have to be aware of us.🥴😝

Rockwell Collins now operates as part of Collins Aerospace, a subsidiary of ...wait for it.... the RTX Corporation (formerly Raytheon Technologies).



EXTRACT ONLY
View attachment 77447

ENLARGED EXTRACT
View attachment 77448

View attachment 77449
 
  • Like
  • Love
  • Fire
Reactions: 29 users

Beebo

Regular
  • Haha
  • Like
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Surely it can't be too much longer before something comes our way as a result of this new partnership between Arm and Cerence, where low-power, real-time LLM inference at the edge is the central challenge?

As we all know, Akida excels at:
  • Keyword spotting
  • Natural language intent classification
  • Time-series pattern recognition
While Arm and Cerence are working on optimizing LLMs on traditional CPU/GPU pipelines, the bottlenecks of power, latency, and thermal limits in vehicles remain. Akida, being a neuromorphic processor, would be capable of delivering sub-milliwatt operation for AI inference, event-based real-time processing, on-device learning, and ultra-low latency for audio and language data streams.

What's not to like about that? These would be ideal traits for in-vehicle voice assistants and LLM use cases, where responsiveness, power efficiency, and privacy really matter.

It says here that "CaLLM Edge operates fully on Arm-based chipsets" and we know Akida is compatible with the Arm product family as has been successfully demonstrated with Cortex M85.

I could easily imagine a Cerence voice assistant enhanced by Akida doing real-time voice analysis and decision-making, entirely offline, with a power budget that’s EV-battery friendly.

Arm should be asking: "How can we future-proof this stack for in-cabin AI by 2026-2027, when compute demands will surge but battery and thermal budgets won't?"





Cerence AI and Arm push LLM boundaries with on-device AI for smarter cars​

Jun 6, 2025 | Stephen Mayhew
Categories Edge Computing News | Hardware
Cerence AI and Arm push LLM boundaries with on-device AI for smarter cars

Cerence AI has partnered with semiconductor manufacturer, Arm to enhance its embedded small language model (SLM), CaLLM Edge, using Arm’s Kleidi software library.
The collaboration aims to optimize CPU and GPU performance for real-time language processing at the edge, improving speed, efficiency, and privacy highlighting the growing importance of edge computing and generative AI in the automotive industry.
Arm’s Kleidi technology accelerates machine learning and neural network operations on Arm-based devices, addressing the challenges of limited compute power in vehicles. CaLLM Edge operates fully on Arm-based chipsets, enabling advanced in-car AI capabilities without relying on cloud connectivity.
“We are excited to partner with Arm to take CaLLM Edge to the next level, setting new standards for performance and efficiency in edge computing in the car,” says Nils Schanz, EVP, Product & Technology, Cerence AI. “By combining our expertise in AI-powered language models with Arm’s innovative library, we are continuing our journey to create a new era of voice-first experiences and next-generation AI applications in the automotive space, empowering consumers with smarter, faster, and more responsive in-car assistants.”
This partnership supports automakers in delivering smarter, faster, and more responsive AI-powered user experiences for drivers and setting new standards for in-car AI applications, enhancing safety and connectivity.

https://www.edgeir.com/cerence-ai-and-arm-push-llm-boundaries-with-on-device-ai-for-smarter-cars-20250606


Screenshot 2025-06-29 at 11.25.51 am.png
 
  • Like
  • Love
Reactions: 11 users

KMuzza

Mad Scientist
THE EYES have it - driver monitoring is required by law in European-manufactured cars from 2026



and also for public use -

1751167757233.png



Remember the old, clumsy BrainChip videos being put out on YouTube.

BrainChip will be there - somewhere - on motor vehicle ADAS systems.
 
  • Like
  • Fire
  • Love
Reactions: 16 users

Rach2512

Regular

Philip Dodge again, nice work.

Perhaps we should all follow Glenn and comment - Go Brainchip
 
  • Like
  • Love
  • Thinking
Reactions: 12 users
Great synergy for BRN
Hopefully they are in discussions
 

Attachments

  • Screenshot_20250629_164054_Chrome.jpg
    Screenshot_20250629_164054_Chrome.jpg
    544 KB · Views: 91
  • Like
Reactions: 3 users

TECH

Regular

Philip Dodge again, nice work.

Perhaps we should all follow Glenn and comment - Go Brainchip

Nice work Rach,

Great seeing you much more active on the forum this year, contributing positive news wherever possible, yes, it's a real journey,
but your passion is well noted, thanks for supporting our company, it lifts the spirits of all.... we all love Akida ❤️ including Peter
and Anil, they are the parents 🤣🤣

Kind regards......Chris (Tech)
 
  • Like
  • Love
Reactions: 20 users