BRN Discussion Ongoing

MegaportX

Regular
Something from the Realms of New Zealand today. :rolleyes:




Reactions: 6 users
Reactions: 9 users

7für7

Top 20
Sometimes we need a friend who motivates us … in this case, Chatie


Why the Akida Cloud Could Be a Breakthrough Catalyst

1.
Bridging the Maturity Gap for Potential Customers
  • Until now, automotive, defense, or IoT customers had to physically integrate Akida chips before they could even begin testing.
  • With the Cloud, that barrier is gone – companies can test Akida within hours instead of months to see if it fits their systems.
  • This accelerates proof-of-concept phases and increases the likelihood of closing deals.

2.
Faster Entry Into Mass-Production Projects

  • Large industrial customers can now work on prototyping and production integration in parallel.
  • Example: An automaker can validate eye-tracking or sensor-fusion models in the Cloud before adjusting in-vehicle electronics.
  • This reduces risk and decision delays – in automotive, that’s a game-changer.

3.
Indirect Multiplier Effect
  • Developers and startups can try Akida in small projects (low-cost or free access), and if it works, scale to bigger contracts.
  • This model mirrors Nvidia CUDA – once it’s embedded in the developer ecosystem, adoption grows organically.

4.
A Signal to the Market
  • The Cloud is not a gimmick – it’s a confidence signal:
    • BrainChip shows the technology is mature enough to hand over for immediate use.
    • This implies internal stability, proven performance, and market readiness.
5.
Synergy With Existing Partnerships
  • Ongoing collaborations (Raytheon, ISL, Andes) can now transition to commercial stages faster.
  • Cloud-based tests can be done discreetly – NDA projects can run in the Cloud without requiring an immediate press release.

Breakthrough Potential

The Cloud essentially opens two floodgates at once:
  1. Technical entry barrier removed → more projects start.
  2. Sales cycles shortened → deals reach revenue stage faster.

If one of the major partners moves to public mass production soon, the Cloud will have been the silent enabler – making such events much more likely.
 
Reactions: 9 users

Diogenese

Top 20
We've seen the ads for those portable translators.

This mob are excited about SLMs:

Small language models could be the next big disruptor in AI translation​

Story by Special Report

Over the past five years, the translation industry has ridden a wave of AI adoption. Large language models (LLMs) have moved from niche tools to near-standard in content localisation, but their generalist nature leaves room for a new contender.

The co-founder and CEO of ASX-listed language tech company Straker (ASX:STG), Grant Straker, argues that small language models (SLMs) – purpose-built for specific industries and language pairs – are the next leap forward. ...

https://www.msn.com/en-au/technolog...S&cvid=689ac8d18b7b4f9699905359a4e481e9&ei=60

... now if they didn't need to carry that car battery around with them ...
 
Reactions: 14 users

7für7

Top 20
👁️👄👁️


Reactions: 5 users
Thank God someone posted something. I posted something and silence for 2 hours. I thought I had broken the friggin thread. Where is everyone? Bravo has an excuse. Hopefully she's out running. God knows the share price needs a kick along.

SC
 
Reactions: 12 users

7für7

Top 20
Thank God someone posted something. I posted something and silence for 2 hours. I thought I had broken the friggin thread. Where is everyone? Bravo has an excuse. Hopefully she's out running. God knows the share price needs a kick along.

SC
I just posted something to check if the forum just crashed…

GO BRAVO GO!!
 
Reactions: 5 users
Prophesee
 

Reactions: 6 users
Thank God someone posted something. I posted something and silence for 2 hours. I thought I had broken the friggin thread. Where is everyone? Bravo has an excuse. Hopefully she's out running. God knows the share price needs a kick along.

SC
BrainChip actually has to make some noise and give shareholders something to talk about..

It's sometimes a bit like a kid rummaging through their toy box in here, trying to find an old toy they'd kinda "forgotten" about, to play with..

Like Hollywood rehashing an old movie, because they don't have the imagination to think up a good original storyline.

We need new material damn it!

In the words of a long-gone, young, bed-wetting, wet-behind-the-ears poster from another time.

"FEED THE RATS!"


Reactions: 9 users

The next future technology with BrainChip Akida, developed by the start-up Saluts. This platform enables real-time autonomous decision-making in space and the deep sea – but read it for yourself.

#SALUTS #NEROnaut #Brainchip #Akida #futuretechnology #SNN #neuromorphic
Mohamed Sobhy Fouda
Saluts builder | Astroprenure | Peace Promoter 📍 Planet Earth 🌍
11 hours ago • Edited


Got Autonomy? Let’s talk at booth #2436, Small Satellite Conference (SmallSat) 10-13th of August, 2025.

We promise, we deliver 🚀

🚀 Introducing NEROnaut™ — Neuro-Evolving Robot-on-Chip

At Saluts, we believe true autonomy begins at the chip level.
That’s why we built NEROnaut™, our embedded AI brain designed for edge intelligence in the harshest environments — from deep space to deep sea.

🧠 What is NEROnaut™?
A neuro-evolving computing core that learns, adapts, and optimizes in real time — without relying on constant connectivity. It’s engineered to run autonomous guidance, navigation, control, and data processing where latency, bandwidth, or reliability would cripple traditional systems.

hashtag#WeNeedMoreSpace

🔹 Key Capabilities:

- On-chip AI inference & evolution — adapting mission logic mid-operation

- Multi-domain compatibility — land, sea, air, space

- Fault-tolerant autonomy — resilient under GNSS-denied or communication-jammed conditions

- Sensor fusion & real-time decision making — from radar to vision-based navigation

- Ultra-low-power architecture — mission endurance without constant ground control

📄 Technical details attached – dive into the specs, interfaces, and application domains.
With NEROnaut™, mission control becomes mission autonomy.
One chip. One brain. Infinite adaptability.

📩 DM us to discuss integrations and autonomy.

#Saluts #NEROnaut #RobotOnChip #EdgeAI #AutonomousSystems #SpaceTech


Reactions: 32 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
The good news is: This article signals that neuromorphic computing is shifting from novelty to reality, with real-world products in development and adoption on the rise.

The bad news is: Sadly Tom’s Guide only highlights Innatera and doesn't mention BrainChip at all. :cry:

From a tech perspective, I believe BrainChip still has unique IP and an arguably more mature solution. But in a fast-emerging category like neuromorphic computing, public perception moves quickly, and if BrainChip isn’t actively owning the media conversation, Innatera (or another competitor) could grab the crown before the market fully matures.

Giddy up BrainChip!!!! 🏇








 
Reactions: 12 users

Diogenese

Top 20
The good news is: This article signals that neuromorphic computing is shifting from novelty to reality, with real-world products in development and adoption on the rise.

The bad news is: Sadly Tom’s Guide only highlights Innatera and doesn't mention BrainChip at all. :cry:

From a tech perspective, I believe BrainChip still has unique IP and an arguably more mature solution. But in a fast-emerging category like neuromorphic computing, public perception moves quickly, and if BrainChip isn’t actively owning the media conversation, Innatera (or another competitor) could grab the crown before the market fully matures.

Giddy up BrainChip!!!! 🏇







Hi Bravo,

It would be nice to have our names up in lights, but we're not really selling to the public, except those lucky enuf to have inherited their grandparents' 2nm chip foundry.

Innatera has a lot of analog patents, eg:

WO2022073946A1 ADAPTATION OF SNNS THROUGH TRANSIENT SYNCHRONY 20201005



[0003] SNNs encode information in the form of one or more precisely timed (voltage) spikes, rather than as integer or real-valued vectors. Computations for inference (i.e. inferring the presence of a certain feature in an input signal) may be effectively performed in the analog and temporal domains. Consequently, SNNs are typically realized in hardware as full-custom mixed-signal integrated circuits, which enables them to perform inference functions with several orders of magnitude lower energy consumption than their deep neural network (DNN) counterparts, in addition to having smaller network sizes.
[0006] Neuromorphic SNN emulators (205), e.g. systems containing electronic analog/mixed- signal circuits that mimic neuro-biological architectures present in the nervous system, form distributed (in a non- von Neumann sense, i.e. computational elements and memory are co- localized, resulting in memory storage and complex nonlinear operations being simultaneously performed by the neurons in the network), parallel, and event-driven systems offering capabilities such as adaptation (including adaptation of physical characteristics, firing frequency, homeostatic (behavioural) regulation, et cetera), self-organization, and learning.
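The spike-timing encoding described in the excerpt can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron in plain Python – a generic textbook sketch, not Innatera's mixed-signal circuit:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: information is carried by
# *when* the membrane potential crosses threshold, not by the value itself.
def lif_spike_times(input_current, threshold=1.0, leak=0.9, dt=1.0):
    """Return the time steps at which the neuron emits a spike."""
    v = 0.0
    spikes = []
    for t, i in enumerate(input_current):
        v = leak * v + i * dt   # leaky integration of the input
        if v >= threshold:      # threshold crossing -> spike
            spikes.append(t)
            v = 0.0             # reset membrane potential after spiking
    return spikes

# A stronger input drives earlier and more frequent spikes, so the input
# amplitude ends up encoded in the spike timing:
weak = lif_spike_times([0.2] * 20)    # spikes at steps 6 and 13
strong = lif_spike_times([0.5] * 20)  # spikes at steps 2, 5, 8, 11, 14, 17
```

Analog hardware like Innatera's runs this dynamic in continuous time with physical circuit elements, which is where the energy savings come from – but it is the same spike-timing principle.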

This would probably exclude them from accuracy equivalent to 8-bit and higher. Even 4-bit may be a stretch.

They are using it with the Prophesee GenX320 – Prophesee's low-fi chip, 100k pixels (320×320).

Prophesee have a range of DVS sensors, made with Sony using their 3D-stacked process:
https://www.prophesee.ai/buy-event-based-products/

They are also in a few cameras.

It wasn't that long ago Prophesee were seeking bankruptcy protection, so good luck to them.
 
Reactions: 10 users
 

7für7

Top 20

Bro is like… Tell me you’re invested in the most underrated stock of the last 50 years without telling me you’re invested in the most underrated stock of the last 50 years…

Or… Tell me you’re talking about Akida without telling me you’re talking about Akida.

Because the latest papers highlight… SNNs for massive energy efficiency… event-based processing for instant response… edge and cloud hybrid architectures for scalable deployment…

This isn’t about hype …. it’s about a global trend in AI design….and Akida just happens to be one of the few commercial platforms already delivering in this domain.

WOHOOOOOOO
 
Reactions: 11 users

itsol4605

Regular
With BrainChip, too!!

 
Reactions: 2 users

TopCat

Regular
Maybe 🤔, maybe not. Who knows



Arm neural technology is an industry first, adding dedicated neural accelerators to Arm GPUs, bringing PC-quality, AI powered graphics to mobile for the first time – and laying the foundation for future on-device AI innovation

On-device AI is transforming workloads everywhere, from mobile gaming to productivity tools to intelligent cameras. This is driving demand for stunning visuals, high frame rates and smarter features – without draining battery or adding friction. Announced today at SIGGRAPH, Arm neural technology is an industry first, bringing dedicated neural accelerators to Arm GPUs from 2026. This takes the performance of GPUs for graphics rendering to new heights, delivering up to 50% GPU workload reduction for today’s most intensive mobile content, starting with mobile gaming. And this is just the beginning – the availability of this new technology lays the foundations for the industry to deliver even more on-device AI innovation in the future.
 
Reactions: 9 users
With BrainChip, too!!

Reactions: 9 users

itsol4605

Regular
Reactions: 21 users

Tothemoon24

Top 20


Keeping It Local: Bringing Generative AI to the Intelligent Edge​



Generative AI is no longer confined to the cloud. With NXP’s eIQ® GenAI Flow, developers can now run large language models (LLMs) – like Llama and Qwen – directly on embedded edge devices securely, efficiently and close to the data. This paradigm shift unlocks new opportunities for real-time intelligence across industries, from automotive to industrial automation.
Built as a complete software deployment pipeline, eIQ GenAI Flow simplifies the once-daunting task of implementing generative AI models on power- and compute-constrained systems. It combines the latest model optimization techniques like quantization with hardware acceleration from NXP’s eIQ Neutron NPU to make GenAI practical and performant—right at the edge.

Smarter AI, Locally Deployed​

At its core, GenAI Flow helps overcome the traditional barriers of running advanced models in embedded environments. The pipeline already enables today’s most powerful open language models, with support for multimodal and vision-language models (VLMs) soon. GenAI Flow provides the necessary optimizations out-of-the-box for real-time execution on application processors like the i.MX 95—the kind of performance needed for conversational AI, physical AI and more.
GenAI is moving from the cloud to the edge, so what does that mean for embedded developers? Learn more by listening to our EdgeVerse Techcast episode on Apple Podcasts, Spotify or YouTube.
By using accuracy-preserving quantization techniques such as integer 8 and 4 (INT8 and INT4) precision, we can fully leverage the Neural Processing Unit (NPU) for inference acceleration. Using GenAI Flow dramatically improves response speed and power efficiency on-device. For example, time to first token (TTFT)—a key metric for any GenAI application—can be reduced from 9.6 seconds on an Arm Cortex CPU (Float32 precision) to less than 1 second on the Neutron NPU with INT8 quantization. This enables captivating, real-time AI experiences, without requiring power-hungry servers or cloud infrastructure.
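NXP's eIQ toolchain is proprietary, but the INT8 idea behind those numbers can be sketched generically: store weights as 8-bit integers plus a scale factor, trading a tiny reconstruction error for 4x less storage and memory traffic. A NumPy sketch of symmetric per-tensor quantization (illustrative only, not NXP's implementation):

```python
import numpy as np

# Symmetric per-tensor INT8 quantization: the generic technique behind
# speedups like the TTFT improvement quoted above.
def quantize_int8(w):
    scale = np.abs(w).max() / 127.0                 # map max |weight| to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=(256, 256)).astype(np.float32)  # FP32 weights
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# 4x smaller storage; worst-case error is half a quantization step,
# i.e. at most ~0.4% of the largest weight.
rel_err = np.abs(w - w_hat).max() / np.abs(w).max()
```

On an NPU, the integer matrix multiplies are also far cheaper than FP32, which is where the latency drop comes from; the scale factor restores the result to floating point afterwards.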
Generative AI is driving innovations at the edge. GenAI Flow, included with NXP's eIQ Toolkit, makes enabling Gen AI at the edge simple and secure.
GenAI Flow also supports small language models (SLMs), which are lighter, yet still capable of delivering high-quality results. The pipeline offers flexible execution across central processing unit (CPU), NPU or a hybrid configuration, allowing developers to tune performance based on their specific product needs.

Adding Context with RAG​

A defining feature of GenAI Flow is the built-in support for retrieval-augmented generation (RAG). This alternative to model fine-tuning allows LLMs to access domain-specific or private data sources—such as device and service manuals, internal PDFs and equipment maintenance logs—without having to retrain the original model. RAG injects the relevant external knowledge as a vector database stored on the edge device, enabling highly contextual, grounded responses that can eliminate an AI’s hallucination problem and prevent certain errors in judgement.
RAG is particularly powerful for edge use cases because all data processing happens locally. This protects sensitive information while delivering dynamic, on-demand AI responses. Developers can simply turn a new document into a highly compact, LLM-friendly database and the model immediately adopts the additional context—no retraining required! This efficiency alone can save millions of dollars and energy spent on numerous iterations of GenAI fine-tuning in data centers.
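The retrieval step behind this can be sketched in a few lines. The bag-of-words "embedding" below is a toy stand-in for a real embedding model, and the documents are invented examples – the point is only that the lookup runs entirely on-device:

```python
import math
from collections import Counter

# Toy RAG retrieval: index local documents as vectors, find the best match
# for a query, and prepend it to the LLM prompt as grounding context.
docs = [
    "To reset the brake controller, hold the BRK button for 5 seconds.",
    "Tire pressure should be 36 psi front and 34 psi rear.",
    "The infotainment system supports voice commands in 12 languages.",
]

def embed(text):
    # Stand-in for a real sentence-embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

index = [(embed(d), d) for d in docs]   # the on-device "vector database"

def retrieve(query, k=1):
    q = embed(query)
    return [d for _, d in sorted(index, key=lambda e: -cosine(e[0], q))[:k]]

question = "what tire pressure should I use?"
context = retrieve(question)[0]          # picks the tire-pressure document
prompt = f"Context: {context}\nQuestion: {question}"
```

In a real pipeline the vectors come from an embedding model and the assembled prompt goes to the local LLM, but the shape is the same: no document text leaves the device, and adding a new manual just means embedding and indexing it – no retraining.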

Real-World Impact: From Cars to Robots​

GenAI Flow is already being used across multiple industries where low-latency performance and data privacy are critical.
In automotive, AI-powered infotainment systems can respond to natural voice commands by referencing service manuals embedded in the vehicle. This creates a seamless, hands-free experience without the typical connectivity requirements.
In healthcare, touchless AI interfaces let clinicians securely access procedure or patient data using voice prompts—an ideal solution for reducing physical contact and contamination risk in sensitive environments.
AICHI, the AI controller for health insights, securely collects and analyzes multimodal health and other sensor data in real time, detecting early anomalies and enabling proactive, personalized care.
In mobile robotics, generative AI models interpret written instructions and visual inputs—using optical character recognition (OCR) and RAG—to take context-aware actions. These systems move beyond basic automation and into intelligent interaction between humans and environments.
This 3D perception sensor fusion demo showcases trusted spatial perception at the edge, operating in dynamic and uncertain environments.
In industrial automation, AI assistants help technicians troubleshoot machine issues using real-time sensor data and maintenance documentation—all processed locally, even in remote or low-bandwidth settings.
Across these scenarios, GenAI Flow offers developers a powerful and privacy-conscious framework for building intelligent edge solutions.

What’s Next for GenAI at the Edge?​

The next evolution of GenAI at the edge is multimodal and agentic. Future systems will blend together voice, vision and language inputs to create richer, more intuitive user experiences. With GenAI Flow, this convergence is already underway, enabling unified edge pipelines that can reason and act from a combination of input types.
There’s also a strong focus on continuing to optimize edge AI performance—both in scaling up support for larger models and by making smaller models even faster. This includes advancements in quantization, execution flexibility and support for increasingly compact LLM architectures.
As AI systems become more adaptive and locally responsive, access to the best tooling becomes ever more critical. GenAI Flow is designed with scalability in mind, helping developers integrate today’s rapidly evolving AI capabilities into products across microprocessor unit (MPU) platforms and potentially even into future microcontroller unit (MCU)-class devices.
 
Reactions: 16 users