BRN Discussion Ongoing

genyl

Member
The big question for us longs now is whether the company can keep going with the massive operating costs and cash burn without diluting our shares to nothing in the long run. They will probably need another capital raise to keep things going if contracts aren't knocking on the door. What is your perspective on this?
(I'm Danish, sorry for any grammar mistakes)
 
  • Like
Reactions: 8 users

Baneino

Regular
The big question for us longs now is whether the company can keep going with the massive operating costs and cash burn without diluting our shares to nothing in the long run. They will probably need another capital raise to keep things going if contracts aren't knocking on the door. What is your perspective on this?
(I'm Danish, sorry for any grammar mistakes)

The crucial question for us long-term investors in BrainChip is whether the company can continue despite high operating costs and noticeable cash burn, without diluting our shares unnecessarily. I am optimistic about it. Of course, a deep-tech company at this stage requires capital – this is completely normal in the commercialization of high technology. What matters is how it is managed. BrainChip is debt-free, which provides a lot of room for maneuver. At the same time, the management is very aware of the capital structure and has repeatedly emphasized that they aim to act strategically and in partnership – not blindly through dilution.

I find the integration of Akida into the Nvidia TAO Toolkit particularly positive. This is a strong signal. Developers who work with Nvidia – and there are many in the AI space – can now directly and easily port their models to Akida. This significantly lowers entry barriers and positions BrainChip in an excellent spot within the global AI developer ecosystem. Even though it's not a formal partnership, this visibility within the Nvidia environment is an important step towards market acceptance.

I am convinced that once the first major licensing or partnership deal comes through, the entire perception will change. The technological foundation is there – now it's all about timing and execution, if I may say so as a German Hanoverian 😅. And by the way, your English 😁 was perfectly understandable – and your contribution shows that you see the bigger picture. Thank you for your thoughts!
 
  • Like
  • Fire
  • Love
Reactions: 23 users
The big question for us longs now is whether the company can keep going with the massive operating costs and cash burn without diluting our shares to nothing in the long run. They will probably need another capital raise to keep things going if contracts aren't knocking on the door. What is your perspective on this?
(I'm Danish, sorry for any grammar mistakes)
1753347896342.gif
 
  • Haha
Reactions: 1 users
Tomorrow will be the perfect moment for a nice, positive, price-sensitive announcement IMO!

Not today… tomorrow… yeah… it would be a nice start to the weekend

send crazy ex-girlfriend GIF
 
  • Love
  • Fire
  • Haha
Reactions: 3 users

GStocks123

Regular
Worth the read!





Neuromorphic Computing Advantages​

Moving to Spiking Neural Networks (SNNs) represents a fundamental shift from conventional neural processing. In contrast to standard neural networks that process continuous values, SNNs use discrete spike events to encode and transmit information. This approach yields sparse, event-driven computation that greatly reduces energy use while preserving diagnostic accuracy.
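As a rough, hypothetical illustration of the event-driven idea described above (not from the paper), here is a tiny numpy sketch: continuous activations are thresholded into sparse spike events, and only the non-zero positions need to be touched by downstream hardware. The array size and threshold are made-up assumptions.

```python
import numpy as np

# Toy illustration of event-driven sparsity (not the paper's actual encoder):
# continuous activations are turned into sparse binary spike events, and
# downstream work is only done where spikes occur.

rng = np.random.default_rng(0)
activations = rng.random((8, 8, 32))          # hypothetical feature map, values in [0, 1)

threshold = 0.8                               # hypothetical spiking threshold
spikes = (activations >= threshold).astype(np.uint8)

sparsity = 1.0 - spikes.mean()
print(f"spike events: {spikes.sum()} of {spikes.size} ({sparsity:.1%} sparse)")

# An event-driven processor only touches the non-zero entries,
# which is where the energy savings come from.
event_indices = np.argwhere(spikes)
print(f"positions to process: {len(event_indices)}")
```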

The deployment platform, BrainChip’s Akida (ASX:BRN;OTC:BRCHF) neuromorphic processor, natively supports SNN operations and enables on-chip incremental learning. This capability allows the system to adapt to new patient data without requiring complete model retraining, aligning with clinical workflows that continuously encounter new diagnostic cases.

Spike-Compatible Architecture Design​

The architecture incorporates specific design elements that facilitate seamless conversion from standard neural networks to spike-based processing. The spike-compatible feature transformation module employs separable convolutions with quantization-aware normalization, ensuring all activations remain within bounds suitable for spike encoding while preserving diagnostic information.
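For a concrete picture of what such a block might look like, here is a minimal Keras-style sketch (assuming TensorFlow): a separable convolution followed by normalization and a bounded, clipped-ReLU activation so that values stay in a range that is easy to quantize or spike-encode later. The layer sizes and the ReLU(6) bound are illustrative assumptions, not the actual QANA module.

```python
import tensorflow as tf
from tensorflow.keras import layers

def spike_compatible_block(x, filters=64):
    """Separable convolution, normalization, then a bounded activation.

    Keeping activations within a fixed range (here via ReLU(max_value=...)) is one
    simple way to make them suitable for later spike/quantized encoding; the actual
    QANA module may differ.
    """
    x = layers.SeparableConv2D(filters, kernel_size=3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU(max_value=6.0)(x)   # bounded activations ease quantization
    return x

inputs = tf.keras.Input(shape=(224, 224, 3))
outputs = spike_compatible_block(inputs)
model = tf.keras.Model(inputs, outputs)
model.summary()
```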

The Squeeze-and-Excitation blocks implement adaptive channel weighting through a two-stage bottleneck mechanism, providing additional regularization particularly beneficial for small, imbalanced medical datasets. The quantized output projection produces SNN-ready outputs that can be directly processed by neuromorphic hardware without additional conversion steps.
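For reference, a generic Squeeze-and-Excitation block with the usual two-stage bottleneck looks roughly like this in Keras; the reduction ratio and input shape are assumptions, and the paper's exact variant may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers

def squeeze_excite(x, reduction=8):
    """Standard Squeeze-and-Excitation: global pooling, a two-stage bottleneck,
    then per-channel rescaling. The reduction ratio is illustrative."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)                          # squeeze
    s = layers.Dense(channels // reduction, activation="relu")(s)   # bottleneck stage 1
    s = layers.Dense(channels, activation="sigmoid")(s)             # bottleneck stage 2
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])                                # excite: reweight channels

inputs = tf.keras.Input(shape=(56, 56, 64))
outputs = squeeze_excite(inputs)
model = tf.keras.Model(inputs, outputs)
model.summary()
```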

Comprehensive Performance Validation​

Experimental validation was conducted on both the HAM10000 public benchmark dataset and a real-world clinical dataset from Hospital Sultanah Bahiyah, Malaysia. The clinical dataset comprised 3,162 dermatoscopic images from 1,235 patients, providing real-world validation beyond standard benchmarks.

On HAM10000, QANA achieved 91.6% Top-1 accuracy and 82.4% macro F1 score, maintaining comparable performance on the clinical dataset with 90.8% accuracy and 81.7% macro F1 score. These results demonstrate consistent performance across both standardized benchmarks and clinical practice conditions.

The system showed balanced performance across all seven diagnostic categories, including minority classes such as dermatofibroma and vascular lesions. Notably, melanoma detection achieved 95.6% precision and 93.3% recall, critical metrics for this potentially life-threatening condition.

Hardware Performance and Energy Analysis​

When deployed on Akida hardware, the system delivers 1.5 millisecond inference latency and consumes only 1.7 millijoules of energy per image. These figures represent reductions of over 94.6% in inference latency and 98.6% in energy consumption compared to GPU-based CNN implementations.
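As a quick sanity check of those percentages (my own back-of-envelope arithmetic, not figures from the article), the implied GPU baseline would be roughly 28 ms and 120 mJ per image:

```python
# Back-of-envelope check of the quoted figures (illustrative only).
akida_latency_ms = 1.5
akida_energy_mj = 1.7

latency_reduction = 0.946
energy_reduction = 0.986

implied_gpu_latency_ms = akida_latency_ms / (1 - latency_reduction)   # ~27.8 ms
implied_gpu_energy_mj = akida_energy_mj / (1 - energy_reduction)      # ~121 mJ

print(f"implied GPU latency ≈ {implied_gpu_latency_ms:.1f} ms")
print(f"implied GPU energy  ≈ {implied_gpu_energy_mj:.0f} mJ per image")
```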

Comparative analysis against state-of-the-art architectures converted to SNNs showed QANA’s superior performance across all metrics. While conventional architectures experienced accuracy drops of 3–7% after SNN conversion, QANA maintained high accuracy through its quantization-aware design principles.
 
  • Like
  • Fire
Reactions: 21 users

GStocks123

Regular

And the paper published July 2025 :). https://arxiv.org/abs/2507.15958
 
Last edited:
  • Like
  • Fire
Reactions: 7 users

Tothemoon24

Top 20

AI-Powered Radar Brings Instant Threat Detection to the Sea​

Lockheed Martin has trialed a compact system that adapts in real time and re-tasks itself mid-mission.
Lockheed Martin successfully demonstrated technology to automatically recognize targets using an AI-powered Synthetic Aperture Radar. Image: Lockheed Martin
Published on 18 July 2025

AUTHOR​

Giulia Bernacchi




In trials off the US West Coast, teams from Lockheed Martin’s AI Center, Skunk Works, and Rotary and Mission Systems demonstrated the system’s ability to detect and classify targets in near real-time.

Machine learning tools enabled SAR to retrain and adapt on the fly, fine-tuning performance as conditions changed.

With autonomous sensor control, the radar could re-task itself mid-mission, shifting focus without human input.

The system ran entirely on compact, low-power hardware, without the need for large ground stations or cloud computing, proving its potential for fast, field-ready deployment.

“This is a major leap for harnessing AI to help enhance situational awareness and decision-making capabilities, with unparalleled threat identification across extended ranges and all-weather conditions,” stated John Clark, senior VP of technology and strategic innovation at Lockheed Martin.

SAR Development​

SAR technology is commonly used to detect ships at sea, but typically requires human analysts to interpret the images.

With AI, it can now distinguish civilian from military vessels without the need for manual review.

Testing continues this year to refine sensor integration and boost maritime readiness.

Data from the trials will also feed into other Lockheed Martin autonomous platforms, including collaborative combat aircraft and integrated systems.
 
  • Like
  • Fire
  • Thinking
Reactions: 21 users

manny100

Top 20
Our US AFRL contract with Raytheon - RTX.
" This approach enables real-time processing with significantly reduced power consumption, allowing for enhanced activity discrimination and object identification capabilities in radar systems. Brightfield added: “To translate this into generally understandable terms, the reduced power consumption enables deployment on very small platforms like drones or remote outposts or fencing, and enhanced activity discrimination allows detection and flight path of much smaller platforms, say for example an adversarial drone.”
AKIDA will be deployed on drones as well as being able to detect and track the flight path of adversarial drones.
It's easy to see a huge demand for this.
 
  • Like
  • Love
  • Fire
Reactions: 24 users

manny100

Top 20

Yes—It’s a Major Leap Over Conventional Radar​

Existing military radars excel at detection and coarse classification, but the Akida/Raytheon neuromorphic approach adds real-time, on-device object identification with far lower size, weight, power, and cost (SWaP-C).

Recognizing Enemy Drones with Akida-Powered Neuromorphic Radar​

Yes. By embedding micro-Doppler signature analysis onto the Akida neuromorphic processor, radars can detect and classify small, fast-moving platforms—such as adversarial drones—in real time at the edge.

  • Unique rotor-blade and propeller motions induce characteristic Doppler shifts
  • Akida’s event-driven inference isolates those shifts, distinguishing drones from birds, helicopters, or ground vehicles
  • All processing occurs on-chip in milliwatts of power, enabling deployment on compact radar pods or UAVs
The BrainChip/Raytheon-RTX/AFRL project has the potential to be absolutely huge.
Applications include Air Force and Navy then flowing into commercial flight and marine.
 
  • Like
  • Fire
  • Love
Reactions: 23 users

manny100

Top 20

AI-Powered Radar Brings Instant Threat Detection to the Sea​

Lockheed Martin has trialed a compact system that adapts in real time and re-tasks itself mid-mission.
Good pick up, sounds like AKIDA may be involved given our links with Lockheed Martin.
" With AI, it can now distinguish civilian from military vessels without the need for manual review."
 
  • Like
  • Fire
  • Wow
Reactions: 19 users

Frangipani

Top 20
No doubt Frontgrade Gaisler will be spruiking GR801 - their first neuromorphic AI solution for space, powered by Akida - during their upcoming Asia Tour through India, Singapore and Japan:


View attachment 88369

First stop: India 🇮🇳



D2826860-D596-4CFD-87D1-816FA666111F.jpeg






54901755-B84C-4ABB-AC29-F3DCA8E914D4.jpeg




FECB75F2-4A27-4D6F-818B-7EB4FC32BDF1.jpeg




… although pitching the benefits of Akida to ISRO is somewhat like preaching to the converted… 😉

Not only ESA and NASA are interested in using Akida for space research - ISRO, the Indian Space Research Organisation is, too!

Satellite Technology

D Area Electro-Optical Sensor Technology (SAC)
D4 Sub Area Electronics System Design and Development (SAC)
D4.15 Neuromorphic computing-Low complexity Artificial Intelligence (SAC)




View attachment 85089 View attachment 85092 View attachment 85090 View attachment 85093
View attachment 85095



View attachment 85096



7B6C5DF7-066F-4E04-AB41-35485C8BC43C.jpeg



Next stop: Singapore 🇸🇬


EF3448F1-F937-4CA0-9FF8-1584456D6301.jpeg
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 35 users

Diogenese

Top 20

And the paper published July 2025 :). https://arxiv.org/abs/2507.15958

The authors' associations include UWA and Adelaide Uni, as well as NW Polytechnical Uni, Shaanxi:

Quantization-Aware Neuromorphic Architecture for Efficient Skin Disease Classification on Resource-Constrained Devices

Haitian Wang∗†, Xinyu Wang† , Yiren Wang† , Karen Lee† , Zichen Geng† , Xian Zhang† , Kehkashan Kiran∗ , Yu Zhang∗† , Bo Miao‡

∗Northwestern Polytechnical University, Xi’an, Shaanxi 710129, China
†The University of Western Australia, Perth, WA 6009, Australia
‡Australian Institute for Machine Learning, University of Adelaide, SA 5005, Australia
 
  • Like
  • Fire
Reactions: 16 users

Rach2512

Regular
Another one from Ai Labs.

 
  • Like
  • Fire
  • Love
Reactions: 17 users

Cardpro

Regular
Yes, we all know where you're coming from. It is frustrating: we either accept that this is how it's going to be for at least another 12 months, or we stop trusting our Board to deliver on all the alleged interaction that is apparently taking place. It's not easy to process when your brain is continually throwing up negatives.

You can always sell your shares and accept a loss, or appreciate that the technology is top class and that the market will finally realize that BrainChip's Akida is market-changing, disruptive technology that will benefit millions of people worldwide, especially in healthcare and through the breakthrough capabilities Akida will bring to the space industry. We are fast approaching an intersection that nobody wants to enter, and once again it's called change, or more to the point, disruptive technology. Our technology, which Peter first started researching more than 25 years ago, is finally approaching that point in time (within 5 years) where everything he told me, and many others, will be proven true.

Maybe IBM or Intel know something that our company doesn't. Yes, quantum and its qubits may be a threat further down the road, but the next port of call for humanity is Spiking Neural Networks, and I'm referring to "native SNNs".

Finally, I personally believe that our current partners, like the AFRL and RTX, know whose technology is respectfully termed SOTA.

Just my opinion...either way, it's not right nor wrong........take it easy.........Tech x
Look, telling someone to “just sell and move on” when they raise valid concerns isn’t just dismissive, it misses the whole point. Most of us who are still here chose to be long-term holders because we believe in the tech and the vision. We’re not flipping shares like day traders. We're backing something we thought would change the game, and many of us are deeply invested emotionally, not just financially. It’s easy to talk about disruptive potential, five-year horizons, and “trust the board” narratives. But after years of watching major partners come and go with near zero revenue to show for it, you can’t blame people for finally questioning the execution. Feedback and excitement don’t pay salaries or fund R&D, revenue does - or in our case, hard earned money of mum and dad investors' savings minus the fees and profits for LDA capital. And frankly, brushing off concerns by suggesting people give up and walk away sounds like something someone would do if they gave up on their own team or friends the moment things got hard. That’s not loyalty. That’s avoidance. We want to see this succeed... not just in theory, but in practice. Nobody’s asking for instant results, just real transparency and a path to monetisation that doesn’t require blind faith. It's not negativity ... it's accountability.

Take it easy too. But let’s stop pretending skepticism is the enemy and this ship is sailing perfectly with top class captain - it's far from it IMO....
 
  • Like
  • Fire
  • Love
Reactions: 18 users

jrp173

Regular
Look, telling someone to “just sell and move on” when they raise valid concerns isn’t just dismissive, it misses the whole point. Most of us who are still here chose to be long-term holders because we believe in the tech and the vision. We’re not flipping shares like day traders. We're backing something we thought would change the game, and many of us are deeply invested emotionally, not just financially. It’s easy to talk about disruptive potential, five-year horizons, and “trust the board” narratives. But after years of watching major partners come and go with near zero revenue to show for it, you can’t blame people for finally questioning the execution. Feedback and excitement don’t pay salaries or fund R&D, revenue does - or in our case, hard earned money of mum and dad investors' savings minus the fees and profits for LDA capital. And frankly, brushing off concerns by suggesting people give up and walk away sounds like something someone would do if they gave up on their own team or friends the moment things got hard. That’s not loyalty. That’s avoidance. We want to see this succeed... not just in theory, but in practice. Nobody’s asking for instant results, just real transparency and a path to monetisation that doesn’t require blind faith. It's not negativity ... it's accountability.

Take it easy too. But let’s stop pretending skepticism is the enemy and this ship is sailing perfectly with top class captain - it's far from it IMO....


Good points Cardpro.

Telling shareholders to “just sell and move on” because they question management is exactly the kind of attitude that allows poor governance to go unchecked.


I didn’t invest in this company to sit back and watch in silence while valid concerns are ignored. If something doesn't add up—whether it's poor communication, under-delivery, or lack of accountability—then asking questions is not only fair, it's absolutely essential.

Blind loyalty helps no one. If shareholders don’t speak up, management has no pressure to improve. So I won’t be “moving on” just because it makes some people uncomfortable. I care about my investment, and part of that means pushing for better—not pretending everything’s fine when it clearly isn’t.
 
  • Like
  • Fire
  • Haha
Reactions: 8 users

Frangipani

Top 20
This press release on 11 July 2025 got me wondering whether Tata Elxsi was first introduced to BrainChip by Synopsys.

See the screenshot below showing a picture of BrainChip's Akida in Synopsys's "Corporate Overview for Investors May 2022"

BrainChip and Tata Elxsi officially announced their partnership on August 28, 2023, when Tata Elxsi joined BrainChip’s Essential AI ecosystem to integrate Akida neuromorphic technology into medical and industrial applications.

Tata’s continued collaboration with Synopsys (2025) builds upon their existing relationship with BrainChip, completing a triangle of opportunity IMO.

Synopsys + Tata could potentially model and test complex ECUs before production which is critical for SDV.
BrainChip + Tata could potentially allow these ECUs and future ECUs to embed real-time, energy-efficient AI inference.

IMO. DYOR.



Press Releases​

Tata Elxsi and Synopsys Collaborate to Accelerate Software-Defined Vehicle Development through Advanced ECU Virtualization Capabilities​

Date: Jul 11 2025
Integrated capabilities aim to simplify and speed software development and testing to help reduce related costs and de-risk production timelines
Bengaluru, India – July 11, 2025 — Tata Elxsi, a global leader in design and technology services, today announced the signing of a Memorandum of Understanding (MoU) with Synopsys, a leader in silicon to systems design solutions, to collaborate to deliver advanced automotive virtualization solutions. The MoU was signed at the SNUG India 2025 event in Bengaluru by senior leaders from both companies.
The collaboration will provide customers pre-verified, integrated solutions and services that make it easy to design and deploy virtual electronic control units (vECUs), a cornerstone technology critical for efficient software development and testing in today’s software-defined vehicles. The collaboration brings together Tata Elxsi’s engineering capabilities in embedded systems and integration with Synopsys’ industry-leading virtualization solutions that are used by more than 50 global automotive OEMs and Tier 1 suppliers to help reduce development complexity and cost, improve quality of software systems, and de-risk vehicle production timelines.
Together, the companies are already collaborating on programs with several global customers to enable vECUs, as well as software bring-up, board support package (BSP) integration, and early-stage software validation. These solutions are being deployed across vehicle domains such as powertrain, chassis, body control, gateway, and central compute, helping customers simulate real-world scenarios, validate software early, and reduce reliance on physical prototypes.
Through the collaboration, Synopsys and Tata Elxsi will further explore opportunities to scale and accelerate the deployment of electronics digital twins for multi-ECU and application specific systems.
6870a43cc4797-vxHohRYxjp.jpg

“Our partnership with Synopsys reflects a future-forward response to how vehicle development is evolving. As OEMs move away from traditional workflows, there is growing demand for engineering services that are tightly integrated with virtualization tools. This strategic collaboration enables us to jointly address that shift with focus, flexibility, and domain depth,” said Sundar Ganapathi, Chief Technology Officer of Automotive, Tata Elxsi.
“The automotive industry’s transformation to software-defined vehicles requires advanced virtualization capabilities from silicon to systems. Our leadership enabling automotive electronics digital twins, combined with Tata Elxsi’s engineering scale and practical experience operationalising automotive system design, will simplify the adoption of virtual ECUs and thereby accelerate software development and testing to improve quality and time to market,” said Marc Serughetti, Vice President, Synopsys Product Management & Markets Group.





Reminder - #28,573

View attachment 88661

IMO it is much more likely that there was cross-pollination within the Tata Group of companies, in this case between Tata Consultancy Services and Tata Elxsi (in the spirit of the ‘One Tata’ concept of leveraging synergies, see below), especially given TCS Research have been working with Akida since 2019.

A fact, which Arijit Mukherjee highlighted earlier this month in a brief LinkedIn exchange with Nurjana Technologies R&D Lead Engineer Cecilia Pisano, after she had commented under a post of his that they had been applying Akida to real use cases at her company for more than a year:

Arijit Mukherjee is already busy co-organising another Edge AI workshop that will also touch on neuromorphic computing. It is scheduled for 8 October and will be co-located with AIMLSys 2025 in Bangalore:

“EDGE-X 2025: Reimagining edge intelligence with low-power, high-efficiency AI systems”.


View attachment 87849



EDGE-X

The EDGE-X 2025 workshop, part of the Fifth International AI-ML Systems Conference (AIMLSys 2025), aims to address the critical challenges and opportunities in next-generation edge computing. As intelligent systems expand into diverse environments—from IoT sensors to autonomous devices—traditional applications, architectures, and methodologies face new limits. EDGE-X explores innovative solutions across various domains, including on-device learning and inferencing, ML/DL optimization approaches to achieve efficiency in memory/latency/power, hardware-software co-optimization, and emerging beyond von Neumann paradigms including but not limited to neuromorphic, in-memory, photonic, and spintronic computing. The workshop seeks to unite researchers, engineers, and architects to share ideas and breakthroughs in devices, architectures, algorithms, tools and methodologies that redefine performance and efficiency for edge computing.

Topics of Interest (including but not limited to the following):
We solicit submissions describing original and unpublished results focussed on leveraging software agents for software engineering tasks. Topics of interest include but are not limited to:

1.Ultra-Efficient Machine Learning
    • TinyML, binary/ternary neural networks, federated learning
    • Model pruning, compression, quantization, and edge-training
2.Hardware-Software Co-Design
    • RISC-V custom extensions for edge AI
    • Non-von-Neumann accelerators (e.g., in-memory compute, FPGAs)
3.Beyond CMOS & von Neumann Paradigms
    • Neuromorphic computing (spiking networks, event-based sensing)
    • In-memory/compute architectures (memristors, ReRAM)
    • Photonic integrated circuits for low-power signal processing
    • Spintronic logic/memory and quantum-inspired devices
4.System-Level Innovations
    • Near-/sub-threshold computing
    • Power-aware OS/runtime frameworks
    • Approximate computing for error-tolerant workloads
5.Tools & Methodologies
    • Simulators for emerging edge devices (photonic, spintronic)
    • Energy-accuracy trade-off optimization
    • Benchmarks for edge heterogeneous platforms
6.Use Cases & Deployment Challenges
    • Self-powered/swarm systems, ruggedized edge AI
    • Privacy/security for distributed intelligence
    • Sustainability and lifecycle management
  • Program Committee
  • Arijit Mukherjee, Principal Scientist, TCS Research
  • Udayan Ganguly, Professor, IIT Bombay

Cecilia Pisano from Nurjana Technologies has repeatedly liked BrainChip posts on LinkedIn, and hence her Sardinia-based company has been mentioned by several forum members as potentially playing with Akida:

View attachment 88228

And here’s the proof that it was indeed worth keeping an eye on Nurjana Tech:


View attachment 88226



View attachment 88227




View attachment 88229 View attachment 88230 View attachment 88231




View attachment 88232


View attachment 88233



3079DF3B-77D1-49A7-B0D6-5F58DC29D34F.jpeg



As for Synopsys: Actually, the May 2022 Corporate Overview for Investors wasn’t the first time they had used that 2017 (!) image of a BrainChip Accelerator* as a generic (IMO) example of an AI accelerator.

*https://www.edge-ai-vision.com/2017...dware-acceleration-of-neuromorphic-computing/

When googling “BrainChip” on the Synopsys website, I found a September 2020 White Paper (-> way before AKD1000 was released), where that same image was used to represent the category of Edge Computing AI accelerator cards, whereas in the 2022 investor presentation (more than three months after the AKD1000 Mini PCIe Board had been launched), it was used to exemplify “AI accelerators in cloud servers” that “extract insights from large data sets” as part of “Comprehensive IP Solutions for Cloud Computing SoCs”.
However, we have yet to see an image of an actual Akida product in connection with Synopsys.

According to Wikipedia

“Synopsys, Inc. is an American multinational electronic design automation (EDA) company headquartered in Sunnyvale, California, that focuses on silicon design and verification, silicon intellectual property and software security and quality. Synopsys supplies tools and services to the semiconductor design and manufacturing industry. Products include tools for logic synthesis and physical design of integrated circuits, simulators for development, and debugging environments that assist in the design of the logic for chips and computer systems.”

IMO this description of what Synopsys does also makes it unlikely that Tata Elxsi would have found out from Synopsys, of all companies, about BrainChip. Rather, the Tata Elxsi researchers were presumably introduced to BrainChip by engineers (from TCS or elsewhere) who had already happily tested out Akida for actual Edge AI use cases.

After all, the collaboration between Tata Elxsi and BrainChip was announced in August 2023 in order to drive “Akida technology into medical devices and industrial applications” aiming at “greater AI performance at the edge, independent from the cloud”.


FDB255DE-2D55-4B9D-808C-791B43AA123D.jpeg



8E13EC49-963D-4FDB-B849-7BDB331BF0DF.jpeg


E0B65644-CBF4-4DF4-B229-8EDDDED799DF.jpeg
AF46350E-356D-4851-ADE0-5E4E0C611911.jpeg

B129D500-6F1B-44FA-8AE6-CE21E9160CAE.jpeg




Here is a November 2024 article explaining the “One Tata - Group Synergies in Action” philosophy:


“In 2017, after he took over as the Chairman of Tata Sons, N Chandrasekaran laid out his vision of One Tata for the group. “One Tata is a concept I have reflected on for a long time over my journey with the Tata group, and it got crystallised distinctly in my mind after I took over this role,” he had said then.

One Tata is a mindset,” he said. “What does it mean to be a part of the Tata group — for a company, for an employee, for a leader? What is it that we can do better together to draw on our collective strength, to leverage the relationships and partnerships, the know-how and expertise, the goodwill and fortitude we have at our command as a group? The synergetic integration of Tata companies can make a significant difference to each individual within the Tata ecosystem. That, in turn, can make a difference to Tata Sons and to society… Once the ‘One Tata’ concept is ingrained in our DNA, it will be a tool that helps us progress in every dimension.

It set the group on a journey of synergised endeavours — one of the pillars of the Chairman’s early 3S strategy of Simplify, Synergise, Scale — that have seen the group grow from strength to strength.

As the group leaps forward with Speed — a pillar of the Chairman’s expanded 6S strategy along with Supply Chain and Sustainability — this story looks at how group companies are harnessing the power of synergies to leverage global mega trends like supply chain, artificial intelligence (AI), new energy, talent and, thus, fuel growth.

One Tata is a mindset... The synergetic integration of Tata companies can make a significant difference to each individual within the Tata ecosystem. Once the ‘One Tata’ concept is ingrained in our DNA, it will be a tool that helps us progress in every dimension.” - N Chandrasekaran, Chairman, Tata Sons


Growth greater than the sum of its parts


Nowhere has the impact of One Tata been more visible than in the turnaround at Tata Motors and its cascade of benefits on companies like Tata AutoComp Systems (Tata AutoComp) (…)

The One Tata philosophy further benefits group companies with a focus on utilising scale, simplification and synergies between Tata group companies. In addition to benefiting from the high standards of corporate governance and brand value associated with the Tata group, we also have the opportunity to leverage and benefit from the Tata group’s global network for exploring potential business opportunities and acquiring direct access to senior decision makers at potential end clients (…)


Building the nation, together

Leveraging synergies has enabled Tata companies to execute marquee projects — some of national importance.
Tata Projects, bolstered by One Tata synergies from Tata Consulting Engineers, Tata Steel, TCS, Voltas, Titan, Tata Motors and Tata Power, constructed India’s new Parliament building in record time for its inauguration in 2023 (…)

TCS, which has a business unit dedicated to group synergies, has partnered with Tata companies across sectors and continents in this endeavour as well as other digital transformations (…)

The Tata group’s journey of synergising has been transformative at every level. From propelling unprecedented growth for companies and the group at large to invigorating its foray into new businesses that are taking the lead on global megatrends, from transformative digital advancements to innovations that can revolutionise industries, from scaling sustainability ambitions to compounding the impact of community development, the milestones on this journey have successfully underscored One Tata and its infinite possibilities.”




 
  • Like
  • Love
  • Fire
Reactions: 17 users

Frangipani

Top 20

View attachment 87236




From tinyML to the Edge of AI: Connecting AI to the Real World​


Thu, Jun 26 | 01:00 PM - 02:00 PM

Converge Main Stage
Add to Calendar
Full schedule

Session details:​


As artificial intelligence evolves beyond the data center, a new frontier is emerging at the edge—where smart sensors, microcontrollers, and low-power devices are enabling real-time intelligence closer to the physical world. This panel explores the transformative journey from tinyML—AI on resource-constrained devices—to full-fledged edge AI systems that bridge the gap between digital models and real-world action.

Join industry leaders and innovators as they discuss the latest advancements in on-device learning, edge computing architectures, and AI deployment in environments with limited bandwidth, power, or compute. Topics will include practical applications across industries such as healthcare, manufacturing, agriculture, and smart cities, as well as challenges like model compression, latency, privacy, and security.

Whether you're building smart sensors, designing AI pipelines, or looking to understand the future of decentralized intelligence, this session will offer insights into how tinyML and edge AI are reshaping how machines sense, interpret, and interact with the world around them.

Speakers:​

Scott Smyser, Nanoveu
Kurt Busch, Syntiant
Sumeet Kumar, Innatera
Steve Brightfield, BrainChip
GP Singh, Ambient Scientific

Moderators:​

Michael Kuptz, EDGE AI FOUNDATION


Format:
Expo Floor Talks
Track:
Expo Floor Talks

Here’s the video of the 26 June Sensors Converge panel discussion From tinyML to the Edge of AI: Connecting AI to the Real World, organised by the Edge AI Foundation, in which our CMO Steve Brightfield was one of the panelists. 👆🏻





Steve Brightfield talks from 0:52 min, 7:58 min, 26:30 min, 39:45 min and again from 49:53 min, but if you have the time, it is also worth listening to the other panelists’ contributions. They are competitors, yet in the same boat.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 21 users
Here’s the video of the 26 June Sensors Converge panel discussion From tinyML to the Edge of AI: Connecting AI to the Real World, organised by the Edge AI Foundation, in which our CMO Steve Brightfield was one of the panelists. 👆🏻





Steve Brightfield talks from 0:52 min, 7:58 min, 26:30 min, 39:45 min and again from 49:53 min, but if you have the time, it is also worth listening to the other panelists’ contributions. They are competitors, yet in the same boat.

Clearly, IMO, BRN doesn't have a 3-year lead, going by what these guys say, yet as Sean mentioned we should/will be one of the top 3 in the market. There are some serious competitors already with millions of devices in the market.
I didn't hear Steve give our on-chip learning a plug, which disappointed me; isn't that one of our advantages?

Let's go BrainChip.
 
Last edited:
  • Like
  • Thinking
Reactions: 7 users

Frangipani

Top 20

Thanks for sharing that link, @Drewski!

I suspect that not that many forum members have actually clicked on it, though, which is a shame, as that BrainChip X post👇🏻 links to an excellent blog post written by our Chief Development Officer Jonathan Tapson titled “How to Think About Large Language Models on the Edge”.


202C0A95-B686-477A-B586-211F04DF99B2.jpeg


BrainChip also referred to that blog post on LinkedIn today:



5CB94C78-2779-420F-938A-086B271876FC.jpeg





I’d especially like to recommend this very articulate article to forum members enamoured with GenAI responses, which I personally tend to take with a bucket of salt (if I read them at all).




How to Think About Large Language Models on the Edge​


Jonathan Tapson, BrainChip Inc.

ChatGPT was released to the public on November 30th, 2022, and the world – at least, the connected world – has not been the same since. Surprisingly, almost three years later, despite massive adoption, we do not seem much closer to understanding how to use Large Language Models effectively in our personal life, but as importantly, in professional and business applications.

What LLMs Really Are​


A large part of this uncertainty stems from misunderstandings about what an LLM is, and how it really works. In this article I’ll unpack some of that and hopefully give a clear picture of LLMs that enable good decision-making.
The key to understanding LLMs is that they all start as what are called Foundational LLMs. These are actually really simple mechanisms, despite being composed of billions of neural elements. The simplicity arises from the way they are trained.

The training consists of taking some text from the internet – e.g., the whole of Wikipedia in all its languages – then feeding it to the LLM one word at a time. The LLM is then trained to predict the next word most likely to appear in that context. The entirety of the apparent intelligence of an LLM is based on its ability to predict what comes next in a sentence.
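To make the "predict the next word" objective concrete, here is a deliberately tiny toy version (my own sketch, not how BrainChip or anyone else trains real models): a bigram counter that predicts the most frequent successor word. A real LLM learns essentially this objective, only with billions of parameters instead of a lookup table.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then predict the most frequent successor. Real LLMs learn the same
# objective (predict the next token) with billions of parameters.
corpus = "the cat sat on the mat the cat slept on the sofa".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))   # 'cat' (most frequent successor in this toy corpus)
print(predict_next("on"))    # 'the'
```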

Screenshot-2025-07-23-at-8.34.19%E2%80%AFPM-e1753325213421.png


This simple process can be carried out until the LLM has been trained on pretty much any text ever digitized in any language, which builds a model that has an incredible ability to build sentences and paragraphs. LLMs are amazing artifacts, containing a model of all of language, on a scale no human could conceive or visualize. What they do not do, though, is apply any value to the information, or the truthfulness of the sentences and paragraphs they have learned to produce.

An Illusion of Intelligence​


I think of LLMs as being the equivalent of that one person we often have in our social circles – that person who can’t bear conversational silence and fills it with an endless stream-of-consciousness babble. What you are hearing is a grammatical flow of words, more or less connected in context, but there’s no information or usefulness to be derived from most of it.

LLMs are powerful pattern-matching machines but lack human-like understanding, common sense, or ethical reasoning. They can generate content that appears clearly inappropriate to humans but is merely a statistically probable sequence of words based on their training. For example, if you train an LLM on racist or deviant content, it will successfully reproduce this in any context, without any understanding of its meaning.

This lack of factualness notwithstanding, LLMs are amazingly convincing to talk to because they are trained that way. They know, way better than a human, precisely what to say, but they don’t in any real sense know any facts; they know what a fact is supposed to sound like, so they can convincingly produce “facts” on cue.

The Risks of Misusing LLMs​


The tech industry being what it is, multiple products based on foundational LLMs have been launched without much thought about how they would be used, just to see what people would do with them. LLMs are very good at summarizing, and this use case works pretty well, but the inappropriate use of LLMs as search engines has produced lots of unhappy results.

A great way to think of an LLM is that it produces a surface of language, like a giant lumpy golf putting green, in the form of interconnected words. Any input sentence, or “prompt”, is like placing a ball down and putting it. The ball rolls along, connecting words into sentences according to its direction and velocity, until it comes to rest. A different ball, hit from the same point but in a different direction, produces different sentences. An LLM simply takes a bunch of input sentences and extends them along the surface of the language. Just as a golf ball rolls downhill and along the path of least resistance, the LLM output follows the path of the most likely words and assembles them into sentences.

As long as we think of an LLM as a machine for producing the next most likely sentences and paragraphs, we can make great use of it. As soon as we try and use a raw Foundational LLM as a search engine or a source of information, it’s like talking to a pathological liar. We’re going to get a response that sounds great but has only a coincidental relationship with the truth, and the algorithm is only guessing the next words based on the previous words from the text it was trained on.

So, how should we use LLMs? The answers depend on applications, but they are incredibly good at turning pre-existing information into words. Don’t let them find (or make up) the facts, but give them facts and let them explain or impart them.

Enter RAG: Retrieval-Augmented Generation​


One way to use LLMs that offers a simple approach to this problem is the RAG-LLM, where RAG stands for Retrieval-Augmented Generation. RAG-LLMs are usually designed for answering queries on a specific subject, for example, how to operate a particular appliance, tool, or type of machinery. The system works by taking as much of the textual information about the subject as possible – user manuals and so forth – and pre-processing it into small chunks containing a few specific facts. When the user asks a question, the software system identifies the chunk of text which is most likely to contain the answer. The question and the retrieved text are then fed to an LLM, which generates a human-language answer in response to the query.

When one first builds RAG-LLMs, it seems like a completely counter-intuitive way to use LLMs. All the action of finding the answer happens before LLM involvement; why bother with that? Once you understand the issues with LLMs, it becomes obvious that RAG plays to the strengths of LLMs while mostly addressing their problems. There are many more sophisticated ways to enforce factualness on LLMs, but by and large they follow the RAG pattern in some way.
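For anyone who wants to see the shape of the RAG pattern in code, here is a minimal, hypothetical sketch: retrieval by naive keyword overlap and a placeholder where the LLM call would go. A real system would use embeddings for retrieval and an actual model for generation.

```python
# Minimal RAG pattern: retrieve the most relevant chunk first, then ask the
# LLM only to phrase an answer grounded in that chunk. The retrieval here is
# naive keyword overlap and generate_answer() is a placeholder for a real LLM call.

chunks = [
    "To reset the device, hold the power button for ten seconds.",
    "The battery lasts approximately eight hours under normal use.",
    "Firmware updates are installed automatically when the device is docked.",
]

def retrieve(question, chunks):
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

def generate_answer(question, context):
    # Placeholder: in a real system this prompt would be sent to an LLM.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return prompt

question = "How do I reset the device?"
context = retrieve(question, chunks)
print(generate_answer(question, context))
```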

Screenshot-2025-07-23-at-8.39.19%E2%80%AFPM.png


BrainChip’s Approach to LLMs at the Edge


At BrainChip, we build edge hardware systems that can execute LLMs to provide domain-specific intelligent assistance at the Edge. We also build models using an extremely compact LLM topology, Temporal Event Neural Networks (TENNs) based on state-space models combined with pre-processing information in a RAG system. Using this technology platform of optimized hardware and LLM models, BrainChip is able to demonstrate a stand-alone, battery-powered AI assistant that covers a huge amount of information. Like many companies working in this space, we believe we’re learning how to deploy LLMs in a way that starts to deliver on their massive promise in the Edge AI space.


Dr. Jonathan Tapson, Chief Development Officer at BrainChip, was a tenured professor at multiple universities before becoming the Executive Director of the MARCS Institute of Brain, Behavior and Development in Western Sydney, Australia. He founded three successful technology companies as spin-outs from his research, and then became the first CSO of GrAI Matter Labs, later acquired by Snap, Inc. He has a PhD in Engineering and Bachelor’s degrees in Theoretical Physics and Electrical Engineering from the University of Cape Town.
 
  • Like
  • Fire
  • Love
Reactions: 27 users

Frangipani

Top 20
View attachment 88635


View attachment 88636


Edge AI solutions have become critically important in today’s fast-paced technological landscape. Edge AI transforms how we utilize and process data by moving computations close to where data is generated. Bringing AI to the edge not only improves performance and reduces latency but also addresses the concerns of privacy and bandwidth usage. Building edge AI demos requires a balance of cutting-edge technology and engaging user experience. Often, creating a well-designed demonstration is the first step in validating an edge AI use case that can show the potential for real-world deployment.
Building demos can help us identify potential challenges early when building AI solutions at the edge. Presenting proof-of-concepts through demos enables edge AI developers to gain stakeholder and product approval, demonstrating how AI solutions effectively create real value for users, within size, weight and power resources. Edge AI demos help customers visualize the real-time interaction between sensors, software and hardware, helping in the process of designing effective AI use cases. Building a use-case demo also helps developers experiment with what is possible.

Understanding the Use Case​


The journey of building demos starts with understanding the use case – it might be detecting objects, analyzing the sensor data, interacting with a voice enabled chatbot, or asking AI agents to perform a task. The use case should be able to answer questions like – what problem are we solving? Who can benefit from this solution? Who is your target audience? What are the timelines associated with developing the demo? These answers work as the main objectives which guide the development of the demo.
Let’s consider our BrainChip Anomaly Classification C++ project demonstrating real-time classification of mechanical vibrations from an ADXL345 accelerometer into 5 motion patterns: forward-backward, normal, side-side, up-down and tap. This is valuable for industrial applications like monitoring conveyor belt movements, detecting equipment malfunctions, and many more.
Screenshot-2025-07-17-at-8.56.26%E2%80%AFAM.png

Optimizing Pre-processing and Post-processing​


Optimal model performance relies heavily on the effective implementation of both pre-processing and post-processing components. The pre-processing tasks might involve normalization or image resizing or conversion of audio signals to a required format. The post-processing procedure might include decoding outputs from the model and applying threshold filters to refine those results, creating bounding boxes, or developing a chatbot interface. The design of these components must ensure accuracy and reliability.
In the BrainChip anomaly classification project, the model analyzes data from the accelerometer, which records three-dimensional vibration at 100 Hz through accX, accY, and accZ channels. The data was collected using Edge Impulse’s data collection feature. Spectral analysis of the accelerometer signals was performed to extract features from the time-series data during the pre-processing step. You can use this project and retrain the model, or use your own models and optimize them for Akida IP using the Edge Impulse platform, which provides a user-friendly, no-code interface for designing ML workflows and optimizing model performance for edge devices, including BrainChip’s Akida IP.
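As a rough idea of what spectral pre-processing of such a window can look like (my own numpy sketch, not Edge Impulse’s actual DSP block), per-axis FFT magnitudes can be reduced to a handful of frequency-domain features:

```python
import numpy as np

# Rough sketch of spectral feature extraction from a 3-axis accelerometer window
# (100 Hz sampling, 2-second window). This is not Edge Impulse's exact DSP block,
# just an illustration of turning time-series vibration into frequency features.

fs = 100                            # sampling rate in Hz
window = np.random.randn(200, 3)    # placeholder accX/accY/accZ samples

features = []
for axis in range(window.shape[1]):
    signal = window[:, axis] - window[:, axis].mean()   # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))              # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    features.extend([
        spectrum.max(),               # dominant peak height
        freqs[spectrum.argmax()],     # dominant frequency
        spectrum.mean(),              # overall spectral energy
    ])

print(np.round(features, 3))   # 9 features (3 per axis) fed to the classifier
```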

Balancing Performance and Resource Constraints​


Models at the edge need to be smaller and faster while maintaining accuracy. Quantization, along with knowledge distillation and pruning, allows for sustained accuracy together with improved model efficiency. BrainChip’s Akida AI Acceleration Processor IP leverages quantization and also adds sparsity processing to realize extreme levels of energy efficiency and accuracy. It supports real-time, on-device inference with extremely low power.
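As a toy illustration of why quantization shrinks models (not Akida’s actual scheme, which uses very low-bit weights and activations plus sparsity), here is simple symmetric int8 weight quantization in numpy:

```python
import numpy as np

# Toy symmetric quantization of a weight tensor to 8-bit integers.
# This only illustrates the size/precision trade-off; Akida's own scheme
# (very low-bit weights and activations plus sparsity) is more involved.

weights = np.random.randn(256, 128).astype(np.float32)

scale = np.abs(weights).max() / 127.0
q_weights = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q_weights.astype(np.float32) * scale

error = np.abs(weights - dequantized).mean()
print(f"float32 size: {weights.nbytes} bytes, int8 size: {q_weights.nbytes} bytes")
print(f"mean absolute quantization error: {error:.5f}")
```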

Building Interactive Interfaces​


Modern frameworks such as Flask, FastAPI, Gradio, and Streamlit enable users to build interactive interfaces. Flask and FastAPI give developers the ability to build custom web applications with flexibility and control, while Gradio and Streamlit enable quick prototyping of machine learning applications using minimal code. Factors like interface complexity, deployment requirements, and customization needs influence framework selection. The effectiveness of the demo depends heavily on user experience, such as UI responsiveness and intuitive design. The rise of vibe coding and tools like Cursor and Replit has greatly shortened the time needed to build prototypes and enhance the UX, leaving users more time to focus on edge deployment and optimizing performance where it truly matters.
For the Anomaly Classification demo, we implemented user interfaces for both Python and C++ versions to demonstrate real-time inference capabilities. For the Python implementation, we used Gradio to create a simple web-based interface that displays live accelerometer readings and classification results as the Raspberry Pi 5 processes sensor data in real-time. The C++ version features a PyQt-based desktop application that provides more advanced controls and visualizations for monitoring the vibration patterns. Both interfaces allow users to see the model's predictions instantly, making it easy to understand how the system responds to different types of mechanical movements.
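A minimal Gradio front-end in the spirit of that demo could look like the sketch below; classify_window() is a placeholder for the real ADXL345 capture and Akida inference, and the labels are taken from the project description above.

```python
import gradio as gr
import numpy as np

# Minimal Gradio front-end in the spirit of the demo described above.
# classify_window() is a stand-in for the real accelerometer capture + Akida inference.

LABELS = ["forward-backward", "normal", "side-side", "up-down", "tap"]

def classify_window():
    # Placeholder: read a window from the ADXL345 and run the model here.
    scores = np.random.dirichlet(np.ones(len(LABELS)))
    return {label: float(score) for label, score in zip(LABELS, scores)}

demo = gr.Interface(
    fn=classify_window,
    inputs=None,                       # no user input; button press triggers a new reading
    outputs=gr.Label(num_top_classes=3),
    title="Vibration pattern classifier (sketch)",
)

if __name__ == "__main__":
    demo.launch()
```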

Overcoming Common Challenges​


Common challenges in edge AI demo development include handling hardware constraints, performance consistency across different devices, and real-time processing capabilities. By implementing careful optimization combined with robust error handling and rigorous testing under diverse conditions, developers can overcome these challenges. By combining BrainChip’s hardware acceleration with Edge Impulse’s model optimization tools, the solution can show consistent performance across different deployment scenarios while maintaining the low latency required for real-time industrial monitoring.

The Future of Edge AI Demos​


As edge devices become more powerful and AI models more efficient, demos will play a crucial role in demonstrating the practical applications of these advancements. They serve as a bridge between technical innovation and real-world implementation, helping stakeholders understand and embrace the potential of edge AI technology.
If you are ready to turn your edge AI ideas into powerful, real-world demos, you can start building today with BrainChip’s Akida IP and Edge Impulse’s intuitive development platform. Whether you're prototyping an industrial monitoring solution or exploring new user interactions, the tools are here to help you accelerate development and demonstrate what is possible.

Article by:​


Dhvani Kothari is a Machine Learning Solutions Architect at BrainChip. With a background in data engineering, analytics, and applied machine learning, she has held previous roles at Walmart Global Tech and Capgemini. Dhvani has a Master of Science degree in Computer Science from the University at Buffalo and a Bachelor of Engineering in Computer Technology from Yeshwantrao Chavan College of Engineering.

Thanks @Tothemoon24, interesting how BRN is actively promoting Edge Impulse.

The Future of Edge AI Demos​


As edge devices become more powerful and AI models more efficient, demos will play a crucial role in demonstrating the practical applications of these advancements. They serve as a bridge between technical innovation and real-world implementation, helping stakeholders understand and embrace the potential of edge AI technology.
If you are ready to turn your edge AI ideas into powerful, real-world demos, you can start building today with BrainChip’s Akida IP and Edge Impulse’s intuitive development platform. Whether you're prototyping an industrial monitoring solution or exploring new user interactions, the tools are here to help you accelerate development and demonstrate what is possible.


Also from X .

https://x.com/BrainChip_inc/status/1946255663950364866


BrainChip
@BrainChip_inc


Building effective #EdgeAI demos isn’t just about showcasing technology, it’s how you validate use cases, optimize performance, and accelerate deployment. Check out how BrainChip + Edge Impulse enable fast, efficient prototyping with real-time inference. https://bit.ly/edgedemos
Image

3:08 AM · Jul 19, 2025 · 560 Views

One could be forgiven for assuming that the BrainChip-Edge Impulse relationship were back to its previous state of reciprocal love:

“If you are ready to turn your edge AI ideas into powerful, real-world demos, you can start building today with BrainChip’s Akida IP and Edge Impulse’s intuitive development platform. Whether you're prototyping an industrial monitoring solution or exploring new user interactions, the tools are here to help you accelerate development and demonstrate what is possible.

Now picture a few developers full of zest and “ready to turn their edge AI ideas into powerful, real-world demos”.

Unfortunately, this 👇🏻 continues to be the way they are greeted when following our company’s suggestion to “Get started with Edge Impulse” in order to build their own Akida models:


0EAFEEE6-DB2E-4D7D-9F37-5839C44ED9D0.jpeg



What an anticlimax… 🙁
 
  • Sad
  • Like
  • Thinking
Reactions: 12 users
Top Bottom