Tomorrow will be the perfect moment for a nice positive price-sensitive announcement IMO!
Not today… tomorrow… yeah… it would be a nice start into the weekend
Worth the read!
Neuromorphic Computing Advantages
Moving to Spiking Neural Networks (SNNs) represents a fundamental shift from conventional neural processing. Unlike standard neural networks, which process continuous values, SNNs encode and transmit information as discrete spike events. This approach yields sparse, event-driven computation that greatly reduces energy use while preserving diagnostic accuracy.
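To make the spike idea concrete, here is a minimal rate-coding sketch in Python/NumPy. This is a generic illustration of one common encoding scheme, not QANA's actual encoder (which the article does not specify): each activation in [0, 1] becomes the per-timestep firing probability of a binary spike train, so information is carried by sparse events rather than continuous values.

```python
import numpy as np

def rate_encode(activations, timesteps=16, seed=0):
    """Rate-code activations in [0, 1] into binary spike trains.

    Toy example only (assumed scheme, not QANA's published encoder).
    Larger activations fire more often; zeros stay silent, which is
    where the sparse, event-driven energy savings come from.
    """
    rng = np.random.default_rng(seed)
    probs = np.clip(np.asarray(activations, dtype=np.float64), 0.0, 1.0)
    # shape: (timesteps, *activations.shape); entries are 0/1 spike events
    return (rng.random((timesteps,) + probs.shape) < probs).astype(np.uint8)

spikes = rate_encode([0.0, 0.1, 0.9])
print(spikes.mean(axis=0))  # empirical firing rates approximate the inputs
```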
The deployment platform, BrainChip’s Akida (ASX:BRN;OTC:BRCHF) neuromorphic processor, natively supports SNN operations and enables on-chip incremental learning. This capability allows the system to adapt to new patient data without requiring complete model retraining, aligning with clinical workflows that continuously encounter new diagnostic cases.
Spike-Compatible Architecture Design
The architecture incorporates specific design elements that facilitate seamless conversion from standard neural networks to spike-based processing. The spike-compatible feature transformation module employs separable convolutions with quantization-aware normalization, ensuring all activations remain within bounds suitable for spike encoding while preserving diagnostic information.
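As a rough picture of what such a module might look like, here is a short PyTorch sketch of a depthwise-separable convolution followed by a bounded activation. The ReLU6-style clipping is my stand-in assumption for the quantization-aware normalization described above; QANA's exact formulation is not given in the article.

```python
import torch.nn as nn

class SpikeCompatibleBlock(nn.Module):
    """Separable conv + bounded activation (illustrative sketch only)."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Depthwise + pointwise = separable convolution
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.norm = nn.BatchNorm2d(out_ch)
        # A hard upper bound keeps activations in a fixed range, which is
        # friendly to low-bit quantization and spike encoding.
        self.act = nn.ReLU6()

    def forward(self, x):
        return self.act(self.norm(self.pointwise(self.depthwise(x))))
```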
The Squeeze-and-Excitation blocks implement adaptive channel weighting through a two-stage bottleneck mechanism, providing additional regularization particularly beneficial for small, imbalanced medical datasets. The quantized output projection produces SNN-ready outputs that can be directly processed by neuromorphic hardware without additional conversion steps.
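The Squeeze-and-Excitation mechanism itself is a well-known building block; a canonical two-stage bottleneck looks roughly like this in PyTorch (the reduction ratio of 4 is an arbitrary choice for illustration, not a figure from the paper):

```python
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Standard SE block: global squeeze, bottleneck, channel re-weighting."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: one value per channel
        self.fc = nn.Sequential(             # two-stage bottleneck
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                         # excite: adaptive channel weighting
```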
Comprehensive Performance Validation
Experimental validation was conducted on both the HAM10000 public benchmark dataset and a real-world clinical dataset from Hospital Sultanah Bahiyah, Malaysia. The clinical dataset comprised 3,162 dermatoscopic images from 1,235 patients, providing real-world validation beyond standard benchmarks.
On HAM10000, QANA achieved 91.6% Top-1 accuracy and 82.4% macro F1 score, maintaining comparable performance on the clinical dataset with 90.8% accuracy and 81.7% macro F1 score. These results demonstrate consistent performance across both standardized benchmarks and clinical practice conditions.
The system showed balanced performance across all seven diagnostic categories, including minority classes such as dermatofibroma and vascular lesions. Notably, melanoma detection achieved 95.6% precision and 93.3% recall, critical metrics for this potentially life-threatening condition.
Hardware Performance and Energy Analysis
When deployed on Akida hardware, the system delivers 1.5 millisecond inference latency and consumes only 1.7 millijoules of energy per image. These figures represent reductions of over 94.6% in inference latency and 98.6% in energy consumption compared to GPU-based CNN implementations.
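For context, those percentages imply GPU baselines that can be back-calculated in a couple of lines (my own arithmetic from the quoted figures, not numbers stated in the article):

```python
# Back-of-the-envelope check of the quoted reductions (illustrative only)
akida_latency_ms, akida_energy_mj = 1.5, 1.7
gpu_latency_ms = akida_latency_ms / (1 - 0.946)  # ≈ 27.8 ms implied baseline
gpu_energy_mj = akida_energy_mj / (1 - 0.986)    # ≈ 121 mJ implied baseline
print(f"Implied GPU baseline: {gpu_latency_ms:.1f} ms, {gpu_energy_mj:.0f} mJ")
```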
Comparative analysis against state-of-the-art architectures converted to SNNs showed QANA’s superior performance across all metrics. While conventional architectures experienced accuracy drops of 3–7% after SNN conversion, QANA maintained high accuracy through its quantization-aware design principles.
Good pick up, sounds like AKIDA may be involved given our links with Lockheed Martin.
AI-Powered Radar Brings Instant Threat Detection to the Sea
Lockheed Martin has trialed a compact system that adapts in real time and re-tasks itself mid-mission.
Lockheed Martin successfully demonstrated technology to automatically recognize targets using an AI-powered Synthetic Aperture Radar. Image: Lockheed Martin
Published on 18 July 2025
Author: Giulia Bernacchi
In trials off the US West Coast, teams from Lockheed Martin’s AI Center, Skunk Works, and Rotary and Mission Systems demonstrated the system’s ability to detect and classify targets in near real-time.
Machine learning tools enabled SAR to retrain and adapt on the fly, fine-tuning performance as conditions changed.
With autonomous sensor control, the radar could re-task itself mid-mission, shifting focus without human input.
The system ran entirely on compact, low-power hardware, without the need for large ground stations or cloud computing, proving its potential for fast, field-ready deployment.
“This is a major leap for harnessing AI to help enhance situational awareness and decision-making capabilities, with unparalleled threat identification across extended ranges and all-weather conditions,” stated John Clark, senior VP of technology and strategic innovation at Lockheed Martin.
SAR Development
SAR technology is commonly used to detect ships at sea, but typically requires human analysts to interpret the images.
With AI, it can now distinguish civilian from military vessels without the need for manual review.
Testing continues this year to refine sensor integration and boost maritime readiness.
Data from the trials will also feed into other Lockheed Martin autonomous platforms, including collaborative combat aircraft and integrated systems.
No doubt Frontgrade Gaisler will be spruiking GR801 - their first neuromorphic AI solution for space, powered by Akida - during their upcoming Asia Tour through India, Singapore and Japan:
#gaisler2025asiatour #spacetech | Frontgrade Gaisler
We are touring across Asia this year! Frontgrade Gaisler is glad to engage directly with partners, agencies, and customers across India, Singapore, and Japan. You can find us here: 🇮🇳 India July 21-22: Ahmedabad July 23: Bangalore 🇸🇬 Singapore July 25 🇯🇵 Japan July 28 - August 1: SPEXA –...
www.linkedin.com
Not only ESA and NASA are interested in using Akida for space research - ISRO, the Indian Space Research Organisation, is, too!
Satellite Technology
- Area D: Electro-Optical Sensor Technology (SAC)
- Sub-Area D4: Electronics System Design and Development (SAC)
- Topic D4.15: Neuromorphic computing – Low-complexity Artificial Intelligence (SAC)
Look, telling someone to “just sell and move on” when they raise valid concerns isn’t just dismissive, it misses the whole point. Most of us who are still here chose to be long-term holders because we believe in the tech and the vision. We’re not flipping shares like day traders. We're backing something we thought would change the game, and many of us are deeply invested emotionally, not just financially. It’s easy to talk about disruptive potential, five-year horizons, and “trust the board” narratives. But after years of watching major partners come and go with near zero revenue to show for it, you can’t blame people for finally questioning the execution. Feedback and excitement don’t pay salaries or fund R&D, revenue does - or in our case, hard earned money of mum and dad investors' savings minus the fees and profits for LDA capital. And frankly, brushing off concerns by suggesting people give up and walk away sounds like something someone would do if they gave up on their own team or friends the moment things got hard. That’s not loyalty. That’s avoidance. We want to see this succeed... not just in theory, but in practice. Nobody’s asking for instant results, just real transparency and a path to monetisation that doesn’t require blind faith. It's not negativity ... it's accountability.

Yes, we all know where you're coming from. It is frustrating: either we accept that this is how it's going to be for at least another 12 months, or we don't accept or trust our Board to deliver on all the alleged interaction that is apparently taking place. It's not easy to process when your brain is continually throwing up negatives.

You can always sell your shares and accept a loss, or appreciate that the technology is top class and that the market will finally realize that BrainChip's Akida is market-changing, disruptive technology that will benefit millions of people worldwide, especially in the healthcare industry and through the breakthrough technology that Akida will bring to the space industry. We are fast approaching an intersection that nobody wants to enter; once again, it's referred to as change, or more to the point, disruptive technology. Our technology, which Peter first started researching more than 25 years ago, is finally approaching that point in time (within 5 years) where everything he told me, and many others, will be proven true.

Maybe IBM or Intel know something that our company doesn't. Yes, Quantum and its qubits may be a threat further down the road, but the next port of call for humanity is Spiking Neural Networks, and I'm referring to native SNNs.

Finally, I personally believe that our current partners, like the AFRL and RTX, know whose technology is respectfully termed SOTA.

Just my opinion... either way, it's not right nor wrong... take it easy... Tech x
Take it easy too. But let’s stop pretending skepticism is the enemy and that this ship is sailing perfectly with a top-class captain - it's far from it IMO...
This press release on 11 July 2025 got me wondering whether Tata Elxsi was first introduced to BrainChip by Synopsys.
See the screenshot below showing a picture of BrainChip's Akida in Synopsys's "Corporate Overview for Investors May 2022"
BrainChip and Tata Elxsi officially announced their partnership on August 28, 2023, when Tata Elxsi joined BrainChip’s Essential AI ecosystem to integrate Akida neuromorphic technology into medical and industrial applications.
Tata’s continued collaboration with Synopsys (2025) builds upon their existing relationship with BrainChip, completing a triangle of opportunity IMO.
Synopsys + Tata could potentially model and test complex ECUs before production, which is critical for SDVs.
BrainChip + Tata could potentially allow these and future ECUs to embed real-time, energy-efficient AI inference.
IMO. DYOR.
Press Releases
Tata Elxsi and Synopsys Collaborate to Accelerate Software-Defined Vehicle Development through Advanced ECU Virtualization Capabilities
Date: Jul 11 2025
Integrated capabilities aim to simplify and speed software development and testing to help reduce related costs and de-risk production timelines
Bengaluru, India – July 11, 2025 — Tata Elxsi, a global leader in design and technology services, today announced the signing of a Memorandum of Understanding (MoU) with Synopsys, a leader in silicon to systems design solutions, to collaborate to deliver advanced automotive virtualization solutions. The MoU was signed at the SNUG India 2025 event in Bengaluru by senior leaders from both companies.
The collaboration will provide customers pre-verified, integrated solutions and services that make it easy to design and deploy virtual electronic control units (vECUs), a cornerstone technology critical for efficient software development and testing in today’s software-defined vehicles. The collaboration brings together Tata Elxsi’s engineering capabilities in embedded systems and integration with Synopsys’ industry-leading virtualization solutions that are used by more than 50 global automotive OEMs and Tier 1 suppliers to help reduce development complexity and cost, improve quality of software systems, and de-risk vehicle production timelines.
Together, the companies are already collaborating on programs with several global customers to enable vECUs, as well as software bring-up, board support package (BSP) integration, and early-stage software validation. These solutions are being deployed across vehicle domains such as powertrain, chassis, body control, gateway, and central compute, helping customers simulate real-world scenarios, validate software early, and reduce reliance on physical prototypes.
Through the collaboration, Synopsys and Tata Elxsi will further explore opportunities to scale and accelerate the deployment of electronics digital twins for multi-ECU and application specific systems.
“Our partnership with Synopsys reflects a future-forward response to how vehicle development is evolving. As OEMs move away from traditional workflows, there is growing demand for engineering services that are tightly integrated with virtualization tools. This strategic collaboration enables us to jointly address that shift with focus, flexibility, and domain depth,” said Sundar Ganapathi, Chief Technology Officer of Automotive, Tata Elxsi.
“The automotive industry’s transformation to software-defined vehicles requires advanced virtualization capabilities from silicon to systems. Our leadership enabling automotive electronics digital twins, combined with Tata Elxsi’s engineering scale and practical experience operationalising automotive system design, will simplify the adoption of virtual ECUs and thereby accelerate software development and testing to improve quality and time to market,” said Marc Serughetti, Vice President, Synopsys Product Management & Markets Group.
Tata Elxsi & Synopsys Partner to Speed SDV with Advanced ECU Virtualization
Discover how Tata Elxsi and Synopsys are accelerating software-defined vehicle (SDV) innovation with advanced ECU virtualization for faster development and validation.
www.tataelxsi.com
Reminder - #28,573
Arijit Mukherjee is already busy co-organising another Edge AI workshop that will also touch on neuromorphic computing. It is scheduled for 8 October and will be co-located with AIMLSys 2025 in Bangalore:
“EDGE-X 2025: Reimagining edge intelligence with low-power, high-efficiency AI systems”.
#edgeai #tinyml #neuromorphic #spintronics #photoniccomputing #aimlsys2025 #lowpowerai #futuretech #callforpapers #linkedinresearch #aiatedge | Arijit Mukherjee
🔋✨ Rethinking Edge Intelligence at EDGE-X 2025 📍 Co-located with AI-ML Systems Conference 2025 | Chancery Pavilion, Bangalore | October 8, 2025 As intelligent systems scale across diverse environments—from IoT sensors to autonomous platforms—traditional architectures face new limits. The EDGE-X...
www.linkedin.com
Workshop-EDGE-X | The Fifth International Conference on AI ML Systems
www.aimlsystems.org
EDGE-X
The EDGE-X 2025 workshop, part of the Fifth International AI-ML Systems Conference (AIMLSys 2025), aims to address the critical challenges and opportunities in next-generation edge computing. As intelligent systems expand into diverse environments—from IoT sensors to autonomous devices—traditional applications, architectures, and methodologies face new limits. EDGE-X explores innovative solutions across various domains, including on-device learning and inferencing, ML/DL optimization approaches to achieve efficiency in memory/latency/power, hardware-software co-optimization, and emerging beyond von Neumann paradigms including but not limited to neuromorphic, in-memory, photonic, and spintronic computing. The workshop seeks to unite researchers, engineers, and architects to share ideas and breakthroughs in devices, architectures, algorithms, tools and methodologies that redefine performance and efficiency for edge computing.
Topics of Interest (including but not limited to the following):
We solicit submissions describing original and unpublished results focused on next-generation edge intelligence. Topics of interest include but are not limited to:
1. Ultra-Efficient Machine Learning
- TinyML, binary/ternary neural networks, federated learning
- Model pruning, compression, quantization, and edge-training
2. Hardware-Software Co-Design
- RISC-V custom extensions for edge AI
- Non-von-Neumann accelerators (e.g., in-memory compute, FPGAs)
3. Beyond CMOS & von Neumann Paradigms
- Neuromorphic computing (spiking networks, event-based sensing)
- In-memory/compute architectures (memristors, ReRAM)
- Photonic integrated circuits for low-power signal processing
- Spintronic logic/memory and quantum-inspired devices
4. System-Level Innovations
- Near-/sub-threshold computing
- Power-aware OS/runtime frameworks
- Approximate computing for error-tolerant workloads
5. Tools & Methodologies
- Simulators for emerging edge devices (photonic, spintronic)
- Energy-accuracy trade-off optimization
- Benchmarks for heterogeneous edge platforms
6. Use Cases & Deployment Challenges
- Self-powered/swarm systems, ruggedized edge AI
- Privacy/security for distributed intelligence
- Sustainability and lifecycle management
Program Committee:
- Arijit Mukherjee, Principal Scientist, TCS Research
- Udayan Ganguly, Professor, IIT Bombay
Cecilia Pisano from Nurjana Technologies has repeatedly liked BrainChip posts on LinkedIn, which is why her Sardinia-based company has been mentioned by several forum members as potentially playing with Akida:
And here’s the proof that it was indeed worth keeping an eye on Nurjana Tech:
#edgeai #tinyml #neuromorphic #spintronics #photoniccomputing #aimlsys2025 #lowpowerai #futuretech #callforpapers #linkedinresearch #aiatedge | Arijit Mukherjee
Paper submissions now open for EDGE-X: Rethinking Edge Intelligence 📍 Co-located with AI-ML Systems Conference 2025 | Chancery Pavilion, Bangalore | October 8, 2025 🧠 Topics include: • On-device learning and inference • ML/DL optimization for memory, latency, and power • Hardware-software...
www.linkedin.com
Home - NurjanaTech
NurjanaTech provides a wide range of engineering capabilities to support a global customer base whose operations and complex environments require a partner that possesses unique expertise and comprehensive solutions.
www.nurjanatech.com
Sensors Converge
www.sensorsconverge.com
From tinyML to the Edge of AI: Connecting AI to the Real World | Sensors Converge
www.sensorsconverge.com
From tinyML to the Edge of AI: Connecting AI to the Real World
Thu, Jun 26 | 01:00 PM - 02:00 PM
Converge Main Stage
Session details:
As artificial intelligence evolves beyond the data center, a new frontier is emerging at the edge—where smart sensors, microcontrollers, and low-power devices are enabling real-time intelligence closer to the physical world. This panel explores the transformative journey from tinyML—AI on resource-constrained devices—to full-fledged edge AI systems that bridge the gap between digital models and real-world action.
Join industry leaders and innovators as they discuss the latest advancements in on-device learning, edge computing architectures, and AI deployment in environments with limited bandwidth, power, or compute. Topics will include practical applications across industries such as healthcare, manufacturing, agriculture, and smart cities, as well as challenges like model compression, latency, privacy, and security.
Whether you're building smart sensors, designing AI pipelines, or looking to understand the future of decentralized intelligence, this session will offer insights into how tinyML and edge AI are reshaping how machines sense, interpret, and interact with the world around them.
Speakers:
- Scott Smyser, Nanoveu
- Kurt Busch, Syntiant
- Sumeet Kumar, Innatera
- Steve Brightfield, BrainChip
- GP Singh, Ambient Scientific
Moderator:
- Michael Kuptz, EDGE AI FOUNDATION
Format: Expo Floor Talks
Track: Expo Floor Talks
Here’s the video of the 26 June Sensors Converge panel discussion “From tinyML to the Edge of AI: Connecting AI to the Real World”, organised by the Edge AI Foundation, in which our CMO Steve Brightfield was one of the panelists.
Steve Brightfield talks from 0:52 min, 7:58 min, 26:30 min, 39:45 min and again from 49:53 min, but if you have the time, it is also worth listening to the other panelists’ contributions. They are competitors, yet in the same boat.
Edge AI solutions have become critically important in today’s fast-paced technological landscape. Edge AI transforms how we utilize and process data by moving computations close to where data is generated. Bringing AI to the edge not only improves performance and reduces latency but also addresses the concerns of privacy and bandwidth usage. Building edge AI demos requires a balance of cutting-edge technology and engaging user experience. Often, creating a well-designed demonstration is the first step in validating an edge AI use case that can show the potential for real-world deployment.
Building demos can help us identify potential challenges early when building AI solutions at the edge. Presenting proof-of-concepts through demos enables edge AI developers to gain stakeholder and product approval, demonstrating how AI solutions effectively create real value for users within size, weight, and power constraints. Edge AI demos help customers visualize the real-time interaction between sensors, software, and hardware, aiding the design of effective AI use cases. Building a use-case demo also helps developers experiment with what is possible.
Understanding the Use Case
The journey of building demos starts with understanding the use case – it might be detecting objects, analyzing sensor data, interacting with a voice-enabled chatbot, or asking AI agents to perform a task. The use case should answer questions like: What problem are we solving? Who can benefit from this solution? Who is your target audience? What are the timelines for developing the demo? These answers serve as the main objectives that guide the development of the demo.
Let’s consider our BrainChip Anomaly Classification C++ project, which demonstrates real-time classification of mechanical vibrations from an ADXL345 accelerometer into five motion patterns: forward-backward, normal, side-side, up-down, and tap. This use case is valuable for industrial applications like monitoring conveyor belt movements, detecting equipment malfunctions, and many more.
Optimizing Pre-processing and Post-processing
Optimal model performance relies heavily on the effective implementation of both pre-processing and post-processing components. The pre-processing tasks might involve normalization, image resizing, or conversion of audio signals to a required format. The post-processing procedure might include decoding model outputs, applying threshold filters to refine results, creating bounding boxes, or developing a chatbot interface. The design of these components must ensure accuracy and reliability.
In the BrainChip anomaly classification project, the model analyzes data from the accelerometer, which records 100 Hz three-dimensional vibration through accX, accY, and accZ channels. The data was collected using Edge Impulse’s data collection feature. During the pre-processing step, spectral analysis of the accelerometer signals was performed to extract features from the time-series data. You can take this project and retrain the model, or bring your own models and optimize them for Akida IP using the Edge Impulse platform, which provides a user-friendly, no-code interface for designing ML workflows and optimizing model performance for edge devices, including BrainChip’s Akida IP.
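As a rough illustration of that pre-processing step, here is a small NumPy sketch that derives a few spectral features from a one-second window of 100 Hz accelerometer data. Edge Impulse's spectral-analysis block computes its own richer feature set; the function below is an assumed simplification just to show the idea.

```python
import numpy as np

def spectral_features(window, fs=100.0):
    """Extract simple spectral features from a (samples, 3) accelerometer
    window (accX, accY, accZ sampled at 100 Hz).

    Sketch only: not Edge Impulse's actual feature set, just an
    FFT-based illustration of spectral analysis on time-series data.
    """
    feats = []
    for axis in range(window.shape[1]):
        sig = window[:, axis] - window[:, axis].mean()  # remove DC offset
        spectrum = np.abs(np.fft.rfft(sig))             # magnitude spectrum
        freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
        feats += [
            sig.std(),                    # RMS-like vibration energy
            freqs[np.argmax(spectrum)],   # dominant frequency (Hz)
            spectrum.max(),               # peak spectral magnitude
        ]
    return np.asarray(feats, dtype=np.float32)

# Example: one second of 3-axis data -> 9 features for the classifier
features = spectral_features(np.random.randn(100, 3))
```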
Balancing Performance and Resource Constraints
Models at the edge need to be smaller and faster while maintaining accuracy. Quantization, together with knowledge distillation and pruning, allows for sustained accuracy alongside improved model efficiency. BrainChip’s Akida AI Acceleration Processor IP leverages quantization and also adds sparsity processing to realize extreme levels of energy efficiency and accuracy. It supports real-time, on-device inference at extremely low power.
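To show what quantization does at its simplest, here is a generic affine 8-bit quantization sketch in NumPy. This is the textbook technique, not BrainChip's actual flow; in practice, Akida models are quantized through BrainChip's and Edge Impulse's tooling.

```python
import numpy as np

def quantize_uint8(x):
    """Affine post-training quantization of a float tensor to 8 bits.

    Generic illustration only; Akida's real quantization pipeline is
    handled by dedicated tooling, not this code.
    """
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 or 1.0  # guard against constant tensors
    q = np.clip(np.round((x - lo) / scale), 0, 255).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    return q.astype(np.float32) * scale + lo

w = np.random.randn(64, 32).astype(np.float32)
q, s, z = quantize_uint8(w)
err = np.abs(dequantize(q, s, z) - w).max()  # worst-case rounding error
print(f"max reconstruction error: {err:.5f}")
```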
Building Interactive Interfaces
Modern frameworks such as Flask, FastAPI, Gradio, and Streamlit enable users to build interactive interfaces. Flask and FastAPI give developers the ability to build custom web applications with flexibility and control, while Gradio and Streamlit enable quick prototyping of machine learning applications with minimal code. Factors like interface complexity, deployment requirements, and customization needs influence framework selection. The effectiveness of the demo depends heavily on user experience, such as UI responsiveness and intuitive design. The rise of vibe coding and tools like Cursor and Replit has greatly accelerated prototype building and UX refinement, freeing users to focus on edge deployment and optimizing performance where it truly matters.
For the Anomaly Classification demo, we implemented user interfaces for both Python and C++ versions to demonstrate real-time inference capabilities. For the Python implementation, we used Gradio to create a simple web-based interface that displays live accelerometer readings and classification results as the Raspberry Pi 5 processes sensor data in real-time. The C++ version features a PyQt-based desktop application that provides more advanced controls and visualizations for monitoring the vibration patterns. Both interfaces allow users to see the model's predictions instantly, making it easy to understand how the system responds to different types of mechanical movements.
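For a flavor of how little code such an interface needs, here is a hypothetical Gradio sketch in the spirit of the Python demo described above. The model call is faked with random scores; the label names come from the article, while the function names and layout are my own illustration.

```python
import gradio as gr
import numpy as np

LABELS = ["forward-backward", "normal", "side-side", "up-down", "tap"]

def classify(acc_x, acc_y, acc_z):
    """Placeholder inference: returns a fake probability per motion class.

    In the real demo this would run the Akida-optimized model on a
    window of accelerometer samples, not a single (x, y, z) reading.
    """
    scores = np.random.dirichlet(np.ones(len(LABELS)))  # stand-in output
    return dict(zip(LABELS, map(float, scores)))

demo = gr.Interface(
    fn=classify,
    inputs=[gr.Number(label=a) for a in ("accX", "accY", "accZ")],
    outputs=gr.Label(num_top_classes=5),
    title="Anomaly Classification (sketch)",
)

if __name__ == "__main__":
    demo.launch()  # serves a local web UI showing live classification
```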
Overcoming Common Challenges
Common challenges in edge AI demo development include handling hardware constraints, maintaining performance consistency across different devices, and achieving real-time processing. By implementing careful optimization combined with robust error handling and rigorous testing under diverse conditions, developers can overcome these challenges. By combining BrainChip's hardware acceleration with Edge Impulse's model optimization tools, the solution can show consistent performance across different deployment scenarios while maintaining the low latency required for real-time industrial monitoring.
The Future of Edge AI Demos
As edge devices become more powerful and AI models more efficient, demos will play a crucial role in demonstrating the practical applications of these advancements. They serve as a bridge between technical innovation and real-world implementation, helping stakeholders understand and embrace the potential of edge AI technology.
If you are ready to turn your edge AI ideas into powerful, real-world demos, you can start building today with BrainChip’s Akida IP and Edge Impulse’s intuitive development platform. Whether you're prototyping an industrial monitoring solution or exploring new user interactions, the tools are here to help you accelerate development and demonstrate what is possible.
- Explore the BrainChip Developer Hub
- Get started with Edge Impulse
Article by:
Dhvani Kothari is a Machine Learning Solutions Architect at BrainChip. With a background in data engineering, analytics, and applied machine learning, she has held previous roles at Walmart Global Tech and Capgemini. Dhvani has a Master of Science degree in Computer Science from the University at Buffalo and a Bachelor of Engineering in Computer Technology from Yeshwantrao Chavan College of Engineering.
Thanks @Tothemoon24, interesting how BRN are actively promoting Edge Impulse.
Also from X.
https://x.com/BrainChip_inc/status/1946255663950364866
BrainChip
@BrainChip_inc
Building effective #EdgeAI demos isn’t just about showcasing technology, it’s how you validate use cases, optimize performance, and accelerate deployment. Check out how BrainChip + Edge Impulse enable fast, efficient prototyping with real-time inference. https://bit.ly/edgedemos
3:08 AM · Jul 19, 2025 · 560 Views
Morning @smoothsailing18,
What happened to this with Renesas, can anyone tell me? Shouldn't we be seeing revenue if it happened, or was it halted for some reason?
Renesas manufactures the Akida IP on its R-Car V3H system-on-a-chip (SoC) platform. The Akida IP is a neuromorphic processor designed to accelerate artificial intelligence (AI) applications. It is based on BrainChip's Akida neuromorphic processor architecture, which is inspired by the human brain. The Akida IP can run AI applications at much lower power consumption than traditional processors, making it ideal for a wide range of applications, including edge AI, automotive, and industrial automation.
In Dec '20 Renesas took out a licence.