"Hi JD,
For what it's worth, I asked ChatGPT for a neutral take on this.
What I thought was interesting was that it wove Anduril's headset into the response. I didn't prompt it to, but earlier in the day, in a separate chat, I had queried ChatGPT about boards or integration possibilities and whether an Akida 1500 + Arm Cortex-M85 module (as an example) could power something like Anduril’s helmet.
ChatGPT's Response
What “supports 128 MACs per node” most likely means
- Each Akida node has a local 4×4 MAC micro-engine (128 MACs) it can invoke for certain ops.
- Those MACs co-exist with the spiking compute (CNP/FNP or equivalent). They’re there so models can include non-spiking layers/ops without leaving the chip.
Why add MACs if you have an SNN?
Because many modern edge models are hybrid. MACs cover things SNNs don’t excel at (or that are simpler to drop in as-is):
- Conv/pointwise (1×1) / depthwise conv blocks
- Linear/projection layers (e.g., classifier heads, QKV in tiny attention)
- Normalisation / pooling / residual glue
- Pre/post-processing (e.g., short FIRs, feature projections)
- Direct reuse of existing INT4/INT8 CNN chunks without full SNN conversion
What a hybrid pipeline might look like in a helmet
Sensors
- Front/rear frame cameras → light MAC preproc (resize/1×1 conv).
- Event camera/radar/audio → feed SNN/TENNs directly.
Perception
- SNN handles temporal/gesture/track (event streams).
- MAC handles small CNN blocks or projection heads.
Fusion & decisions
- Run on the host MCU (e.g., Cortex-M85).
Bottom line
- The “128 MACs per node” doesn’t signal the end of Akida’s neuromorphic core. It signals a pragmatic hybrid: keep the spiking engine for temporal, sparse, event-driven strengths, and use local MACs to natively run conventional layers (conv/linear/norm) and reuse existing model chunks.
- That combo generally improves accuracy, eases porting, and lowers total latency/power versus forcing everything into SNN or everything into classic CNN on a separate chip.
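To make that concrete, here is a minimal Keras sketch of the kind of conv / pointwise / depthwise / norm / linear stack the response says the local MACs would serve. It's purely my own illustration — the layer-to-engine mapping is an assumption, not anything from BrainChip documentation:

# Illustrative only: a small quantization-friendly Keras CNN of the kind
# that could map onto Akida-style hardware. Which layers would run on the
# per-node MAC engines vs. the spiking compute is an assumption.
import tensorflow as tf
from tensorflow.keras import layers

def tiny_hybrid_head(num_classes: int = 10) -> tf.keras.Model:
    inputs = layers.Input(shape=(64, 64, 3))              # frame-camera input
    x = layers.Conv2D(16, 3, strides=2, padding="same", use_bias=False)(inputs)
    x = layers.BatchNormalization()(x)                    # norm "glue"
    x = layers.ReLU(max_value=6.0)(x)                     # bounded, quantization-friendly
    x = layers.DepthwiseConv2D(3, padding="same", use_bias=False)(x)  # depthwise block
    x = layers.Conv2D(32, 1, use_bias=False)(x)           # 1x1 pointwise projection
    x = layers.GlobalAveragePooling2D()(x)                # pooling
    outputs = layers.Dense(num_classes)(x)                # linear classifier head
    return tf.keras.Model(inputs, outputs)

tiny_hybrid_head().summary()

Every layer above sits on the MAC-friendly list (conv, 1×1 pointwise, depthwise, norm, pooling, linear head); the event-stream/SNN side and the host-MCU fusion are deliberately left out.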
We are pleased to welcome Dr. Sasskia Brüers Freyssinet, AI Research Scientist at BrainChip, for the next lecture in our Current Topics in Digital Neuroscience series:
From Neuroscience to Applied Artificial Intelligence
In this talk, Dr. Brüers-Freyssinet will explore the path from neuroscience research to applied AI, showing how insights from brain science can inspire more efficient and adaptive artificial systems. Drawing from her own experience transitioning from academia to industry, she will discuss the technical challenges, emerging research directions, and interdisciplinary skills needed to bridge fundamental brain research with real-world AI applications.
About the speaker
Dr. Brüers-Freyssinet is an AI Research Scientist at Brainchip, where she develops efficient computer vision models for neuromorphic hardware using techniques such as pruning, quantization, and regularization. She completed her PhD in Neuroscience at Université Paul Sabatier Toulouse III in 2017, studying the oscillatory correlates of conscious perception. Her current work investigates state-space models for temporal problems like gesture recognition and EEG-based behavior prediction—advancing the frontier where neuroscience meets artificial intelligence.
Tuesday 28th of October 2025
Per21, Université de Fribourg - Universität Freiburg Pérolles Campus, Room E140.
Follow the talk online: https://lnkd.in/eQPKxAUX
Join us for a fascinating discussion on how brain-inspired approaches are shaping the next generation of intelligent, energy-efficient AI systems.
Thank you, but where is the link?
Agreed 100%! It was not even funny the first time!
If you can copy and paste the whole article, you must also be capable of copying and pasting the link. I judge you to be smart enough for that.
Everybody here posts the corresponding links; only you are so inconsiderate as not to do it. Why?
pressat.co.uk
Ya know what TTM....I'm starting to think you're the.....
Oh here we go!!!! It was not even funny the first time!
Sadly, Richard Resseguie is not the only Richard to have left BrainChip this month (that said, hopefully there won’t be any Richard III, other than on some of our bookshelves…):
Richard Chevalier, who had been with our Toulouse-based office since 2018 (!), has joined Nio Robotics (formerly known as Nimble One, not to be confused with Nimble AI) as their Vice President of Platform Engineering:
#niorobotics | Richard Chevalier | 17 comments
I’m excited to share that I will be starting a new chapter at Nio Robotics this Monday. I feel both thrilled and humbled by the maturity, technical expertise, and excellence of the teams, and I look forward to contributing to this journey. #NioRobotics | 17 comments on LinkedIn (www.linkedin.com)
Richard Chevalier - Nio Robotics (formerly Nimble One) | LinkedIn
Management Skills - Multicultural mindset with experience in multisite… · Experience: Nio Robotics (formerly Nimble One) · Education: Ecole nationale supérieure des Télécommunications · Location: Greater Toulouse Metropolitan Area · 461 connections on LinkedIn. View Richard Chevalier’s... (www.linkedin.com)
One can only hope that he will spruik BrainChip’s technology to his new employer - as far as I can tell, there is no indication that Nio Robotics have already been exploring neuromorphic computing for their robots.
Nio Robotics is currently building Aru, a shape-shifting mobile robot for industrial environments, but they also say in their self-description on LinkedIn that they are “reinventing movement to create the first robotic assistant for homes”.
Something our CTO Tony Lewis, himself a robotics researcher, will likely find very intriguing.
Robotic maintenance and inspection with interactive capabilities solutions - Nio
Nio created an autonomous and polymorphic robot called Aru for industrial applications, providing advanced solutions to automate routine inspections, enhance operational efficiency, and maintain industrial infrastructure. (www.nio-robotics.com)
Watch Aru in action below: it climbs stairs, reshapes itself to avoid obstacles, opens doors, etc.
According to Athos Silicon Co-Founder and former Mercedes-Benz North America mSoC Chief Architect François Piednoël, Akida does not pass the minimum requirements for Level 3 or Level 4 automated driving.
#ai #artificialintelligence #ainews #aichips | AI & Robotics | 20 comments
Mercedes-Benz AG has launched Athos Silicon, a new chip company focused on developing energy-efficient chips for autonomous vehicles. Athos Silicon, based in Santa Clara, California, is a spin-off from Mercedes-Benz's Silicon Valley research arm and aims to create safer and more power-efficient... (www.linkedin.com)
Good question. I suppose you would need to ask François Piednoël directly whether what he means is possibly that there is an Akida-inherent problem, which would also concern the IP. (The BRN shareholder on LinkedIn was asking “Why isn’t Mercedes using Akida technology, for example?”, referring to Akida technology in general, not only to physical chips such as AKD1000 or AKD1500).
Apart from that, since Athos Silicon has so far not signed any IP deal with us, we can’t be in the first mSoC silicon, codenamed Polaris, that he was referring to in this recent video anyway:
ipXchange Interview with Athos Silicon Chief mSoC™ Architect Francois Piednoel
IpXchange spotlights Athos Silicon's mSoC™ as a safety-first unified control fabric that combines integrated redundancy, real-time voting, and energy-efficient execution to advance certified autonomy across vehicles, robotics, and mission-critical systems. (www.athossilicon.com)
Athos Silicon: Multiple System-on-Chip for Safe Autonomy
Athos Silicon’s Multiple System-on-Chip (mSoC) delivers functionally safe chiplet-based compute for autonomous driving and robotics. (ipxchange.tech)
Athos Silicon: Multiple System-on-Chip for Safe Autonomy
By Luke Forster
Published 1 October 2025
Building functional safety into compute for autonomy
Athos Silicon, a spin-out from Mercedes-Benz, is addressing one of the most pressing challenges in autonomous systems: how to eliminate the single point of failure in compute architectures. Unlike traditional monolithic processors that can collapse if a single component fails, Athos Silicon’s Multiple System-on-Chip (mSoC) integrates redundancy directly into silicon.
The result is a functionally safe processor platform designed to meet ISO 26262 and other standards required for safety-critical applications.
Why safety-first design is essential
Conventional computing platforms – with a CPU, GPU, and NPU working together – were never built for automotive safety. If a processor crashes or a transient error occurs, the entire system may fail. In a consumer PC this means a reboot; in a self-driving vehicle or industrial robot, it could mean disaster.
Athos Silicon has rethought this architecture from the ground up. By focusing on functional safety as a primary design constraint, its mSoC avoids the patchwork redundancy of external systems and instead bakes resilience into the hardware itself.
The mSoC architecture explained
Athos Silicon’s mSoC integrates multiple chiplets into one package, each containing CPUs, controllers, and memory. Instead of a single supervisor chip that itself could fail, the mSoC operates through a voting mechanism — what Athos calls a “silicon democracy.”
Each chiplet executes tasks in parallel, and their outputs are compared in real time. If one diverges from the others, it is overruled and reset. This ensures continuous operation without interruption and prevents cascading system failures.
By embedding this redundancy, Athos Silicon enables high-reliability computing suitable for Level 3 and Level 4 autonomy while maintaining predictable performance.
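As a toy software analogy of that voting scheme (my own sketch, not from the article — the mSoC does this in hardware, in real time):

# Toy majority-vote sketch of the "silicon democracy" idea: redundant
# lanes compute the same step, and a single diverging lane is outvoted.
from collections import Counter

def vote(lane_outputs):
    """Return the majority result and the indices of dissenting lanes."""
    winner, _ = Counter(lane_outputs).most_common(1)[0]
    dissenters = [i for i, out in enumerate(lane_outputs) if out != winner]
    return winner, dissenters

# Three lanes run the same control step; lane 2 takes a transient fault.
result, faulty = vote([0.42, 0.42, 0.97])
print(result, faulty)   # -> 0.42 [2]; lane 2 would be overruled and reset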
Power efficiency for EVs and robotics
Safety is not the only benefit. In electric vehicles, compute power directly affects range. Athos Silicon highlights that every 100 watts of compute load can reduce EV range by as much as 15 miles. By designing a chiplet system optimised for low-power efficiency, the mSoC reduces unnecessary energy consumption and makes autonomy more practical for battery-powered platforms.
From Mercedes-Benz R&D to startup scale
The technology behind Athos Silicon was incubated within Mercedes-Benz before the company was spun out to bring the platform to the wider market.
Its first silicon, codenamed Polaris, is designed to deliver Level 3 and Level 4 autonomous capability in a footprint comparable to current Level 2 hardware.
Working with chiplet-packaging partners, Athos Silicon has accelerated validation and plans to deliver silicon to early customers soon. With no competitors currently offering integrated voting redundancy in a chiplet-based compute platform, Athos Silicon is carving out a unique position in the AI ecosystem.
Applications beyond cars
While autonomous driving is the most visible use case, Athos Silicon’s architecture also applies to robotics, avionics, and even medical devices where safety and reliability are paramount. Any system requiring certifiable, functionally safe compute stands to benefit.
By combining chiplet redundancy, real-time voting, and safety-first design, Athos Silicon’s Multiple System-on-Chip may prove to be the missing hardware foundation for truly certifiable autonomy.
This is roughly what the Polaris mSoC will look like size-wise (watch from around 10:50 min):
According to François Piednoël, “Project mSoC” as such started in 2020 (still under Mercedes-Benz North America R&D).
Not sure of the exact date the interview was recorded, but given that Athos Silicon has existed as a Mercedes-Benz spin-off since April 2025, and in the video the company is said to be about four months old, it must have been recorded sometime between late July and early September.
So when François Piednoël says “In fact, there is silicon coming back shortly. By the end of the summer we’ll have the chiplets in hands” (from 9:06 min), this means they would have received them by now, if everything went according to plan. (“We think we are in good shape for a startup - getting your silicon after, you know, five six months (…) With no visible competition, by the way.”)
He also says they invented the IP.
Fortiss just uploaded the baseline pipeline on GitHub for the ESA Anomaly Detection Hackathon 2025.
Set up for deployment on Akida.
nc-fortiss/2025_ESA_anomaly (Public)
Baseline pipeline for ESA spacecraft telemetry anomaly detection. Includes full dataset loader, Akida-compatible CNN model, caching, normalization, and TensorBoard visualization.
Repository files (committed by Kannan-Priya): .gitignore, Dataloader.py, LICENSE, README.md, plot.py, preview.py, setup_akida_env.sh, train.py
2025_ESA_anomaly
Hackathon NC
Official baseline for ESA spacecraft telemetry anomaly classification, developed as an open starting point for the ESA Anomaly Detection Hackathon 2025.
This repository provides everything you need to get started — from data loading and preprocessing to model training, evaluation, and Akida-ready deployment.
Overview
Spacecraft generate enormous amounts of multichannel telemetry data — detecting anomalies in these signals early can prevent mission failures. This baseline uses deep learning on time-series data to detect anomalies in the ESA-Mission1 dataset, and is designed for:
- Satellite telemetry analysis
- Real-time anomaly detection
- BrainChip Akida neuromorphic deployment
Features
Dataset loader (dataloader_tf.py) – reads the 76 channel zip files, then merges, resamples, labels, and windows them (see the windowing sketch after this list)
Akida-friendly CNN model (TensorFlow 2.15)
Automatic caching and normalization
1-minute sampling rate and 14-year coverage
Class balancing and focal loss ready
TensorBoard visualizations
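Since windowing is the step of the loader most people ask about, here is a hypothetical sketch of slicing labelled telemetry into overlapping windows. The window length, stride, and "any anomalous sample flags the window" rule are my own illustrative choices, not taken from dataloader_tf.py:

# Hypothetical windowing sketch; parameters and the labelling rule are
# illustrative assumptions, not read from the repo's actual loader.
import numpy as np

def make_windows(series, labels, window=30, stride=10):
    """Slice (time, channels) telemetry into overlapping labelled windows."""
    X, y = [], []
    for start in range(0, len(series) - window + 1, stride):
        X.append(series[start:start + window])
        y.append(int(labels[start:start + window].any()))   # window label
    return np.stack(X), np.array(y)

# One hour of 1-minute, 76-channel telemetry with an injected anomaly:
telemetry = np.random.randn(60, 76)
flags = np.zeros(60, dtype=bool)
flags[40:45] = True
X, y = make_windows(telemetry, flags)
print(X.shape, y)   # (4, 30, 76) [0 0 1 1]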
Project structure
ESA-Anomaly-Baseline/
│
├── dataloader_tf.py    # Dataset loader & preprocessor
├── train.py            # Main training script
├── requirements.txt
├── LICENSE             # MIT License
├── logs/               # TensorBoard runs
├── ESA-Mission1/       # Dataset root
│   ├── channels/
│   ├── labels.csv
│   ├── anomaly_types.csv
│   ├── telecommands.csv
│   └── channels.csv
└── norm_stats.npz
Setup
1. Clone the repository and cd ESA-Anomaly-Baseline
2. Set up the Akida environment: bash setup_akida_env.sh
3. Place the ESA-Mission1 dataset in the ESA-Mission1/ folder (see project structure above).
4. Train: python train.py
5. Monitor training: tensorboard --logdir logs/fit --port 6006

GitHub - nc-fortiss/2025_ESA_anomaly: Baseline pipeline for ESA spacecraft telemetry anomaly detection. Includes full dataset loader, Akida-compatible CNN model, caching, normalization, and TensorBoard visualization. (github.com)
Contact: Priya Kannan, Kannan@fortiss.org – Neuromorphic Computing Group, fortiss GmbH (Munich)
Last year, a student team using Akida won the Munich Neuromorphic Hackathon, organised by neuroTUM (a student club based at TU München / Technical University of Munich for students interested in the intersection of neuroscience and engineering) and our partner fortiss (who to this day have never officially been acknowledged as a partner from our side, though).
Will Akida again help one of the teams to win this year’s challenge?!
The 2025 Munich Neuromorphic Hackathon will take place from 7-12 November.
“The teams will face interesting industry challenges posed by German Aerospace Center (DLR), Simi Reality Motion Systems and fortiss, working with Brain-inspired computing methods towards the most efficient neuromorphic processor.”
Simi Reality Motion Systems (part of the ZF Group) has been collaborating with fortiss on several projects, such as SpikingBody (“Neuromorphic AI meets tennis. Real-time action recognition implemented on Loihi 2”) and EMMANÜELA (AR/VR).
✉️ Applications open: Munich Neuromorphic Hackathon 💻⚙️🧠 For the third continuous year, we are excited to announce that, in collaboration with fortiss the… | neuroTUM
✉️ Applications open: Munich Neuromorphic Hackathon 💻⚙️🧠 For the third continuous year, we are excited to announce that, in collaboration with fortiss, the Munich Neuromorphic Hackathon 2025 will take place between 7-11th of November, with a break on the 9th. The teams will face interesting... (www.linkedin.com)
Neuromorphic Hackathon | neuroTUM
Join the revolution in brain-inspired computing at the Neuromorphic Hackathon. Build the future of AI with neuromorphic technologies. (neurotum.github.io)
Our partner fortiss (although the partnership has so far only been acknowledged by them, not by BrainChip, cf. https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-454015) is now also promoting the Munich Neuromorphic Hackathon 2025 (7-8, 10-12 November) on LinkedIn:
💡 Hack, Innovate, Network: Munich Neuromorphic Hackathon 2025 For the third consecutive year, in collaboration with neuroTUM, the Munich… | fortiss
💡 Hack, Innovate, Network: Munich Neuromorphic Hackathon 2025 For the third consecutive year, in collaboration with neuroTUM, the Munich Neuromorphic Hackathon 2025 will take place from November 7 to 12, with a break on November 9. ➡️ Teams will face challenging industry problems provided by... (www.linkedin.com)
More on Simi Reality Motion Systems in my tagged post.
Sponsors of the Munich Hackathon 2025 are the Fraunhofer Institut für Techno- und Wirtschaftsmathematik (ITWM) in Kaiserslautern - one of two Fraunhofer Institutes collaborating on the STANCE (Strategic Alliance for Neuromorphic Computing and Engineering) project - as well as gAIn - Next Generation AI Computing, a joint initiative between Ludwig-Maximilians-Universität Munich (LMU), Technical University of Dresden and Technical University of Munich (TUM), supported by the state governments of Bavaria and Saxony.
Which reminded me of this September 2024 post, when I spotted a Fraunhofer ITWM researcher liking a BrainChip post:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-435802
gAIn – Next Generation Computing for AI
www.futurecomputing.ai
BrainChip will also be at Edge AI Taipei 2025 on 11 November:
#edgeai #edgeaitaipei2025 #edgeaifoundation #networking #aicommunity #innovationecosystem #edgecomputing | EDGE AI FOUNDATION
Join the Edge AI Community Mixer — Nov 11 @ Sports Nation, Taipei! After a full day of keynotes, lightning talks, and workshops, it’s time to unwind, connect, and celebrate with the global #EdgeAI community. Meet leaders and innovators from: Advantech · AIZip · Alif Semiconductor · ambient... (www.linkedin.com)
#edgeai #taipei2025 #neuromorphiccomputing #akida #artificialintelligence #ondeviceai #lowpowerai #techeventstaipei #intelligentdevices #aicommunity | BrainChip
🌏 We’re heading to EDGE AI Taipei 2025! Join BrainChip this November 11–12 in Taipei with EDGE AI FOUNDATION as we showcase how Akida™ neuromorphic technology is powering the future of intelligent devices at the edge. Experience the next generation of on-device learning, ultra-low-power AI, and... (www.linkedin.com)