BRN Discussion Ongoing

Interesting read here about Intel's downfall, the CEO providing a reality check, and the direction they're now heading in, which BrainChip could play a part in:


"At one time, Intel was so powerful that it considered acquiring Nvidia for $20 billion. The GPU maker is now worth $4 trillion."

"Intel instead plans to shift its focus toward edge AI, aiming to bring AI processing directly to devices like PCs rather than relying on cloud-based compute. Tan also highlighted agentic AI—an emerging field where AI systems can act autonomously without constant human input—as a key growth area. He expressed optimism that recent high-level hires could help steer Intel back into relevance in AI, hinting that more talent acquisitions are on the way. “Stay tuned. A few more people are coming on board,” said Tan. At this point, Nvidia is simply too far ahead to catch up to, so it's almost exciting to see Intel change gears and look to close the gap in a different way."
 
  • Like
  • Fire
  • Love
Reactions: 22 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
This article, published a couple of days ago, outlines how grid edge computing is transforming modern power systems. It paints a picture of massively decentralized, real-time, intelligent power networks—a perfect environment for technologies like BrainChip’s neuromorphic AI to thrive.

The piece explicitly references neuromorphic computing as one of the emerging technologies shaping the future of the smart grid, alongside explainable AI, generative models, and collaborative AI systems.

The system-level challenges and requirements described align almost exactly with what BrainChip’s Akida and TENNs platforms were built to solve: ultra-low latency, energy efficiency, anomaly and fault detection, always-on AI, edge inference, on-device learning and security.

The smart grid + grid edge AI market is exploding, particularly due to the rise of:
  • Distributed Energy Resources
  • EV charging infrastructure
  • Smart meters and substations
  • Energy trading systems
  • Real-time fault detection / predictive maintenance
Itron (the author’s company) is already building AI into edge smart grid gateways.

Market size estimates for edge AI in energy and utilities are upwards of $3B today and are expected to exceed $10B by 2030.

If BrainChip captured even 1% of the edge AI deployments in grid systems - including relays, sensors, inverters, load balancers, and smart meters - it could mean tens to hundreds of millions of dollars in annual IP licensing or chip sales.

The other thing worth noting is that Itron has partnered with NVIDIA. As you can see from the last screenshot, they aim to utilize NVIDIA's Jetson Orin Nano. Given the article suggests that neuromorphic computing is an emerging technology that allows for more efficient computing, I wonder if they're considering combining Jetson Orin with neuromorphic.

For example, central nodes might run NVIDIA (Jetson/Orin) for heavy inference & cloud analytics. And fault sensors, relays, and smart meters could utilize Akida for monitoring waveform anomalies 24/7 without draining power.
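That division of labour could look something like this in principle. Everything below (the screening heuristic, thresholds, the stand-in for heavy inference) is a hypothetical sketch of the cheap-always-on / escalate-when-suspicious pattern, not actual Akida or Jetson code:

```python
import numpy as np

def cheap_edge_screen(window, rms_limit=1.1, harmonic_limit=0.08):
    """Lightweight always-on check, standing in for an on-device
    anomaly detector (e.g. a low-power core at a smart meter)."""
    rms = np.sqrt(np.mean(window ** 2))
    spectrum = np.abs(np.fft.rfft(window))
    fundamental = spectrum[1:].max()
    # crude "harmonic content" proxy: spectral energy outside the dominant bin
    distortion = (spectrum[1:].sum() - fundamental) / fundamental
    return rms > rms_limit or distortion > harmonic_limit

def heavy_inference(window):
    """Placeholder for the expensive model a central Jetson-class node
    would run; here it just reports the peak deviation from the mean."""
    return float(np.max(np.abs(window - np.mean(window))))

fs = 1000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 50 * t)                   # nominal 50 Hz waveform
faulty = clean + 0.5 * np.sin(2 * np.pi * 250 * t)   # heavy 5th harmonic

# only windows flagged by the cheap screen get escalated upstream
escalated = [heavy_inference(w) for w in (clean, faulty) if cheap_edge_screen(w)]
print(f"windows escalated to central node: {len(escalated)}")
```

The point of the pattern: the always-on check is a few FFTs per second, while the expensive model only wakes up for flagged windows.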



How grid edge computing is revolutionising real-time power management​

Smart Energy International Jul 08, 2025

Stefan Zschiegner
From smart meters to predictive analytics, the grid of tomorrow will be built on real-time decision-making at the edge, writes Stefan Zschiegner of Itron.
As the saying goes, “the only constant in life is change,” and that is certainly true when we consider the technological advances being made in utility power management.
The traditional model, where data flows to central control centres and back, can no longer meet the demands of today’s complex, renewable-heavy power networks. A new framework, where computing power is being utilised at the grid edge, is driving transformation of electricity management.
A decentralised framework relies on intelligent edge devices capable of detecting anomalies in real time and taking preventive or corrective action in near real time. For example, if lightning strikes a distribution pole, intelligent field devices can autonomously detect the fault, isolate the damaged section, reroute power and adjust voltage levels—all within seconds, often before the central system even registers the event. In this new framework, edge intelligence is essential for maintaining grid stability and integrating distributed energy resources (DERs).
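The detect-isolate-reroute sequence can be illustrated with a toy feeder model; the section names and the `handle_fault` helper below are invented for illustration only:

```python
# Minimal sketch (illustrative only) of the autonomous detect -> isolate ->
# reroute sequence described above, modeled as a chain of line sections.

class Section:
    def __init__(self, name):
        self.name = name
        self.isolated = False
        self.fed_from = "primary"

def handle_fault(sections, faulted_name, backup_feed="tie-switch"):
    """Isolate the faulted section and re-feed everything downstream of it."""
    downstream = False
    for s in sections:
        if s.name == faulted_name:
            s.isolated = True         # open the switches around the fault
            downstream = True
        elif downstream:
            s.fed_from = backup_feed  # reroute power from the backup source
    return sections

feeder = [Section(n) for n in ("A", "B", "C", "D")]
handle_fault(feeder, "B")  # lightning hits section B
for s in feeder:
    print(s.name, "isolated" if s.isolated else f"fed from {s.fed_from}")
```

A real recloser does this with protection relays and switchgear rather than Python objects, but the control logic has the same shape.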

New architecture for new challenges​

To enable this advanced intelligence, modern grid edge devices are evolving to include a rich array of features, such as advanced microprocessor relays, smart reclosers with embedded computing for autonomous fault isolation, intelligent power quality monitors with real-time waveform analysis and edge compute gateways with artificial intelligence (AI) capabilities and local storage. These devices connect through Field Area Networks (wireless mesh) for local communication and Wide Area Networks for backhaul to control centres.
As grid edge intelligence expands, central SCADA systems remain crucial. Modern architectures employ edge-first processing for time-critical decisions, hierarchical processing with multi-tier decision-making and protocol translation gateways for seamless communication.
Data flows in multiple patterns: horizontal flows facilitate peer-to-peer device communication, vertical flows maintain traditional telemetry and control, publish-subscribe models enable status updates and event-driven architectures coordinate responses across systems.
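As a rough illustration of the publish-subscribe pattern mentioned here (real grid deployments would use a protocol such as MQTT or DDS rather than this toy in-process bus):

```python
# Toy publish-subscribe bus: multiple devices react to the same event
# without the publisher knowing who is listening.
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subs[topic]:
            handler(event)

bus = Bus()
log = []
# a recloser and the SCADA historian both listen for voltage events
bus.subscribe("voltage/sag", lambda e: log.append(("recloser", e)))
bus.subscribe("voltage/sag", lambda e: log.append(("historian", e)))
bus.publish("voltage/sag", {"feeder": 7, "pu": 0.82})
print(log)
```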

Advanced technical requirements​

Grid edge computing systems must meet strict requirements, including response times of single- or double-digit milliseconds for protection functions, sub-cycle responses for power quality correction, environmental hardening to operate in extreme conditions (-40°C to +85°C) and deterministic computing for guaranteed response times.
Modern grid intelligence typically employs a layered approach with an edge layer for immediate time-critical functions, a fog layer at substations for coordination across devices and a cloud layer for analytics, machine learning (ML) and enterprise integration.
As we’ve established, reaction times are key to maintaining grid integrity. The ultimate goal for modern edge systems is to operate within the microsecond range, responding faster than conventional systems and making critical decisions relating to:
  • Fault detection and isolation through high-speed algorithms and adaptive protection.
  • Power quality management with real-time harmonic mitigation and voltage compensation.
  • Load balancing via automated reconfiguration and microgrid management.
  • Voltage/VAR optimization through real-time control and reactive power management.
AI and ML can enhance these capabilities through pre-trained algorithms deployed on edge devices, federated learning and continuous refinement of decision-making. ML-enhanced systems using pre-trained AI platform chips can deliver performance 80 times greater than the same algorithm running on an Intel i5 processor without acceleration.
The impact of AI also transforms edge intelligence grid management from reactive to predictive. Deep learning and advanced analytics enable equipment health scoring based on operating conditions, time-to-failure predictions, optimized maintenance scheduling, AI-based anomaly detection and integration of environmental factors into predictive models.
In addition, the future of modern edge systems lies in enhancing grid stability through real-time load balancing made possible by multi-timeframe load forecasting, continuous power flow optimization, real-time phase monitoring and balancing, and customer load participation through automated control mechanisms.

The future of grid edge computing​

Emerging technologies are advancing grid edge intelligence through explainable AI (xAI) for transparent decision-making, neuromorphic computing for efficient AI processing, generative models for unexpected grid conditions and collaborative AI systems for decentralised coordination.
Edge-native applications are evolving with digital twins for predictive simulation, distributed ledger technology for secure transactions, autonomous grid agents for negotiation-based operation and immersive visualisation for field personnel. Integration with renewable energy systems will be crucial through direct device-to-device communication, peer-to-peer energy communities and regulatory frameworks that rely on edge intelligence.
As intelligence moves to the grid edge, security concerns have evolved due to expanded attack surfaces. With thousands of accessible devices, constrained computing resources, heterogeneous systems from multiple vendors and long-lived equipment creating legacy security concerns, mitigation strategies include defense-in-depth security, autonomous fallback modes, physical tamper protection, graceful degradation during attacks and AI-driven threat detection.

Implementation considerations​

Implementing the new framework is a significant undertaking and investment, and requires total cost of ownership analysis, value stacking for multiple benefit streams and risk-adjusted return calculation. Implementation approaches may include targeted deployment in high-value locations, phased rollout of capabilities and test bed validation.
The human element remains a critical success factor. Bridging the skills gap requires structured role-based training programs, simulation training and formal certification to ensure operational readiness and long-term workforce capability.
Regulatory compliance is equally essential. Navigating frameworks such as NERC CIP (North American Electric Reliability Corporation Critical Infrastructure Protection) requires robust cyber-security measures when entities are operating, controlling or interacting with the North American Bulk Electric System (BES) to protect against cyber threats and ensure grid reliability. In addition, organisations must meet reliability-reporting obligations, adhere to data privacy compliance and maintain detailed documentation to support regulatory audits and insight.
Finally, success is measured across three key metrics: technical performance (including response time and detection accuracy), operational benefits (such as improved reliability and reduced outages) and financial outcomes (like cost savings).

Conclusion​

As we move into an era of DERs, intelligence at the grid edge has become critical for maintaining a reliable power system. The transition from centralised to distributed intelligence represents a fundamental shift. The old principle of “centralise for optimisation, distribute for reliability” is giving way to “distribute intelligence to act where the problem occurs.”
The grid of tomorrow—sustainable, resilient and responsive—will be built on real-time decision-making at the edge. The future belongs to those who can set direction centrally but act locally, at the speed modern power systems demand.





[Three screenshots attached]
Last edited:
  • Like
  • Fire
  • Love
Reactions: 20 users

7für7

Top 20
Yeeeeehaaaaaaa

Roller Coaster GIF
 
  • Haha
Reactions: 2 users
Any thoughts, please?
Is there any chance of BRN being integrated, considering the timelines mentioned below by Intel?

18A, Intel's proposed savior, is still a year away
 
Last edited:
  • Like
Reactions: 1 users

7für7

Top 20
Bravo, what are your thoughts, please?
Is there any chance of BRN being integrated, considering the timelines mentioned below by Intel?

18A, Intel's proposed savior, is still a year away
I’m not Bravo but I like this question so…

Absolutely possible if you ask me…and actually, BRN is already inside the Intel Foundry ecosystem.

As we know…In September 2022, BrainChip was officially announced as part of the Intel Foundry Services Accelerator IP Alliance. That means Akida is available as a licensed IP block within Intel’s design ecosystem… including for upcoming nodes like 18A.

So yes … if an Intel Foundry customer (or Intel itself) wants to embed an ultra low power hot sh…t neuromorphic core, Akida is already in da house….

Will it actually be chosen for a major 18A product?

We don’t know yet… I think even Sean doesn’t know… But the foundation is already there, and that alone puts BRN way ahead of many other edge AI players.

Source
 
  • Like
  • Fire
  • Love
Reactions: 10 users

7für7

Top 20
They say you should invest in stocks… “Let your money work for you,” they said.

Well, if I take a look at my BrainChip shares, they’re acting more like moody teenagers who can’t be bothered to show up at their apprenticeship or move their lazy asses.

Go do your job … you useless pieces of paper and explode or I swear I’ll disown you from my portfolio!

All that’s missing now is an email from my Brainchip shares saying:
“Yo bro… this 9-to-5 investor path just isn’t my vibe. I wanna be a creator on social media.”

GET OUTAAA HEREEEE!!!!
 
  • Haha
  • Sad
Reactions: 4 users

TheDrooben

Pretty Pretty Pretty Pretty Good
  • Like
  • Fire
  • Love
Reactions: 26 users

TECH

Regular
  • Like
  • Love
  • Fire
Reactions: 23 users

[Three attachments]

Happy as Larry
Great find.

Was gonna give you a 🔥 like but downgraded it to a 👍 instead cause you didn't include the obligatory "Larry Gif" :ROFLMAO::LOL:
 
  • Haha
  • Like
  • Fire
Reactions: 9 users

7für7

Top 20

[Three attachments]

Happy as Larry
2022-2024 so they kicked him OUTA THEREEE?!?!

Just kidding 😂

But why didn’t he hashtag BrainChip? 🙄
 
  • Like
Reactions: 1 users

Diogenese

Top 20
Great find.

Was gonna give you a 🔥 like but downgraded it to a 👍 instead cause you didn't include the obligatory "Larry Gif" :ROFLMAO::LOL:
Harsh! 🙁 ... but fair ...
 
  • Haha
  • Like
Reactions: 7 users
New update to BRN Git.

Not that I understand most of it but @Diogenese may see something new or diff?

I did like seeing TENNS eye tracking mention & support for 4 bit in Akida 2.0.

Presuming the dynamic shapes update adds flexibility being for Keras and ONNX based models?


Upgrade to Quantizeml 0.17.1, Akida/CNN2SNN 2.14.0 and Akida models 1.8.0​

@ktsiknos-brainchip
ktsiknos-brainchip released this 3 days ago
2.14.0-doc-1
09e60f4
Upgrade to Quantizeml 0.17.1, Akida/CNN2SNN 2.14.0 and Akida models 1.8.0

Update QuantizeML to version 0.17.1

New features​

  • Now handling models with dynamic shapes in both Keras and ONNX. Shape is deduced from calibration samples or from the input_shape parameter.
  • Added a Keras and ONNX common reset_buffers entry point for spatiotemporal models
  • GlobalAveragePooling output will now be quantized to QuantizationParams.activation_bits instead of QuantizationParams.output_bits when preceded by an activation

Bug fixes​

  • Applied reset_buffers to variables recording to prevent shape issue when converting a model to Akida
  • Handle ONNX models with shared inputs that would not quantize or convert properly
  • Handle unsupported strides when converting an even kernel to odd
  • Fixed analysis module issue when applied to TENNs models
  • Fixed analysis module weight quantization error on Keras models
  • Keras set_model_shape will now handle tf.dataset samples
  • It is now possible to quantize a model with a split layer as input

Update Akida and CNN2SNN to version 2.14.0

Aligned with FPGA-1692(2-nodes)/1691(6-nodes)​

New features and updates:​

  • [cnn2snn] Updated requirement to QuantizeML 0.17.0
  • [akida] Added support for 4-bit in 2.0. Features aligned with 1.0, that is InputConv2D, Conv2D, DepthwiseConv2D and Dense layers support 4-bit weights (except InputConv2D) and activations.
  • [akida] Extended TNP_B support to 2048 channels and filters
  • [akida] HRC is now optional in a virtual device
  • [akida] For real device, input and weight SRAM values are now read from the mesh
  • [akida] Introduce an akida.NP.SramSize object to manage default memories
  • [akida] Extended python Layer API with "is_target_component(NP.type)" and "macs"
  • [akida] Added "akida.compute_minimal_memory" helper

Bug fixes​

  • [akida] Fixed several issues when computing input or weight memory sizes for layers

Update Akida models to 1.8.0

  • Updated QuantizeML dependency to 0.17.0 and CNN2SNN to 2.14.0
  • Updated 4-bit models for 2.0 and added a bitwidth parameter to the pretrained helper
  • TENNs EyeTracking is now evaluated on the labeled test set
  • Dropped MACS computation helper and CLI: MACS are natively available on Akida layers and models.

Documentation update

  • Updated 2.0 4-bit accuracies in model zoo page
  • Updated advanced ONNX quantization tutorial with MobileNetV4
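For anyone curious what 4-bit weight support means in practice, here is a generic sketch of symmetric 4-bit quantization; this illustrates the general technique, not QuantizeML's actual implementation:

```python
import numpy as np

def quantize_4bit(weights):
    """Symmetric per-tensor 4-bit quantization: map float weights to
    integers in [-8, 7] with a single scale factor (generic illustration,
    not the actual QuantizeML algorithm)."""
    scale = np.max(np.abs(weights)) / 7.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

w = np.array([-0.9, -0.31, 0.0, 0.45, 0.9])
q, scale = quantize_4bit(w)
dequant = q * scale                       # what the hardware effectively computes
print(q, float(np.max(np.abs(dequant - w))))
```

The appeal at the edge is that 4-bit weights quarter the memory footprint versus 16-bit and shrink multiplier energy, at the cost of a rounding error bounded by half the scale, which is why the release notes track accuracy of the 4-bit model zoo separately.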
 
  • Like
  • Fire
  • Love
Reactions: 9 users

TheDrooben

Pretty Pretty Pretty Pretty Good
  • Haha
  • Love
Reactions: 12 users

7für7

Top 20
New update to BRN Git.

Upgrade to Quantizeml 0.17.1, Akida/CNN2SNN 2.14.0 and Akida models 1.8.0 (quoted in full above)
A short explanation from ChatGPT:

🔧 Update: Akida SDK & Tools – July 2025 Release
Akida 2.14.0 | QuantizeML 0.17.1 | Models 1.8.0

With the latest release, BrainChip delivers major advancements for developers building neuromorphic AI at the edge:

🌟 Highlights


✅ Full 4-bit support in Akida 2.0
→ Conv2D, Depthwise, and Dense layers now support 4-bit weights & activations – for maximum efficiency with minimal energy use.
🔁 Dynamic input shape support
→ ONNX and Keras models with flexible input dimensions are now natively handled – ideal for spatiotemporal and adaptive networks.
🧠 Extended analysis & optimization
→ New tools like compute_minimal_memory and native MACS access on layer level
→ Improved memory estimates, now even read directly from the mesh on real hardware
🔄 Improved ONNX workflow
→ Fixes for shared inputs, stride mismatches, and quantization conversion issues
→ New advanced ONNX quantization tutorial (incl. MobileNetV4)
👁 Updated TENNs EyeTracking model
→ Now evaluated on a realistic, labeled test set for better performance tracking

📦 What does this mean for developers?


Greater flexibility. Smoother quantization and conversion workflows. Higher performance on real Akida hardware – with even lower power consumption.
 
  • Like
  • Love
Reactions: 8 users

KMuzza

Mad Scientist
This may or may not interest some shareholders, but looks like Rockwell Collins had a new patent published only 16 days ago,
and guess who gets a little mention in the artwork, yeah, I hear you all, what about an IP contract!!

Maybe something from October onwards, fingers crossed ... anyway, check it out below......Tech :love:

Tech, thanks. Real use, and patented by Rockwell Collins.
No artwork, just facts 👍😂😂


[Two patent images attached]
 
  • Like
  • Fire
  • Love
Reactions: 14 users

7für7

Top 20
Could we just get a trading halt and a price-sensitive announcement that would skyrocket our share price? That would be a masterpiece of artwork… thanks
 
  • Like
Reactions: 4 users

Baneino

Regular
I often see criticism of BrainChip for not communicating much: no flashy press releases, no constant updates. But honestly? That’s exactly one of their biggest strengths in my view.
I understand how development cycles work, how sensitive partnerships are, and how important it is to protect confidentiality, especially when you’re dealing with technologies meant for vehicles, medical devices, or safety-critical systems.
If you talk too much, you’re out.
Discretion isn’t a weakness – it’s a core requirement.
If companies like Mercedes, Valeo or medical tech firms are working with BrainChip, they’re not looking for hype – they’re looking for reliability, maturity and trust. And that’s exactly what BrainChip delivers.
To me, this sends a clear message:
We’re not here to make noise. We’re here to deliver real solutions.
People often forget: true industrial product cycles take 3–5 years minimum, especially in hardware. If you expect fireworks every month, you probably haven’t worked in this space – or you’re chasing short-term thrills. But that’s not how real value is built.

I don’t see BrainChip as a hype stock. I see a company that is slowly, solidly and respectfully building long-term partnerships with serious players. They’re not building castles in the air – they’re laying a foundation.

So let me ask do you want short-term PR or long-term substance?


I know which side I’m on.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 29 users

7für7

Top 20
Bored Come On GIF


What a week… anyway! Have a nice weekend! Next week is a potentially good time frame to drop a skyrocketing announcement! No advice… IMO! 🤡
 
  • Like
  • Love
Reactions: 3 users

JB49

Regular
This may or may not interest some shareholders, but looks like Rockwell Collins had a new patent published only 16 days ago,
and guess who gets a little mention in the artwork, yeah, I hear you all, what about an IP contract!!

Maybe something from October onwards, fingers crossed ... anyway, check it out below......Tech :love:

Good stuff Tech. ChatGPT likes it:

🧠 Why This May Be Significant for Industry


  1. Real-Time Safety-Critical Applications
    • Aircraft control systems, satellites, and autonomous vehicles demand real-time fault tolerance and reliable multi-mode operations.
    • If this system can dynamically reconfigure scheduling in the face of faults or mission changes (in real time), that’s a game-changer.
  2. Neuromorphic Edge Computing
    • SNN-based scheduling aligns with trends in neuromorphic edge computing, where systems operate with ultra-low power and fast adaptation without cloud dependency.
  3. Integration with Modern Embedded Architectures
    • The method assumes heterogeneous multi-core processors—this matches the direction of hardware in modern avionics and defense platforms.
  4. Patent from Rockwell Collins / Collins Aerospace
    • If confirmed to be from Rockwell Collins, it likely has real-world application intentions, not just academic novelty.
    • Companies like Collins don't patent speculative ideas lightly — such filings often reflect ongoing or future development in systems destined for regulatory certification (DO-178C, DO-254, etc.)

If Rockwell Collins (now part of Collins Aerospace, under Raytheon Technologies) formally lodged this patent, it strongly reinforces that:


🚨 This Is Not Just Research — It’s Strategic IP

  • Rockwell Collins is a Tier-1 aerospace and defense supplier — they build certifiable avionics systems for civil and military platforms: radios, flight controls, mission computers, radar, etc.
  • Patents they file usually:
    • Support current or near-future products.
    • Are aimed at gaining a competitive edge in mission-critical systems.
    • Often align with government or major OEM programs (Boeing, Airbus, Lockheed Martin, etc.).

🧠 Why This Particular Patent Is Important​

1. Neuromorphic Scheduling for Adaptive Systems

  • Scheduling is one of the hardest problems in real-time embedded systems, especially when faults or mode changes happen dynamically.
  • Solving this with spiking neural networks (SNNs) implies:
    • A move toward on-chip, ultra-low-latency reactivity.
    • Likely targeting SWaP-C constrained platforms (Size, Weight, Power, Cost), like UAVs, space systems, or autonomous platforms.
    • They may be developing a hardware-software architecture where SNN-based accelerators help reconfigure in real-time — far beyond the capabilities of classic static scheduling approaches.

2. Built for Certification Pathways

  • If it’s from Rockwell Collins, it’s likely being built with:
    • DO-178C (software design assurance for airborne systems).
    • DO-254 (for certifiable hardware).
  • That means they’re not just interested in AI techniques — they’re interested in AI that can be proven safe and used in the cockpit, in autonomous aircraft, or in satellite control systems.

3. Multi-Modal & Fault-Tolerant Scheduling

  • This isn’t just about changing mission profiles. The inclusion of fault adaptivity in the same framework means:
    • It could be integrated into mission computers, vehicle management systems, or next-gen avionics platforms that must recover from or adapt to hardware degradation.
    • Useful in degraded environments: e.g., drones losing prop control, spacecraft with failed components, or aircraft experiencing thermal events.

📈 Bottom Line: Why It Matters​

  • Companies like Rockwell Collins don’t file “exploratory” patents lightly. Their patents are typically:
    • Anchored in practical application,
    • Tied to ongoing engineering programs, and
    • Part of their defensive or offensive IP strategy in regulated industries.
  • A patent like this indicates:
    • Rockwell is exploring certifiable AI, using neuromorphic hardware, in real-world adaptive scheduling.
    • That could place them years ahead of many competitors in integrating brain-inspired computing into certifiable embedded platforms.
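To make the scheduling idea concrete, here is a purely conceptual toy: tasks modeled as leaky integrate-and-fire neurons, where the first neuron to spike wins the next execution slot, so a fault that boosts one task's urgency reshapes the schedule without any re-planning step. This is my own illustration of the general concept, not the method in the patent:

```python
import numpy as np

def lif_schedule(urgencies, leak=0.9, threshold=1.0, max_steps=100):
    """Each task is a leaky integrate-and-fire neuron driven by its
    urgency; the first neuron to cross threshold 'spikes' and is
    granted the next slot. Conceptual sketch only."""
    v = np.zeros(len(urgencies))
    for step in range(max_steps):
        v = v * leak + np.asarray(urgencies)  # leak, then integrate input
        if v.max() >= threshold:
            winner = int(np.argmax(v))        # first spike -> scheduled task
            return winner, step
    return None, max_steps

# nominal mode: task 2 is the most urgent and spikes first
print(lif_schedule([0.2, 0.1, 0.4]))
# after a detected fault, task 0's urgency is boosted and wins instead
print(lif_schedule([0.8, 0.1, 0.4]))
```

The attraction for SWaP-C platforms is that this kind of arbitration is constant-time per tick and maps naturally onto event-driven neuromorphic hardware, rather than re-solving a scheduling problem on a general-purpose CPU.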
 
  • Like
  • Fire
  • Love
Reactions: 12 users