BRN Discussion Ongoing

FF

What fascinates me is that AKIDA TAG is running the AKD1500 and being promoted across the same applications as the AKIDA Pico.

AKIDA Pico the size of a micro dot.

AKIDA TAG the size of a Smart Watch.

AKIDA Pico can be built in one- through six-node configurations, is available as IP, and can have expanded memory.

The AKD1500 is also available as IP and can be configured with more or fewer nodes; that is the beauty of the AKIDA neural fabric.

So why am I fascinated? Because between them, the AKD1500 and AKIDA Pico cover just about any use case one could imagine in the wearables and industrial spaces.

BrainChip then has AKIDA 2.0 and the AKD2500 to fill any gaps requiring TENNs above six nodes, up to 256 nodes, while the AKD1000 at 80 nodes can serve robotics, cyber security, post-quantum cyber security, hive-mind applications and beyond-Earth applications with rad-hard GRAIN.

The sudden reality is that BrainChip can service ubiquitous market opportunities, and with AKIDA 3.0 it is demonstrating that it is now a one-stop shop.
 
Not sure if anyone has posted this patent released early Feb 2026 from TATA CONSULTANCY SERVICES LTD.

Akida 1000 is mentioned numerous times throughout the artwork. Gee, our first NSoC was/is good, and yes, I know it has certainly been expanded upon since and continues to be.

Asking AI what Tata would use this patent for

The patent US20260037783A1 refers to advancements in the Akida neural processor architecture, a neuromorphic computing system designed by BrainChip Holdings. Neuromorphic engineering, a field pioneered by Carver Mead, seeks to emulate the neural structures and processing methods of the biological brain.[1] Unlike traditional Von Neumann architectures that separate processing and memory, Akida utilizes an "Event-Based" approach where information is processed only when spikes (events) occur, significantly reducing power consumption.[2] This specific patent focuses on the efficient implementation of neural networks, particularly for edge computing applications where low latency and high energy efficiency are paramount.[3]

According to www.iAsk.Ai - Ask AI:

Technical Applications and Use Cases

The primary use for the technology described in the patent is the deployment of Spiking Neural Networks (SNNs) and accelerated Convolutional Neural Networks (CNNs) on hardware that functions at the "edge" of the internet.[4] In biological systems, neurons communicate via discrete electrical impulses; the Akida processor mimics this by using a mesh of processing nodes that communicate via a packetized protocol, allowing for massive parallelism.[5] This is particularly useful for real-time sensory processing, such as vision, sound, and vibration analysis, without the need for constant cloud connectivity.[6]
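The event-driven idea above is easy to sketch in code. The following is a hypothetical illustration (not BrainChip's actual protocol or API, and all names here are made up): a layer only touches the weights of inputs that actually spiked, so the work scales with the number of events rather than the size of the layer, and for binary spikes the multiplies disappear entirely.

```python
import numpy as np

# Hypothetical sketch of event-driven processing (not BrainChip's actual
# protocol): only inputs that spiked contribute, and for binary spikes the
# weighted sum reduces to pure additions.

rng = np.random.default_rng(0)
weights = rng.integers(-4, 5, size=(64, 32))   # 64 inputs -> 32 neurons
spikes = (rng.random(64) < 0.1).astype(int)    # sparse binary input, ~10% active

# Dense view: every input/weight pair costs a multiply-accumulate (MAC).
dense_out = spikes @ weights                   # 64 * 32 = 2048 MACs

# Event view: iterate only over spiking inputs and accumulate their rows.
event_out = np.zeros(32, dtype=weights.dtype)
adds = 0
for i in np.flatnonzero(spikes):               # only the active events
    event_out += weights[i]                    # additions only, no multiplies
    adds += weights.shape[1]

assert (dense_out == event_out).all()          # same result, far fewer ops
print(f"{adds} additions instead of {64 * 32} MACs")
```

With roughly 10% of inputs active, only about a tenth of the weight rows are ever touched, which is where the power savings claimed for event-based hardware come from.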

Industrial and Commercial Implementation

  1. Automotive Systems: The technology is used for in-cabin monitoring, gesture recognition, and autonomous driving assistance. By processing visual data locally, the system can react faster than cloud-based AI.[7]
  2. Internet of Things (IoT): Akida's low power profile allows for "Always-On" battery-operated devices. It can perform keyword spotting or anomaly detection in industrial machinery by learning "on-chip" without requiring extensive retraining on a server.[8] [9]
  3. Medical Diagnostics: The patent's methods for efficient data processing enable wearable devices to monitor vitals and detect arrhythmias or other health markers in real-time using minimal energy.[10]
  4. Security and Surveillance: The hardware can be used for facial recognition and object tracking in smart cameras, ensuring privacy by keeping data on the device rather than transmitting it.[11]

Computational Efficiency and On-Chip Learning

A critical feature of the Akida architecture highlighted in academic literature is its ability to perform One-Shot Learning.[12] Traditional deep learning requires thousands of examples and backpropagation, which is computationally expensive. Akida utilizes biologically inspired learning rules, such as Simplified Spike-Timing-Dependent Plasticity (STDP), allowing the hardware to learn new patterns from just a few examples directly at the edge.[13] The computational efficiency of these operations is often expressed as a reduction in Multiply-Accumulate (MAC) operations, since spiking architectures primarily use additions rather than multiplications.
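As a loose analogy for the simplified STDP rule described above (this is an illustration only, not the actual Akida learning rule; the layer sizes and update rate are made up), a winner-take-all layer can imprint a new pattern from a single presentation by pulling the winning neuron's weights toward the input that made it fire:

```python
import numpy as np

# Loose analogy for a simplified STDP-style rule (an illustration, not the
# actual Akida learning rule): the neuron that fires for a pattern has its
# synapses pulled toward that pattern, imprinting it in one shot.

rng = np.random.default_rng(1)
n_inputs, n_neurons = 16, 4
W = rng.random((n_neurons, n_inputs)) * 0.1    # small random initial weights

def present(pattern, learn=True, rate=1.0):
    """Winner-take-all forward pass with an optional one-shot update."""
    winner = int(np.argmax(W @ pattern))
    if learn:
        # Strengthen synapses from active inputs, weaken those from silent ones.
        W[winner] += rate * (pattern - W[winner])
    return winner

pattern_a = (np.arange(n_inputs) < 8).astype(float)   # first half of inputs active
neuron_a = present(pattern_a)                         # a single presentation imprints it
assert present(pattern_a, learn=False) == neuron_a    # recalled reliably afterwards
```

After one presentation the same pattern reliably recruits the same neuron, which is the essence of one-shot, on-chip learning: no server round-trip and no backpropagation over thousands of examples.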
 

itsol4605

Regular
Can I ask: Tata has a patent that mentions Akida 1000, is this correct?
If not, fair enough.
If it is, why hasn't Tata been signed up, or why isn't it common knowledge that they are using our product and that royalties will come?
 

Boab

I wish I could paint like Vincent
Forbes

From the article.

Thanks @itsol4605
Screen Shot 2026-03-15 at 3.15.13 pm.png
 

So why isn't there an agreement? Why aren't the partnerships explained more fully, in terms of their true worth and what they represent?
 

Diogenese

Top 20
Gary Marcus has been a long term critic of LLMs.

This is an interesting interview in which he addresses hallucinations (13 minute mark), among other things.

Gary Marcus on the Massive Problems Facing AI & LLM Scaling | The Real Eisman Playbook Episode 42

This reaffirms the importance of BRN's Provenance feature:

Provenance Networks: End-to-End Exemplar-Based Explainability


Ali Kayyam, Anusha Madan Gopal & M. Anthony Lewis

BrainChip Inc., 23041 Avenida De La Carlota, Suite 250, Laguna Hills, CA 92653, USA
{agopal,akayyam}@brainchip.com, tlewis@brainchip.com

We introduce provenance networks, a novel class of neural models designed to provide end-to-end, training-data-driven explainability. Unlike conventional post-hoc methods, provenance networks learn to link each prediction directly to its supporting training examples as part of the model’s normal operation, embedding interpretability into the architecture itself. Conceptually, the model operates similarly to a learned KNN, where each output is justified by concrete exemplars weighted by relevance in the feature space. This approach facilitates systematic investigations of the trade-off between memorization and generalization, enables verification of whether a given input was included in the training set, aids in the detection of mislabeled or anomalous data points, enhances resilience to input perturbations, and supports the identification of similar inputs contributing to the generation of a new data point. By jointly optimizing the primary task and the explainability objective, provenance networks offer insights into model behavior that traditional deep networks cannot provide. While the model introduces additional computational cost and currently scales to moderately sized datasets, it provides a complementary approach to existing explainability techniques. In particular, it addresses critical challenges in modern deep learning, including model opaqueness, hallucination, and the assignment of credit to data contributors, thereby improving transparency, robustness, and trustworthiness in neural models.
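The "learned KNN" behaviour the abstract describes can be approximated with a plain nearest-neighbour sketch (illustrative only; the paper trains this index jointly with the main task, whereas here raw feature distances stand in for the learned relevance, and the clusters and function names are made up): each prediction comes back with the concrete training exemplars that support it, weighted by relevance.

```python
import numpy as np

# Illustrative KNN-style sketch of exemplar-based provenance (the paper
# learns this jointly with the main task; here raw feature distances stand
# in for the learned relevance).

rng = np.random.default_rng(2)
train_x = np.vstack([rng.normal(0.0, 0.5, (20, 2)),   # class 0 cluster near (0, 0)
                     rng.normal(3.0, 0.5, (20, 2))])  # class 1 cluster near (3, 3)
train_y = np.array([0] * 20 + [1] * 20)

def predict_with_provenance(x, k=3):
    """Return (label, supporting exemplar indices, relevance weights)."""
    dists = np.linalg.norm(train_x - x, axis=1)
    support = np.argsort(dists)[:k]                   # nearest training exemplars
    weights = 1.0 / (dists[support] + 1e-9)           # closer => more relevant
    label = int(np.bincount(train_y[support], weights=weights).argmax())
    return label, support, weights / weights.sum()

label, support, relevance = predict_with_provenance(np.array([2.9, 3.1]))
# The prediction arrives with the concrete examples that justify it.
assert label == 1 and all(train_y[i] == 1 for i in support)
```

In the paper this index lives inside the network itself, so exemplar attribution is learned end-to-end rather than computed post hoc, which is what lets it address hallucination and data-credit assignment.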





6 Discussion and Conclusion

Provenance networks are orthogonal to existing explainability literature. They learn a representation that not only separates classes but also distinguishes individual samples, leading to a better-organized latent space and providing transparency into model decisions.
Provenance networks are relevant to a variety of fields, from intellectual property protection and security to critical applications like healthcare. They enable the tracking of training data, which can help verify copyright, detect attacks like data poisoning, identify outliers, and ensure the reliability of AI systems. In medical imaging, such provenance could assist in identifying dataset biases—such as models relying on spurious hospital-specific artifacts rather than clinical features—though rigorous validation would be required before clinical deployment (e.g. by examining similar cases to the input). This transparency is also crucial for regulatory compliance, providing the traceable decisions and data lineage needed to audit AI systems. They also benefit research by providing insight into model behaviors such as hallucination in LLMs and can even be adapted to create faster k-nearest neighbors (KNN) algorithms (Cunningham & Delany, 2021; Zhang et al., 2017).
A key limitation is scalability: as training data grows, index head accuracy drops. This can be mitigated using carefully selected subsets, naturally clustered data, or metadata in unlabeled scenarios, as we showed. The index head also adds computational cost and may impact main-task performance, complicating multi-objective optimization. In the future, we plan to apply our approach to address the hallucination problem in LLMs, to mitigate adversarial vulnerability of neural networks, and to boost the explainability of other computer vision tasks such as image segmentation and object detection. We will also explore methods to improve the scalability of our approach to larger datasets.

Still much work to be done, but I think that this will be a major selling point of BRN's GenAI.
 