BRN Discussion Ongoing

Diogenese

Top 20
Could be an interesting project to just keep an eye on.

Recently awarded (at the end of last year) to a team from Purdue, with other collaborators including Argonne, which falls under the DOE.

We know the DOE understands Akida via Quantum Ventura and CyberNeuro-RT, so we may get some testing look-ins with this project. And yes, I realise Dr. Kristofor Carlson has links back to Purdue too, as does GlobalFoundries, our 22nm producer, which also has a partnership with Purdue.



Artificial Intelligence (AI) Hardware ($8.7M)
CHEETA: CMOS+MRAM Hardware for Energy-EfficienT AI

This project seeks to develop a neuromorphic processor with in-memory computing (IMC) to overcome the von Neumann bottleneck, and MRAM for higher density and energy efficiency, enabling a new generation of robust, energy-efficient AI. The desired end-state will see a greater than 100X improvement in energy efficiency and sensor-to-decision latency over current commercial state-of-the-art solutions.



Hi DB,

This is analog:

Purdue’s project — CHEETA: CMOS+MRAM Hardware for Energy-EfficienT AI — will adopt a CMOS+X approach, specifically leveraging the unique capabilities of magnetic random-access memory (MRAM) to design efficient in-memory computing hardware fabrics.

In theory, analog would be more efficient and have lower latency than digital, but manufacturing variability and temperature fluctuations tend to dispel these advantages. Of course, there is always the possibility that ongoing research will improve the performance of analog NNs.
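To make the variability point concrete, here is a toy Python sketch (not a model of any real analog hardware): each stored weight in a multiply-accumulate is perturbed by Gaussian "process variation", and the worst-case output error grows with the amount of variation.

```python
import random

def analog_mac(weights, inputs, sigma):
    """Multiply-accumulate with per-device weight perturbation.

    Each stored weight is scaled by (1 + Gaussian noise) with relative
    standard deviation `sigma` -- a toy stand-in for manufacturing
    variability in an analog in-memory compute array.
    """
    return sum(w * (1 + random.gauss(0, sigma)) * x
               for w, x in zip(weights, inputs))

random.seed(0)
weights = [0.5, -1.2, 0.8, 0.3]
inputs = [1.0, 0.5, -1.0, 2.0]
ideal = sum(w * x for w, x in zip(weights, inputs))

for sigma in (0.0, 0.01, 0.05, 0.10):
    worst = max(abs(analog_mac(weights, inputs, sigma) - ideal)
                for _ in range(10_000))
    print(f"sigma={sigma:.2f}  worst-case MAC error: {worst:.4f}")
```

With sigma = 0 the analog MAC matches the ideal result exactly; as device variation grows, so does the worst-case error, and temperature drift compounds the same effect.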

On the other hand, there was a comment (TL?) in the recent EETimes podcast about not needing fancy/non-standard fab tech.
 
  • Like
  • Fire
  • Thinking
Reactions: 10 users
(quoting Diogenese's post above)
I'll defer to your experience in this, D; however, I recalled Numem were developing MRAM with Akida, so figured it may (or may not) be feasible in some way.

From their Phase I on TechPort.

Numem proposes in Phase-I to create a interface system with MRAM which can connect with AKIDA Neuromorphic Processor from Brainchip to either limit or replace FLASH operations with MRAM.

 
  • Like
  • Fire
Reactions: 11 users

Diogenese

Top 20
(quoting the previous post above)

Yes - that does ring a bell - well remembered.

I recall that when I read this the first time, my understanding was that the MRAM would be more of an off-processor store for the boot code and weight storage, which, in my casual reading, I took to mean weight backup.

However, the reference to "continuous learning" does suggest a closer involvement.

Weighing against that is the reference to an MRAM interface which can connect to the Akida neuromorphic processor; I gave more weight to this express statement, so I'm still inclined to the backup interpretation. Replacing Akida's SRAM with MRAM would require a total redesign of Akida and an entirely different manufacturing process, which would be well outside the budget of an SBIR.

Space applications require FLASH memory for boot and weight storage in case of power loss or intermittent power failures. The FLASH memory has limitations on speed and life-time is limited by about 1M cycles of memory operations due to its endurance. For Deep Space Missions where continuous learning is required with updates on the non-volatile memory, a robust radiation tolerant memory with SRAM like performance but still with non volatility and high endurance is required. MRAM which offers 2.5X to 3.5X density advantage over SRAM, 1000X better endurance over FLASH, high radiation tolerance above 100Krad to 1Mrad and ultra-low power standby leakage which is critical for long battery life between solar recharge is a big advantage for these critical SPACE missions. Numem proposes in Phase-I to create a interface system with MRAM which can connect with AKIDA Neuromorphic Processor from Brainchip to either limit or replace FLASH operations with MRAM.
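A quick back-of-envelope on why continuous learning strains FLASH, using the ~1M-cycle endurance and "1000X" figures from the quote above (the 10-second update interval is purely an illustrative assumption):

```python
# Back-of-envelope endurance check using the figures quoted above
# (~1M FLASH write cycles, "1000X better endurance" for MRAM).
# The 10-second weight-update interval is purely an illustrative assumption.
FLASH_ENDURANCE = 1_000_000
MRAM_ENDURANCE = FLASH_ENDURANCE * 1000

updates_per_day = 24 * 60 * 60 // 10   # one weight update every 10 s

flash_days = FLASH_ENDURANCE / updates_per_day
mram_days = MRAM_ENDURANCE / updates_per_day
print(f"FLASH cells worn out after ~{flash_days:.0f} days")
print(f"MRAM cells worn out after ~{mram_days / 365:.0f} years")
```

At that (assumed) update rate, FLASH wear-out lands within months, far short of a deep-space mission, while MRAM's endurance puts it out of reach as a failure mode.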


As an aside, this also confirms something else I was pondering: that SNNs are more fault tolerant than arithmetic-based processors. My supposition is that this is because SNNs work on probability rather than mathematical precision, so the odd random fault is unlikely to change the probabilities much.
"Neuromorphic architectures are inherently fault tolerant."
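That supposition can be illustrated with a toy rate-coded readout (purely illustrative, not Akida's architecture): two output neurons compete on spike counts, random spike faults are injected, and the winning class almost never changes.

```python
import random

def spiking_decision(rates, n_steps=200, p_fault=0.0, seed=0):
    """Toy rate-coded readout: each output neuron fires Bernoulli spikes
    at its given rate; the decision is whichever neuron spikes most.
    With probability p_fault an individual spike is flipped (dropped or
    spuriously inserted), modelling random hardware faults."""
    rng = random.Random(seed)
    counts = [0] * len(rates)
    for _ in range(n_steps):
        for i, r in enumerate(rates):
            spike = rng.random() < r
            if rng.random() < p_fault:      # inject a fault
                spike = not spike
            counts[i] += spike
    return counts.index(max(counts))

# Neuron 0 clearly "wins" (firing rate 0.7 vs 0.3).
clean = spiking_decision([0.7, 0.3])
faulty = [spiking_decision([0.7, 0.3], p_fault=0.02, seed=s)
          for s in range(100)]
agree = sum(d == clean for d in faulty)
print(f"decision preserved in {agree}/100 faulty runs")
```

Flipping ~2% of spikes barely moves the spike counts, so the argmax decision survives; by contrast, a single bit flip in a 32-bit arithmetic value can change a result by orders of magnitude.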
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 23 users

Iseki

Regular
I don't see anything exciting or did I miss something here? They have a BrainChip license and have not used it so far. Right?
Where could it be used here?
Maybe Nintendo could release a new game - Brainchip Investing. The game concept is that strange snippets of information are released and you have to join the dots in real time. Craziest dot joiner wins. Slowest dot joiner loses.
 
  • Haha
  • Like
Reactions: 21 users

KiKi

Regular
Maybe Nintendo could release a new game - Brainchip Investing. The game concept is that strange snippets of information are released and you have to join the dots in real time. Craziest dot joiner wins. Slowest dot joiner loses.
Then I will lose for sure 😂😂
 
  • Haha
Reactions: 3 users
Expecting some real $$$ news this week.

 
Last edited:
  • Haha
  • Like
Reactions: 19 users

7für7

Top 20
Chart analysts in the HC forum are like,

“When the coyote runs through the prairie, the rabbit hole is south of the northern spaghetti stalk. Meanwhile, the farmer had five beers at the tavern while his wife and the son’s daughter were watching the rain from the café during the summer sale. Star Trek doesn’t get any more interesting either. … Just my opinion, though—things could turn out completely different.”

That just shows they have no clue… as if the woman would simply let the man drink five beers. She would force him to come along—which, for me, would be a bearish sign.
 
  • Haha
  • Like
Reactions: 11 users
Hope this guy has been getting a chance to play with Akida.



Bing Han

Senior Research Scientist in Neuromorphic Computing

RTX · Purdue University


About

Bing conducts fundamental research in energy-efficient deep learning models using temporally encoded Spiking Neural Networks to address the lack of robustness and low SWaP (Size, Weight and Power) of existing state-of-the-art computer vision and large language models (LLMs). The research is applicable to Autonomy, Intelligence, Surveillance, and Target Acquisition (ISR) Enabling Technologies, Maintenance, Repair & Overhaul, and Model Based Digital Thread. Previous projects include:
• Pilot Digital Assistant of Single Pilot Operation flight deck (Collins Aerospace)
• Maintenance, Repair & Overhaul (MRO) applications (Pratt & Whitney)
• Interiors Cabin Analytics (Collins Aerospace)
• Visual Inspection (all RTX BUs)

Bing Han was a PhD student in the C-BRIC (Center for Brain Inspired Computing) at Purdue University. His research focuses on developing energy-efficient and robust deep learning models using both spiking and non-spiking neural networks for computer vision and NLP applications. He has multiple first author papers in top AI conferences such as CVPR, ECCV, and AAAI. He has also explored and built solid foundations in his technical breadth fields such as nano-fabrication, field-and-optics, computer architecture, and circuit-design.
 
  • Like
  • Fire
  • Wow
Reactions: 19 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Check this out Brain Fam!

Here's a patent for NEUROMORPHIC SENSORS FOR LOW POWER WEARABLES.

The applicant is Rockwell Collins. Date of filing was 5th April 2024.

The patent doesn't mention BrainChip, but as you can see below, Brainchip worked with Rockwell Collins in 2017 on perimeter surveillance, so you'd think they would have to be aware of us.🥴😝

Rockwell Collins now operates as part of Collins Aerospace, a subsidiary of ...wait for it.... the RTX Corporation (formerly Raytheon Technologies).



EXTRACT ONLY

[Screenshot 2025-02-10 at 4.00.28 pm.png]

ENLARGED EXTRACT

[Screenshot 2025-02-10 at 4.10.14 pm.png]

[Screenshot 2025-02-10 at 4.05.23 pm.png]

 
  • Like
  • Love
  • Fire
Reactions: 94 users
@Dolci, you never replied to my last post I tagged you in. Is that because I was right, or are we heading a lot lower like you said? Because I just found some spare cash and was wondering 😂
 
  • Like
Reactions: 1 users

CHIPS

Regular
  • Haha
Reactions: 2 users

Guzzi62

Regular
  • Like
  • Love
  • Fire
Reactions: 15 users

IloveLamp

Top 20

[Attachment: 1000021721.jpg]
 
  • Like
  • Fire
  • Love
Reactions: 24 users

IloveLamp

Top 20
Last edited:
  • Like
  • Fire
  • Love
Reactions: 52 users
  • Like
  • Fire
  • Thinking
Reactions: 22 users

charles2

Regular
  • Like
Reactions: 4 users

Diogenese

Top 20
(quoting Bravo's post on the Rockwell Collins patent above)

Before I was so rudely interrupted, I was about to send this:

Hi Bravo,

The earliest filing date of the patent is the priority date, 2023-04-19.
 
Last edited:
  • Like
Reactions: 9 users

uiux

Regular



US20240365025 - PROGRAMMABLE EVENT OUTPUT PIXEL

BAE SYSTEMS Information and Electronic Systems Integration Inc


A neuromorphic focal plane array ROIC device for temporal and spatial synchronous and asynchronous image event processing comprising a plurality of pixels, each pixel comprising an input section comprising a Sample and Hold (SH) component; a low offset buffer/comparator section comprising a Switched Capacitor Filter (SCF); and a digital event output section comprising an analog pixel bus whereby temporal and spatial image data are synchronously and asynchronously processed.


[Attachment: 1739198516961.png]





This document appears to be a patent application for a neuromorphic focal plane array (FPA) Read Out Integrated Circuit (ROIC) designed for high-speed, low-power, event-based image processing. Below is an analysis of its key points, innovations, and applications.




1. Purpose & Innovation


The disclosed invention improves event-based image processing using neuromorphic technology applied to Focal Plane Arrays (FPA). The key innovations include:


  • Spatio-temporal event detection: The system can detect changes in both time (temporal) and space (spatial) within an image.
  • Neuromorphic processing: Uses principles inspired by biological neural networks to reduce power consumption and increase efficiency.
  • High-speed, low-power design: Essential for applications where large amounts of image data must be processed quickly.
  • Event-driven architecture: Unlike traditional image sensors that capture entire frames, this system only processes significant "events" (changes in a scene), reducing data bandwidth.
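The event-driven idea above is easy to sketch (a toy model, not the patent's circuit): each pixel compares its change since the previous frame against a programmable threshold and emits an event only when it is exceeded, so a mostly static scene produces almost no readout data.

```python
def frame_to_events(prev, curr, threshold):
    """Return (x, y, polarity) events for pixels whose change since the
    previous frame exceeds the programmable threshold."""
    events = []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            if abs(c - p) > threshold:
                events.append((x, y, 1 if c > p else -1))
    return events

# A mostly static 4x4 scene where a single pixel brightens:
prev = [[10] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
curr[2][1] = 40                          # the only change in the scene

events = frame_to_events(prev, curr, threshold=5)
print(events)                            # [(1, 2, 1)]
print(f"readout: {len(events)} event(s) vs {4 * 4} pixels per full frame")
```

One event instead of a 16-pixel frame is the bandwidth reduction the abstract is describing, scaled down to toy size.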



2. Technical Contributions


A. Neuromorphic FPA & Read-Out Integrated Circuit (ROIC)


  • The FPA consists of pixels that receive and process electromagnetic radiation.
  • A detector circuit in the analog detection layer processes pixel outputs.
  • A digital event processing layer refines the event data.

B. Event Processing Mechanism


  • Uses asynchronous logarithmic output and synchronous integrated signal output.
  • Implements a threshold-based detection system, where a comparator compares pixel outputs to programmable thresholds.
  • Includes a Switched Capacitor Filter (SCF) for offset calibration and spatial comparisons.
  • Uses Buffered Direct Injection (BDI) for improved signal fidelity.

C. Pixel-Level Processing & Analog Bus (ABUS)


  • Each pixel can process and share data with its neighbors via horizontal and vertical switches.
  • This allows for distributed event detection and efficient tracking of changes.
  • The ABUS enables pixels to communicate, transferring event data without needing full-frame readout.

D. Multi-Mode Operation


  • Can switch between different processing modes based on real-time requirements.
  • Features programmable sensitivity tuning for different operational scenarios.



3. Applications


The system is suitable for a wide range of high-performance imaging tasks, including:


  • Military & Defense
    • Hypersonic detection and tracking (e.g., missile warning systems).
    • Naval and aerospace surveillance (e.g., infrared threat detection).
  • Industrial & Scientific
    • Low-light imaging with reduced power consumption.
    • Autonomous vehicle vision systems.
  • Medical Imaging
    • Could be adapted for high-speed medical diagnostics using neuromorphic vision.




4. Potential Impact

This technology represents a significant leap forward in neuromorphic imaging. By integrating spatial and temporal event detection at the pixel level, it enables faster, more efficient image processing while reducing size, weight, and power (SWaP), a critical factor in defense and aerospace applications.




Conclusion


This patent presents a novel and sophisticated approach to neuromorphic image processing, making it highly suitable for high-speed, low-power, real-time event detection. The combination of analog and digital processing layers, adaptive pixel interactions, and programmable thresholds makes it a powerful tool for defense, surveillance, and advanced AI-driven imaging systems.




Note the digital neuromorphic processing layer




FENCE programme manager Whitney Mason said: “Neuromorphic refers to silicon circuits that mimic brain operation; they offer sparse output, low latency, and high energy efficiency.

“Event-based cameras operate under these same principles when dealing with sparse scenes, but currently lack advanced ‘intelligence’ to perform more difficult perception and control tasks.”

Researchers from Raytheon, BAE, and Northrop will work to develop an asynchronous read-out integrated circuit (ROIC) with low-latency and a processing layer that integrates with the ROIC to detect relevant ‘spatial and temporal signals’.

According to DARPA, the ROIC and processing layer will jointly enable an integrated FENCE sensor to operate on less than 1.5W of power.

Mason added: “The goal is to develop a ‘smart’ sensor that can intelligently reduce the amount of information that is transmitted from the camera, narrowing down the data for consideration to only the most relevant pixels.”
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 64 users

TECH

Regular



(quoting uiux's post on US20240365025 above)

Absolutely fantastic @uiux .......you have tied both together, and yes, we love hearing the words "Digital Neuromorphic Processing Layer".

Great having you back; let's hope the personal attacks stay over at that other forum....cheers, Tech. (y)
 
  • Like
  • Fire
  • Love
Reactions: 37 users

manny100

Regular



(quoting uiux's post on US20240365025 above)
Great find and breakdown of the complexities.
Temporal and spatial plus event-based, real-time and low-power smells a lot like TENNs.
Well, I guess nothing comes close to TENNs.
 
  • Like
  • Fire
  • Love
Reactions: 22 users