BRN Discussion Ongoing

MrNick

Regular
 
  • Haha
Reactions: 12 users

TECH

Regular
And yet RTX hasn't signed their subcontractor agreement to make a start on trialling Akida, even though AFRL will pay them to give it a go. Have they lost it?

Honestly Eye Sea Key,

Where do you come up with this information ?

You are in fact correct in saying that AFRL are (in effect) paying a fee as part of the parent contract, which Brainchip was awarded for 1.8 million USD. None of us really knows whether it was Brainchip who negotiated the third-party fee of $800,000 USD or whether it was imposed on Brainchip by AFRL as part of the parent contract.

My personal opinion is that AFRL told Brainchip who the 3rd party was going to be, to sew the entire deal together. But as I have mentioned before, Brainchip released a statement stating that we (Brainchip) were committed to pay a third party $800,000 USD. Am I the only one on this forum who thinks the terms had already been agreed to? Can you disclose a figure when the deal is still up in the air?

Your comment that the third party is RTX... that's pure speculation. Which division are you referring to, Eye Sea Key?

Collins Aerospace ?

Just the opinion of a rusted on shareholder...........Tech.
 
  • Like
  • Thinking
Reactions: 17 users

uiux

Regular
Honestly Eye Sea Key,

Where do you come up with this information ?

You are in fact correct in saying that AFRL are (in effect) paying a fee as part of the parent contract, which Brainchip was awarded for 1.8 million USD. None of us really knows whether it was Brainchip who negotiated the third-party fee of $800,000 USD or whether it was imposed on Brainchip by AFRL as part of the parent contract.

My personal opinion is that AFRL told Brainchip who the 3rd party was going to be, to sew the entire deal together. But as I have mentioned before, Brainchip released a statement stating that we (Brainchip) were committed to pay a third party $800,000 USD. Am I the only one on this forum who thinks the terms had already been agreed to? Can you disclose a figure when the deal is still up in the air?

Your comment that the third party is RTX... that's pure speculation. Which division are you referring to, Eye Sea Key?

Collins Aerospace ?

Just the opinion of a rusted on shareholder...........Tech.

It reads like Iseki is just making shit up so he can complain about it
 
  • Like
  • Haha
  • Fire
Reactions: 47 users

Iseki

Regular
Honestly Eye Sea Key,

Where do you come up with this information ?

You are in fact correct in saying that AFRL are (in effect) paying a fee as part of the parent contract, which Brainchip was awarded for 1.8 million USD. None of us really knows whether it was Brainchip who negotiated the third-party fee of $800,000 USD or whether it was imposed on Brainchip by AFRL as part of the parent contract.

My personal opinion is that AFRL told Brainchip who the 3rd party was going to be, to sew the entire deal together. But as I have mentioned before, Brainchip released a statement stating that we (Brainchip) were committed to pay a third party $800,000 USD. Am I the only one on this forum who thinks the terms had already been agreed to? Can you disclose a figure when the deal is still up in the air?

Your comment that the third party is RTX... that's pure speculation. Which division are you referring to, Eye Sea Key?

Collins Aerospace ?

Just the opinion of a rusted on shareholder...........Tech.
Well, the award went to BRN because some fabulously well-heeled defense supplier had already done some work on neuromorphic analysis of radar-Doppler signals, and AFRL wants to see if it can be done in hardware. So everyone in the defense industry knows who it is.

Moreover, the $1.8 million is chicken feed, and I'm presuming at this stage RTX is refusing to sign because it is such a measly amount they quite frankly couldn't give a damn. (Quite possibly they asked for $18 million.) They'd probably just take that $800K off the next party bar tab and not have to do all that pesky grant acquittal stuff.

So, in answer to your question, I reckon it would be the RTX petty-cash division.

TBH, I have never been so humiliated as a shareholder, at least not since the NEUROBUS debacle, which has also ground to a halt, with nothing new reported in a while. Let's face it: where are the photos of Sean throwing his hat in the air as images of a satellite actually working are beamed to Parkes? That is some of my evidence.
 
  • Like
  • Fire
Reactions: 4 users

uiux

Regular
Well, the award went to BRN because some fabulously well-heeled defense supplier had already done some work on neuromorphic analysis of radar-Doppler signals, and AFRL wants to see if it can be done in hardware. So everyone in the defense industry knows who it is.

Moreover, the $1.8 million is chicken feed, and I'm presuming at this stage RTX is refusing to sign because it is such a measly amount they quite frankly couldn't give a damn. (Quite possibly they asked for $18 million.) They'd probably just take that $800K off the next party bar tab and not have to do all that pesky grant acquittal stuff.

So, in answer to your question, I reckon it would be the RTX petty-cash division.

TBH, I have never been so humiliated as a shareholder, at least not since the NEUROBUS debacle, which has also ground to a halt, with nothing new reported in a while. Let's face it: where are the photos of Sean throwing his hat in the air as images of a satellite actually working are beamed to Parkes? That is some of my evidence.



So now you are just making more shit up
 
  • Haha
  • Like
  • Fire
Reactions: 26 users

Iseki

Regular
It reads like Iseki is just making shit up so he can complain about it
TBH, you couldn't make this stuff up.

Win a grant, and the co-applicant won't sign for LOVE NOR MONEY.

Nowadays, I imagine our founder, Peter, sitting in one of those antique telephone booths (half desk, half uncomfortable chair, with a small compartment for the phone), talking with Sean about "shooting to the stars", while on the other end of the line Sean is mostly focused on a natty new NVIDIA device that analyzes his golf swing and that can, at the flick of a switch, run highlights of the hit show SUCCESSION, which he will show at the upcoming board meeting.

Is it any wonder I'm complaining?!
 
  • Fire
Reactions: 1 users

uiux

Regular
TBH, you couldn't make this stuff up.

Win a grant, and the co-applicant won't sign for LOVE NOR MONEY.

Nowadays, I imagine our founder, Peter, sitting in one of those antique telephone booths (half desk, half uncomfortable chair, with a small compartment for the phone), talking with Sean about "shooting to the stars", while on the other end of the line Sean is mostly focused on a natty new NVIDIA device that analyzes his golf swing and that can, at the flick of a switch, run highlights of the hit show SUCCESSION, which he will show at the upcoming board meeting.

Is it any wonder I'm complaining?!

How do you know if they have or haven't signed?
What are they exactly signing?
Who is the contractor?
Does it include an IP deal?
What instructions came from AFRL?
What radar device is it?
What does the 800k entail?


Spare me the fan fiction Iseki - you have zero idea what you should even complain about in regards to the AFRL grant. You don't even know if it relates to RTX as the contractor.


I'd be more humiliated by typing the shit you have in the last few posts than by anything else
 
  • Like
  • Haha
  • Fire
Reactions: 50 users

Iseki

Regular
So now you are just making more shit up
The time is nigh to decide if your loyalty is to the founders' invention, or, exclusively, to someone given the job of commercializing that invention.
 
  • Fire
Reactions: 1 users

Iseki

Regular
How do you know if they have or haven't signed?
What are they exactly signing?
Who is the contractor?
Does it include an IP deal?
What instructions came from AFRL?
What radar device is it?
What does the 800k entail?


Spare me the fan fiction Iseki - you have zero idea what you should even complain about in regards to the AFRL grant. You don't even know if it relates to RTX as the contractor.


I'd be more humiliated by typing the shit you have in the last few posts than by anything else
Uiux, I'm not going to argue with you.

You have done more for Brainchip than Stevens and Telson combined, though possibly not as much as Gelsinger, who has sent Intel broke.

Time will tell if the AFRL deal is a right royal stuff-up, or a stroke of genius.
 
Well, the award went to BRN because some fabulously well-heeled defense supplier had already done some work on neuromorphic analysis of radar-Doppler signals, and AFRL wants to see if it can be done in hardware. So everyone in the defense industry knows who it is.

Moreover, the $1.8 million is chicken feed, and I'm presuming at this stage RTX is refusing to sign because it is such a measly amount they quite frankly couldn't give a damn. (Quite possibly they asked for $18 million.) They'd probably just take that $800K off the next party bar tab and not have to do all that pesky grant acquittal stuff.

So, in answer to your question, I reckon it would be the RTX petty-cash division.

TBH, I have never been so humiliated as a shareholder, at least not since the NEUROBUS debacle, which has also ground to a halt, with nothing new reported in a while. Let's face it: where are the photos of Sean throwing his hat in the air as images of a satellite actually working are beamed to Parkes? That is some of my evidence.
Well sell up and

 
  • Like
  • Haha
  • Fire
Reactions: 12 users

manny100

Regular
One thing that seems certain is that the Navy is transitioning to AI at the Edge. See the previously posted Bascom Hunter/Navy transition posts.
It's likely US defense forces and security will transition also. Space as well.
I very much doubt RTX and Lockheed Martin will allow Bascom to eat the whole cake.
Did I hear Sean say during the podcast something about Bascom heading for great things???
 
  • Like
  • Love
Reactions: 15 users
@Dolci so glad I listened to you and bought when I did, but I wonder how many are still waiting for 0.15c

 
  • Like
  • Haha
  • Fire
Reactions: 10 users

uiux

Regular
One thing that seems certain is that the Navy is transitioning to AI at the Edge. See the previously posted Bascom Hunter/Navy transition posts.
It's likely US defense forces and security will transition also. Space as well.
I very much doubt RTX and Lockheed Martin will allow Bascom to eat the whole cake.
Did I hear Sean say during the podcast something about Bascom heading for great things???

The navy is transitioning to Terminator robots with neuromorphic processors:




FY 2024 Base Plans:

Cognitive Science for Human-Machine Teaming and Computational Neuroscience


Continue:
- Assess feasibility of incorporating realistic neural systems into autonomous systems for more robust on-board perception and intelligence.
- Conducting applied research on system interface designs and human-machine interaction methodologies that enable or enhance Naval Warfighter performance and human-machine teaming.
- Conduct applied research to develop agile humanoid robot teammates with enhancements including: (i) Embedding computer vision with visual-spatial reasoning; (ii) Auditory systems to enable human communication; and (iii) Neuromorphic (brain-like) processors.
- Conduct applied research to train mission-capable robots to perform complex manipulation tasks, integrated with the ability to recognize patterns and learn from data (self-learning).
- Investigate the effectiveness of incorporating vision and language processes in robots to facilitate human-robot team performance learning and communication
 
  • Like
  • Fire
  • Wow
Reactions: 35 users

manny100

Regular
The navy is transitioning to Terminator robots with neuromorphic processors:




FY 2024 Base Plans:

Cognitive Science for Human-Machine Teaming and Computational Neuroscience


Continue:
- Assess feasibility of incorporating realistic neural systems into autonomous systems for more robust on-board perception and intelligence.
- Conducting applied research on system interface designs and human-machine interaction methodologies that enable or enhance Naval Warfighter performance and human-machine teaming.
- Conduct applied research to develop agile humanoid robot teammates with enhancements including: (i) Embedding computer vision with visual-spatial reasoning; (ii) Auditory systems to enable human communication; and (iii) Neuromorphic (brain-like) processors.
- Conduct applied research to train mission-capable robots to perform complex manipulation tasks, integrated with the ability to recognize patterns and learn from data (self-learning).
- Investigate the effectiveness of incorporating vision and language processes in robots to facilitate human-robot team performance learning and communication
I clicked on the link and searched "neuromorphic", and it came up with 8 references, a few of which I have posted below.
It appears the Navy is pretty much "all in". I've included a few below, but it's best for posters to look it up themselves.
No way is the Navy alone in the transition.
Also page 32 of 77:
"- Continue to explore neuromorphic spiking neuron hardware designs based on brain models that are suitable for future edge computing and signal processing in small naval platforms.
- Continue to explore autonomous problem solving and curiosity driven search for robust performance under unexpected conditions.
- Initiate research to identify, characterize and model adversarial AI.
- Initiate research exploring theory and algorithms for learning and decision making in multi-agent systems, particularly in adversarial situations"

Also page 9 of 17:
"Science of Artificial Intelligence:
- Continue applied research on principled computational frameworks for integrating domain knowledge and machine learning for fast robust learning of diverse, complex concepts and tasks with minimal supervision to analyze the sparse, noisy and unlabeled data of the Naval domain.
- Continue the application of new brain-inspired artificial intelligence algorithms and architectures for the development of compact neuromorphic hardware suitable for edge computing and signal processing in Naval platforms.
- Continue the use of Artificial Intelligence (AI) for enhanced collaborative complex decision-making and human-machine dialogue to increase the speed and quality of operational decisions.
- Continue research on embedding AI in robotic systems to enable human-machine collaboration and robot training for hazardous missions.
- Continue to integrate physical models with machine learning to enable predictive maintenance for autonomous Naval platforms and enable long duration autonomous missions.
- Continue to conduct AI-based analysis of data from wearable sensors and task performance measures to monitor and optimize human performance.
- Continue research on the ability to enable a humanoid robot to adapt skills learned in one environment or context to new situations. Cues of the current context, including the environmental state or goals of the robot or its teammates, will modulate the execution of existing robotic skills, such as adjusting the robot's speed."

Also page 10 of 17:
"- Initiate applied research to design embedded neuromorphic processors into intelligent autonomous systems to permit onboard analysis of target data to enable single-pass mine countermeasures missions.
- Initiate applied research to validate AI algorithms to provide distributed perception in networks of interacting autonomous agents in the presence of varying levels of reliability and trust at both network and individual agent.
- Initiate applied research on AI tools for multi-level optimization of shipyard maintenance scheduling to accelerate on time delivery of ships out of maintenance and improve ship availability and fleet readiness.
- Initiate research techniques for training AI to perform tasks from human behavior and natural language instruction"
My bold above.
 
  • Like
  • Fire
  • Wow
Reactions: 35 users

cosors

👀
  • Like
  • Wow
  • Thinking
Reactions: 13 users

Taproot

Regular
The navy is transitioning to Terminator robots with neuromorphic processors:




FY 2024 Base Plans:

Cognitive Science for Human-Machine Teaming and Computational Neuroscience


Continue:
- Assess feasibility of incorporating realistic neural systems into autonomous systems for more robust on-board perception and intelligence.
- Conducting applied research on system interface designs and human-machine interaction methodologies that enable or enhance Naval Warfighter performance and human-machine teaming.
- Conduct applied research to develop agile humanoid robot teammates with enhancements including: (i) Embedding computer vision with visual-spatial reasoning; (ii) Auditory systems to enable human communication; and (iii) Neuromorphic (brain-like) processors.
- Conduct applied research to train mission-capable robots to perform complex manipulation tasks, integrated with the ability to recognize patterns and learn from data (self-learning).
- Investigate the effectiveness of incorporating vision and language processes in robots to facilitate human-robot team performance learning and communication
I reckon you would really enjoy this interview (if you haven't already seen it).
Palmer Luckey (Anduril founder), 32 years old.
Just took over the $22 billion IVAS contract from Microsoft 4 days ago.
Started Oculus when he was 19.
After watching this, I'm just praying that he's aware of BrainChip.
Would be nice to have this bloke on side.

 
  • Like
  • Wow
  • Fire
Reactions: 11 users

uiux

Regular
I reckon you would really enjoy this interview (if you haven't already seen it).
Palmer Luckey (Anduril founder), 32 years old.
Just took over the $22 billion IVAS contract from Microsoft 4 days ago.
Started Oculus when he was 19.
After watching this, I'm just praying that he's aware of BrainChip.
Would be nice to have this bloke on side.





US20230007162 - EVENT-BASED COMPUTATIONAL PIXEL IMAGERS

Applicants
Anduril Industries, Inc.

A computational pixel imaging device that includes an array of pixel integrated circuits for event-based detection and imaging. Each pixel may include a digital counter that accumulates a digital number, which indicates whether a change is detected by the pixel. The counter may count in one direction for a portion of an exposure and count in an opposite direction for another portion of the exposure. The imaging device may be configured to collect and transmit key frames at a lower rate, and collect and transmit delta or event frames at a higher rate. The key frames may include a full image of a scene, captured by the pixel array. The delta frames may include sparse data, captured by pixels that have detected meaningful changes in received light intensity. High speed, low transmission bandwidth motion image video can be reconstructed using the key frames and the delta frames.
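The up/down counter described in that abstract is, in effect, a per-pixel difference engine: counting up during one portion of the exposure and down during the other cancels to roughly zero when nothing changed. A toy numpy sketch of the idea (function names and the threshold are illustrative, not from the patent):

```python
import numpy as np

def updown_counter(first_half, second_half):
    """Per-pixel counter value: count up over the first half-exposure,
    then down over the second. Near zero means no change at that pixel."""
    return first_half.astype(np.int32) - second_half.astype(np.int32)

def change_mask(counter, threshold=4):
    """Pixels whose counter magnitude exceeds an illustrative noise floor."""
    return np.abs(counter) >= threshold

static = np.full((2, 2), 10)             # same light in both half-exposures
moved = np.array([[10, 10], [10, 30]])   # one pixel brightened mid-exposure
counter = updown_counter(static, moved)
print(change_mask(counter))              # only the changed pixel is flagged
```

A static scene cancels exactly, so only genuinely changing pixels survive the threshold, which is what lets the device transmit sparse data.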



From GPR:

The patent US20230007162 - Event-Based Computational Pixel Imagers describes an imaging system that integrates in-pixel signal processing, event-based detection, and multi-threaded computation. Unlike traditional CMOS or CCD image sensors that capture full frames at fixed intervals, this system detects and processes changes in light intensity at the pixel level. Each pixel contains a counter that increments and decrements based on detected intensity changes, allowing it to identify significant events while filtering out noise. The system employs a dual-frame acquisition strategy, capturing key frames at a lower frequency and delta frames at a higher frequency to efficiently track motion. Additionally, it incorporates infinite dynamic range counters, which utilize most significant bit (MSB) readout techniques to extend dynamic range without increasing counter bit-depth. This reduces data transmission requirements and enhances computational efficiency by prioritizing meaningful image data.


The architecture supports distributed in-pixel computation, including local filtering, thresholding, and adaptive exposure adjustments. Pixels can share data through orthogonal transfer mechanisms, enabling on-chip signal processing functions such as edge detection, temporal filtering, and convolution. The design also incorporates multi-threaded execution within the pixel array, allowing different processing tasks to run concurrently. The system's event-driven nature reduces power consumption and bandwidth usage by transmitting only data from pixels that detect significant intensity changes. These features align with neuromorphic imaging principles, as the system mimics biological vision by emphasizing changes in a scene rather than static information. However, the patent does not explicitly reference neuromorphic computing or spiking neural networks, instead focusing on the efficiency of event-based imaging and in-pixel digital processing.
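The key-frame/delta-frame scheme the summary describes can be sketched in a few lines of numpy. This is a hedged illustration of the general idea only (the threshold, names, and data layout are mine, not the patent's):

```python
import numpy as np

THRESHOLD = 8  # illustrative noise threshold, not from the patent

def make_delta(prev, curr):
    """Sparse delta frame: only pixels whose change beats the threshold."""
    diff = curr.astype(np.int16) - prev.astype(np.int16)
    ys, xs = np.nonzero(np.abs(diff) >= THRESHOLD)
    return ys, xs, diff[ys, xs]

def apply_delta(frame, delta):
    """Reconstruct the next frame by patching a key frame with a delta."""
    ys, xs, vals = delta
    out = frame.astype(np.int16)
    out[ys, xs] += vals
    return np.clip(out, 0, 255).astype(np.uint8)

key = np.zeros((4, 4), dtype=np.uint8)   # full key frame, sent rarely
curr = key.copy()
curr[1, 2] = 50                          # one pixel changed meaningfully
recon = apply_delta(key, make_delta(key, curr))
print(np.array_equal(recon, curr))       # True
```

Only the one changed pixel needs transmitting between key frames, which is where the "high speed, low transmission bandwidth" claim comes from.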
 
  • Like
  • Fire
  • Love
Reactions: 19 users

FuzM

Member

US20230007162 - EVENT-BASED COMPUTATIONAL PIXEL IMAGERS

Applicants
Anduril Industries, Inc.

A computational pixel imaging device that includes an array of pixel integrated circuits for event-based detection and imaging. Each pixel may include a digital counter that accumulates a digital number, which indicates whether a change is detected by the pixel. The counter may count in one direction for a portion of an exposure and count in an opposite direction for another portion of the exposure. The imaging device may be configured to collect and transmit key frames at a lower rate, and collect and transmit delta or event frames at a higher rate. The key frames may include a full image of a scene, captured by the pixel array. The delta frames may include sparse data, captured by pixels that have detected meaningful changes in received light intensity. High speed, low transmission bandwidth motion image video can be reconstructed using the key frames and the delta frames.


From GPR:

The patent US20230007162 - Event-Based Computational Pixel Imagers describes an imaging system that integrates in-pixel signal processing, event-based detection, and multi-threaded computation. Unlike traditional CMOS or CCD image sensors that capture full frames at fixed intervals, this system detects and processes changes in light intensity at the pixel level. Each pixel contains a counter that increments and decrements based on detected intensity changes, allowing it to identify significant events while filtering out noise. The system employs a dual-frame acquisition strategy, capturing key frames at a lower frequency and delta frames at a higher frequency to efficiently track motion. Additionally, it incorporates infinite dynamic range counters, which utilize most significant bit (MSB) readout techniques to extend dynamic range without increasing counter bit-depth. This reduces data transmission requirements and enhances computational efficiency by prioritizing meaningful image data.


The architecture supports distributed in-pixel computation, including local filtering, thresholding, and adaptive exposure adjustments. Pixels can share data through orthogonal transfer mechanisms, enabling on-chip signal processing functions such as edge detection, temporal filtering, and convolution. The design also incorporates multi-threaded execution within the pixel array, allowing different processing tasks to run concurrently. The system's event-driven nature reduces power consumption and bandwidth usage by transmitting only data from pixels that detect significant intensity changes. These features align with neuromorphic imaging principles, as the system mimics biological vision by emphasizing changes in a scene rather than static information. However, the patent does not explicitly reference neuromorphic computing or spiking neural networks, instead focusing on the efficiency of event-based imaging and in-pixel digital processing.

Any chance "TENNs-PLEIADES: Building Temporal Kernels with Orthogonal Polynomials" could be in play here?
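For anyone wondering what "temporal kernels with orthogonal polynomials" even means in practice: one rough reading of the paper title (my sketch, not BrainChip's actual implementation) is to parameterize a long temporal convolution kernel by a handful of coefficients over an orthogonal polynomial basis such as Legendre:

```python
import numpy as np
from numpy.polynomial import legendre

def temporal_kernel(coeffs, length):
    """Kernel of given length as a weighted sum of Legendre polynomials."""
    t = np.linspace(-1.0, 1.0, length)
    return legendre.legval(t, coeffs)

def causal_conv(signal, kernel):
    """Causal 1-D convolution: output at t depends only on inputs up to t."""
    padded = np.concatenate([np.zeros(len(kernel) - 1), signal])
    return np.convolve(padded, kernel, mode="valid")

# A 3-coefficient parameterization of a 64-tap kernel: far fewer
# parameters to learn than all 64 taps directly.
k = temporal_kernel([0.5, -0.3, 0.1], 64)
y = causal_conv(np.ones(100), k)
print(len(y))  # 100: same length as the input
```

The appeal is the compression: a few basis coefficients stand in for a long kernel, which suits tiny edge hardware.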
 

Attachments

  • TENNs PLEIADES.pdf
    632 KB · Views: 18
  • Like
Reactions: 2 users

Frangipani

Regular
Uni Luxembourg’s SIGCOM (Signal Processing and Communications) research group, which is part of SnT (Interdisciplinary Centre for Security, Reliability and Trust), has a new lab, which their Akida Shuttle PC now calls home: the TelecomAI-Lab, led by Flor Ortiz.

“Unlike traditional communication laboratories, TelecomAI-Lab focuses on AI-native solutions for signal processing and network optimisation.”

There is also an interesting section on “Services offered”.









Earlier today, Flor Ortiz 👆🏻 posted an early-access link to an article called “Artificial Intelligence for Satellite Communication: A Survey”, co-authored by 23 SATCOM experts, the majority of whom - like herself - are affiliated with Uni Luxembourg’s Interdisciplinary Centre for Security, Reliability, and Trust (SnT).

And of course NC gets covered as well:

“Furthermore, the document examines the role of neuromorphic computing and COTS (Commercial Off-The-Shelf) devices in facilitating AI applications in space environments.”

Unfortunately, the survey is only accessible with IEEE Xplore member or institutional login credentials. Anyone?



 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 12 users

BrainShit

Regular
To my mind, Sean's anecdote about walking the floor of CES25 and talking to exhibitors reinforces the hypothesis that an IP-only strategy excludes the small-business market sector. They are using COTS NNs, but would prefer to use custom NNs if they could afford it. The thing about Akida is that it is customizable in both size (number of nodes/NPUs) and application (model development, particularly with our cooperation with Edge Impulse).

Leaving the TENNs algorithm product aside for the moment, a major licence with a large customer with near-term production objectives would secure the share price, and clearly BRN cannot afford to be a chip maker at this point in time. I think this illustrates the need to get a chip maker tied up (as in licensed, not bondage) to produce the various flavours of Akida/TENNs COTS chips to capture the smaller users who cannot afford an IP licence.

A major problem is that most major chip designers have their own in-house AI. Even our early licensee Renesas has its own DRP-AI, which it has refined in the interim with N:M coding that may have taken inspiration from Akida. Qualcomm, ARM and Intel each have their own AI implementations.

We seem to have gone cool on the original Socionext/TSMC association, at least for the time being.

So, of our known associates, Global Foundries could be a candidate to produce the Akida/TENNs COTS range, and I wouldn't write off Megachips yet.

Fully agreed...

A major problem is that most major chip designers have their own in-house AI. Even our early licensee Renesas has its own DRP-AI, which it has refined in the interim with N:M coding that may have taken inspiration from Akida. Qualcomm, ARM and Intel each have their own AI implementations.
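The "N:M coding" mentioned above most likely refers to N:M structured sparsity (e.g. 2:4 — at most two nonzero weights in every group of four), which hardware can exploit to compress weights and skip multiplies. A hedged numpy sketch of what that pruning looks like (my illustration, not Renesas's actual scheme):

```python
import numpy as np

def prune_n_of_m(weights, n=2, m=4):
    """Keep only the n largest-magnitude weights in each group of m."""
    flat = weights.reshape(-1, m)
    # indices of the (m - n) smallest-magnitude entries in each group
    drop = np.argsort(np.abs(flat), axis=1)[:, : m - n]
    pruned = flat.copy()
    np.put_along_axis(pruned, drop, 0.0, axis=1)
    return pruned.reshape(weights.shape)

w = np.array([[1.0, -3.0, 0.5, 2.0]])
print(prune_n_of_m(w))  # two survivors per group of 4: 0, -3, 0, 2
```

The regular "n nonzeros per m" pattern is what makes the format hardware-friendly: the accelerator always knows how many multiplies it can skip per group.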
 
  • Like
Reactions: 3 users