And yet RTX hasn't signed their subcontractor agreement to make a start on trialling Akida, even though AFRL will pay them to give it a go. Have they lost it?
Honestly Eye Sea Key,
Where do you come up with this information?
You are in fact correct in saying that AFRL are (in effect) paying a fee as part of the parent contract, which Brainchip was awarded for 1.8 million USD... none of us really knows if it was Brainchip who negotiated the third-party fee of $800,000 USD or if it was imposed upon Brainchip by AFRL as part of the parent contract.
My personal opinion is that AFRL told Brainchip who the third party was going to be, to sew the entire deal together. But as I have mentioned before, Brainchip released a statement stating that we (Brainchip) were committed to pay a third party $800,000 USD. Am I the only one on this forum who thinks the terms had already been agreed to? Can you disclose a figure when the deal is still up in the air?
Your comment that the third party is RTX... that's pure speculation. Which division are you referring to, Eye Sea Key?
Collins Aerospace?
Just the opinion of a rusted-on shareholder... Tech.
Well, the award went to BRN because some fabulously well-heeled defense supplier had already done some work on neuromorphic analysis of radar-Doppler signals, and AFRL wants to see if it can be done in hardware. So everyone in the defense industry knows who it is.
Moreover, the $1.8 million is chicken feed, and I'm presuming at this stage RTX is refusing to sign because it is such a measly amount they quite frankly couldn't give a damn. (Quite possibly they asked for $18 million.) They'd probably just take that $800K off the next party bar tab and not have to do all that pesky grant-acquittal stuff.
So, in answer to your question, I reckon it would be the RTX petty-cash division.
TBH, I have never been so humiliated as a shareholder, at least not since the NEUROBUS debacle, which has also ground to a halt with nothing new reported in a while. Let's face it: where are the photos of Sean throwing his hat in the air as images from a satellite actually working are beamed to Parkes? That is some of my evidence.
It reads like Iseki is just making shit up just so he can complain about it.
TBH, you couldn't make this stuff up.
Win a grant, and the co-applicant won't sign for LOVE NOR MONEY.
Nowadays, I imagine our founder, Peter, sitting in one of those antique telephone booths (half desk, half uncomfortable chair, with a small compartment for the phone), talking with Sean about "shooting to the stars", while on the other end of the line, Sean is mostly focused on a natty new NVIDIA device that analyzes his golf swing and that can, at the flick of a switch, run highlights of the hit show SUCCESSION, which he will show at the upcoming board meeting.
Is it any wonder I'm complaining?!
The time is nigh to decide if your loyalty is to the founders' invention or, exclusively, to someone given the job of commercializing that invention.
So now you are just making more shit up.
Uiux, I'm not going to argue with you.
How do you know if they have or haven't signed?
What are they exactly signing?
Who is the contractor?
Does it include an IP deal?
What instructions came from AFRL?
What radar device is it?
What does the 800k entail?
Spare me the fan fiction, Iseki - you have zero idea what you should even complain about in regard to the AFRL grant. You don't even know if it relates to RTX as the contractor.
I'd be more humiliated typing the shit you have in the last few posts than anything else.
Well sell up and
One thing that seems certain is that the Navy is transitioning to AI at the Edge. See the previously posted Bascom Hunter/Navy transition posts.
It's likely US defense forces and security will transition also. Space as well.
I very much doubt RTX and Lockheed-Martin will allow Bascom to eat the whole cake.
Did I hear Sean say during the podcast something about Bascom heading for great things???
I clicked on the link and searched "neuromorphic" and it came up with 8 references, a few of which I have posted below.
The navy is transitioning to Terminator robots with neuromorphic processors:
FY 2024 Base Plans:
Cognitive Science for Human-Machine Teaming and Computational Neuroscience
Continue:
- Assess feasibility of incorporating realistic neural systems into autonomous systems for more robust on-board perception and intelligence.
- Conducting applied research on system interface designs and human-machine interaction methodologies that enable or enhance Naval Warfighter performance and human-machine teaming.
- Conduct applied research to develop agile humanoid robot teammates with enhancements including: (i) Embedding computer vision with visual-spatial reasoning; (ii) Auditory systems to enable human communication; and (iii) Neuromorphic (brain-like) processors.
- Conduct applied research to train mission-capable robots to perform complex manipulation tasks, integrated with the ability to recognize patterns and learn from data (self-learning).
- Investigate the effectiveness of incorporating vision and language processes in robots to facilitate human-robot team performance learning and communication.
I reckon you would really enjoy this interview, ( if you haven't already seen it )
Palmer Luckey (Anduril founder), 32 years old.
Just took over the $22 billion IVAS contract from Microsoft 4 days ago.
Started Oculus when he was 19.
After watching this, I'm just praying that he's aware of BrainChip.
Would be nice to have this bloke on side.
US20230007162 - EVENT-BASED COMPUTATIONAL PIXEL IMAGERS
Applicants
Anduril Industries, Inc.
A computational pixel imaging device that includes an array of pixel integrated circuits for event-based detection and imaging. Each pixel may include a digital counter that accumulates a digital number, which indicates whether a change is detected by the pixel. The counter may count in one direction for a portion of an exposure and count in an opposite direction for another portion of the exposure. The imaging device may be configured to collect and transmit key frames at a lower rate, and collect and transmit delta or event frames at a higher rate. The key frames may include a full image of a scene, captured by the pixel array. The delta frames may include sparse data, captured by pixels that have detected meaningful changes in received light intensity. High speed, low transmission bandwidth motion image video can be reconstructed using the key frames and the delta frames.
From GPR:
The patent US20230007162 - Event-Based Computational Pixel Imagers describes an imaging system that integrates in-pixel signal processing, event-based detection, and multi-threaded computation. Unlike traditional CMOS or CCD image sensors that capture full frames at fixed intervals, this system detects and processes changes in light intensity at the pixel level. Each pixel contains a counter that increments and decrements based on detected intensity changes, allowing it to identify significant events while filtering out noise. The system employs a dual-frame acquisition strategy, capturing key frames at a lower frequency and delta frames at a higher frequency to efficiently track motion. Additionally, it incorporates infinite dynamic range counters, which utilize most significant bit (MSB) readout techniques to extend dynamic range without increasing counter bit-depth. This reduces data transmission requirements and enhances computational efficiency by prioritizing meaningful image data.
The architecture supports distributed in-pixel computation, including local filtering, thresholding, and adaptive exposure adjustments. Pixels can share data through orthogonal transfer mechanisms, enabling on-chip signal processing functions such as edge detection, temporal filtering, and convolution. The design also incorporates multi-threaded execution within the pixel array, allowing different processing tasks to run concurrently. The system's event-driven nature reduces power consumption and bandwidth usage by transmitting only data from pixels that detect significant intensity changes. These features align with neuromorphic imaging principles, as the system mimics biological vision by emphasizing changes in a scene rather than static information. However, the patent does not explicitly reference neuromorphic computing or spiking neural networks, instead focusing on the efficiency of event-based imaging and in-pixel digital processing.
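The key-frame/delta-frame scheme the summary describes can be sketched in a few lines of Python. This is only an illustrative simulation, not the patent's actual circuit: `THRESH` and `KEY_INTERVAL` are made-up values, and a 1-D "scanline" of pixel intensities stands in for the 2-D pixel array. Key frames carry the full scene at a low rate; delta frames carry only the pixels whose intensity changed by at least the threshold, and the decoder replays those events onto the last key frame to reconstruct the video.

```python
# Illustrative simulation of key-frame / delta-frame encoding.
# THRESH and KEY_INTERVAL are made-up values; a 1-D "scanline"
# of pixel intensities stands in for the 2-D pixel array.
THRESH = 8        # minimum intensity change a pixel reports as an event
KEY_INTERVAL = 4  # full key frame every 4th frame, deltas in between

def encode(frames):
    """Turn a frame sequence into key frames plus sparse delta frames."""
    stream, ref = [], None
    for i, frame in enumerate(frames):
        if i % KEY_INTERVAL == 0:
            ref = list(frame)                   # reset the reference scene
            stream.append(("key", list(frame)))
        else:
            # only pixels whose change clears the threshold fire an event
            events = [(j, v) for j, v in enumerate(frame)
                      if abs(v - ref[j]) >= THRESH]
            for j, v in events:
                ref[j] = v                      # track what was reported
            stream.append(("delta", events))
    return stream

def decode(stream, width):
    """Rebuild the video by replaying deltas onto the last key frame."""
    current, recon = [0] * width, []
    for kind, payload in stream:
        if kind == "key":
            current = list(payload)
        else:
            for j, v in payload:
                current[j] = v
        recon.append(list(current))
    return recon
```

A bright dot sweeping across the scanline, for instance, costs only two events per delta frame (the pixel it left and the pixel it entered), which is the efficiency claim: transmission bandwidth tracks scene activity rather than resolution.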
Uni Luxembourg’s SIGCOM (Signal Processing and Communications) research group, which is part of SnT (Interdisciplinary Centre for Security, Reliability and Trust), has a new lab, which their Akida Shuttle PC now calls home: the TelecomAI-Lab, led by Flor Ortiz.
“Unlike traditional communication laboratories, TelecomAI-Lab focuses on AI-native solutions for signal processing and network optimisation.”
There is also an interesting section on “Services offered”.
Symeon Chatzinotas on LinkedIn (www.linkedin.com): "TelecomAI-Lab: New HW Lab at SIGCOM, aggregating our capabilities on AI accelerators and Neuromorphic processors."
TelecomAI-Lab (www.uni.lu): "The TelecomAI at the University of Luxembourg is used for research and testing of signal processing techniques for satellite and terrestrial communication systems."
To my mind, Sean's anecdote about walking the floor of CES25 and talking to exhibitors reinforces the hypothesis that an IP-only strategy excludes the small-business market sector. They are using COTS NNs, but would prefer to use custom NNs if they could afford it. The thing about Akida is that it is customizable in both size (number of nodes/NPUs) and application (model development, particularly with our cooperation with Edge Impulse).
Leaving the TENNs algorithm product aside for the moment, a major licence with a large customer with near-term production objectives would secure the share price, and clearly BRN cannot afford to be a chip maker at this point in time. I think this illustrates the need to get a chip maker tied up (as in licensed, not bondage) to produce the various flavours of Akida/TENNs COTS chips to capture the smaller users who cannot afford an IP licence.
A major problem is that most major chip designers have their own in-house AI. Even our early licensee Renesas has its own DRP-AI, which it has refined in the interim with N:M coding that may have taken inspiration from Akida. Qualcomm, ARM and Intel each have their own AI implementations.
We seem to have gone cool on the original Socionext/TSMC association, at least for the time being.
So, of our known associates, Global Foundries could be a candidate to produce the Akida/TENNs COTS range, and I wouldn't write off Megachips yet.