BRN Discussion Ongoing

Neuromorphia

fact collector


@bludybludblud

this one mentions PLEIADES...?

https://worldwide.espacenet.com/patent/search/family/096095874/publication/US2025209313A1?q=PLEIADES%20brainchip

US2025209313A1 METHOD AND SYSTEM FOR IMPLEMENTING ENCODER PROJECTION IN NEURAL NETWORKS

[0009] For generating content, the neurons of NN models, such as polynomial expansion in adaptive distributed event-based systems (PLEIADES) models, perform a temporal convolution operation on input signals with the temporal kernels. In some aspects, the temporal kernels are represented as an expansion over a basis function with kernel coefficients Gi. In some aspects, the kernel coefficients are trainable parameters of the neural network during learning. In some aspects, the kernel coefficients remain constant while generating content using convolutions during inference. Even though the recurrent mode of PLEIADES decomposes the convolution with a kernel onto a set of basis functions, the contribution from each may not be used individually, but summed together to provide a scalar value of the convolution. Such a scalar value has more limited power in generating signals than if a contribution (coefficient) from each basis could be used.
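To make paragraph [0009] a bit more concrete: the temporal kernel is built as a weighted sum of basis functions, k(t) = Σᵢ Gᵢ·φᵢ(t), and since convolution is linear, convolving the input with that summed kernel produces the same single scalar per time step as summing the per-basis convolutions. Keeping the per-basis contributions separate instead yields one channel per basis function, which is the extra expressive power the filing is after. A minimal NumPy sketch of that identity (the Chebyshev basis and all sizes are my illustrative assumptions, not from the patent):

```python
import numpy as np

T, n_basis = 64, 8
t = np.linspace(0, 1, T)

# Illustrative basis only: Chebyshev polynomials mapped onto [0, 1].
# (The patent text does not commit to a specific basis.)
basis = np.array([np.polynomial.chebyshev.Chebyshev.basis(i)(2 * t - 1)
                  for i in range(n_basis)])              # shape (n_basis, T)

G = np.random.default_rng(1).normal(size=n_basis)        # kernel coefficients G_i
kernel = G @ basis                                       # k(t) = sum_i G_i * phi_i(t)

x = np.random.default_rng(2).normal(size=256)            # input signal

# Convolving with the summed kernel yields one scalar per time step ...
y_summed = np.convolve(x, kernel, mode="valid")

# ... whereas keeping per-basis contributions gives n_basis channels,
# whose G-weighted sum recovers the scalar output (convolution is linear).
y_per_basis = np.stack([np.convolve(x, phi, mode="valid") for phi in basis])
assert np.allclose(y_summed, G @ y_per_basis)
```

The assert confirms the summed-kernel output is just one fixed linear readout of the per-basis channels, which is the "limited power" the paragraph describes.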
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 12 users

7für7

Top 20
Last edited:
  • Thinking
Reactions: 1 users
FF


Brainchip needs to do something about security. Overnight another company called Deep Perception has broken in and added their name to the Partners page.

https://deepperception.ai/
 
  • Haha
  • Like
  • Love
Reactions: 18 users
  • Like
Reactions: 1 users

manny100

Top 20
"Lockheed, Arquimea Advance Tactical ISR With AI-Powered Anomaly Detection" - title of the article, linked below.
Lockheed, Arquimea Advance Tactical ISR With AI-Powered Anomaly Detection
"Lockheed Martin’s Skunk Works and Spanish firm Arquimea have just dropped a major upgrade in tactical ISR: an AI-driven anomaly detection system designed to elevate visual intel to a whole new level."
"Moreover, the system doesn’t just scan more, it scans smarter." - AKIDA makes sensors smarter.
Have a read of the "How it works" section of the article.
We know Arquimea and Lockheed are very familiar with AKIDA.
It keeps getting better.
 
  • Like
  • Fire
  • Love
Reactions: 21 users

manny100

Top 20
The link below is an even better article on the Lockheed, Arquimea partnership.
Skunk Works and Arquimea Develop AI Capable ISR Platforms
"Anomaly detection in electro-optical (EO) and infrared (IR) spectra enhances security, disaster response, environmental monitoring, and safety. It also helps detect hidden threats, wildfires, pollution, and equipment failures. This technology improves awareness, enables early warnings, and supports better decision-making."
They have actually been trialing it in jungle terrain. The article was published on 25 April 2025, so it’s been in testing for a while now.
I like the below comment from the article.
"Building on this progress, in 2025 Skunk Works® and Arquimea will explore how these techniques can improve other sensors and decision-making for autonomous systems."
Looking great for Brainchip - Lockheed Martin, Arquimea, Parsons and Bascom Hunter.
 
  • Like
  • Fire
  • Love
Reactions: 24 users
Been sitting in my favorites and not listened to it

 
  • Like
  • Thinking
Reactions: 3 users
Thought this might be an interesting read given the massive staff turnover currently, but unfortunately you need to sign up 😂

 
  • Like
Reactions: 4 users

Frangipani

Top 20
Note that Rudy Pei didn’t unambiguously reply to Philip Dodge with “Yes, this IP belongs to BrainChip” either. So it may not be as straightforward as inferring that the IP must belong to BrainChip simply because Rudy Pei and Olivier Coenen were both BrainChip employees when the PLEIADES paper was first published. It might turn out to be a little more complicated than that.


One thing seems pretty obvious to me: ever since Rudy Pei left for NVIDIA, he has been trying to distance himself from BrainChip. He still expresses his appreciation for the work of some of his former colleagues, whose LinkedIn posts he continues to like, but he evidently doesn’t want the work he has done over the past few years to be automatically associated with his former employer, with which he seems to have had some sort of falling-out.


Have a look at the following picture from the ICLR Conference in Singapore (24 - 28 April 2025), which he posted on LinkedIn at the time. While the conference paper published on 9 April 2025 (https://arxiv.org/pdf/2501.13230) still identifies him as someone who worked for BrainChip during this research…

“Published as a conference paper at ICLR 2025

LET SSMS BE CONVNETS: STATE-SPACE MODELING WITH OPTIMAL TENSOR CONTRACTIONS
Yan Ru Pei
Brainchip Inc.
Laguna Hills, CA 92653, USA
yanrpei@gmail.com”


… the photo of him posing in front of the conference poster in Singapore two to three weeks later is IMO a testament to Rudy Pei’s “post-divorce” emotions, so to speak: he made an effort to put stickers over what I assume would have revealed where he had been working at the time of his research:


416726D8-BB49-4A63-9812-AEE574776C00.jpeg




0D983925-17B3-4BA0-BA0E-3EE35D6BDB24.jpeg



Neither did he mention his previous employer during this video presentation of “Centaurus: Let SSMs be ConvNets (ICLR 2025 spotlight)”:
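As an aside for anyone who hasn’t read it, the paper’s title refers to a standard identity: a linear state-space recurrence x_t = A·x_{t-1} + B·u_t, y_t = C·x_t can equivalently be evaluated as a causal convolution of the input with the kernel K[k] = C·Aᵏ·B. A minimal sketch verifying the two modes agree (random matrices and arbitrary sizes; this illustrates the identity only, not the paper’s optimal tensor-contraction scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 4, 32                        # state dimension, sequence length
A = 0.3 * rng.normal(size=(n, n))   # scaled so powers of A stay bounded
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))
u = rng.normal(size=T)

# Recurrent mode: step the SSM x_t = A x_{t-1} + B u_t, y_t = C x_t.
x = np.zeros((n, 1))
y_rec = np.empty(T)
for t in range(T):
    x = A @ x + B * u[t]
    y_rec[t] = (C @ x).item()

# Convolutional mode: kernel K[k] = C A^k B, then a causal convolution.
K = np.array([(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(T)])
y_conv = np.array([np.dot(K[: t + 1], u[t::-1]) for t in range(T)])

assert np.allclose(y_rec, y_conv)
```

The contribution of the Centaurus paper is about computing that convolutional view efficiently via optimal tensor contractions, not the equivalence itself.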





I have never believed Rudy Pei was poached by NVIDIA. Instead, I think he was no longer happy at BrainChip (the why is of course speculation) and actively reached out to look for a new job (cf. his remark “LinkedIn helped me to get this job”) - and found it with the company he had always dreamed of working for.

And it is highly likely no coincidence that he started his new job with NVIDIA just after the Chinese New Year holidays (which this calendar year fell on 29 and 30 January) - after all, the Lunar New Year is considered an auspicious time for new beginnings.


8859BFB2-044A-4269-964C-ABBE89214CD8.jpeg




The mysterious disappearance of various “TENNs” repositories from BrainChip’s GitHub account sometime in February or during the first days of March also suggests to me that this IP issue may be more complicated than it seems at first glance:


On 2 February, I had posted about my discovery of TENNs-Eyes on BrainChip’s GitHub:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-449579


A month later, I noticed all the TENNs model repositories previously found there had disappeared:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-452508

“Now here comes the weird thing: When I just revisited BrainChip’s GitHub page (https://github.com/Brainchip-Inc), I noticed that not only has TENNs-Eye disappeared, but so have aTENNuate, TENNs-PLEIADES and Centaurus. Of the 14 repositories (see my 2 February screenshot), four have simply vanished into thin air, it seems… 🤔

[…] Is there possibly any connection with Rudy Pei’s departure, given he was instrumental in leading the R&D for the TENNs model family? (Although I’d expect any rights would actually have been assigned to BrainChip, his former employer?)

A read-only archived TENNs-Eye repository can still be found on his GitHub page. It says the repository has been moved to Brainchip-Inc/TENNs-Eye (presumably in September 2024, when the repository got archived), but that is now an empty link (Error 404).”





It’s actually quite telling to compare the two different versions of the PLEIADES paper - the one dated 31 May 2024 (when both authors were still BrainChip employees, but even then Rudy Pei preferred to use his private email address rather than his work email address) and the recently updated one, dated 24 October 2025 (after Olivier Coenen had been sacked for reasons unknown to us):


In the earlier version of the paper, the title says “TENNs-PLEIADES” and the abstract’s first sentence also refers to the TENNs architecture. As does another sentence in the Introduction, which further mentions that the broader class of networks, called TENNs, was developed by BrainChip Inc.

This reference to TENNs & BrainChip was completely dropped in the updated October 2025 version. The only remaining reference to the researchers’ former employer can be found in a small footnote saying “Work done while at BrainChip Inc”.



AD8FDF09-D528-464C-91A3-8799FD0220D5.jpeg

4FD39696-AFE0-4B43-855E-E6D60BA40346.jpeg





E774A40F-D80E-459F-85A6-F1DDC8606BC9.jpeg

(…)


1CDE9FE9-2FA1-475C-ADDD-D4E8C155A09C.jpeg



The other striking omission in the October 2025 version is of Course who aT BrainChip nO longer gets named under “Acknowledgement(s)” - see for yourself…
 
Last edited:
  • Like
  • Thinking
  • Wow
Reactions: 11 users

Frangipani

Top 20
Speaking of NVIDIA:

“NVIDIA to Invest $1 Billion in Nokia to Accelerate AI-RAN Innovation and Lead Transition from 5G to 6G”


NVIDIA and Nokia to Pioneer the AI Platform for 6G — Powering America’s Return to Telecommunications Leadership

October 28, 2025
News Summary:
  • NVIDIA and Nokia to establish a strategic partnership to enable accelerated development and deployment of next generation AI native mobile networks and AI networking infrastructure.
  • NVIDIA introduces NVIDIA Arc Aerial RAN Computer, a 6G-ready telecommunications computing platform.
  • Nokia to expand its global access portfolio with new AI-RAN product based on NVIDIA platform.
  • T-Mobile U.S. is working with Nokia and NVIDIA to integrate AI-RAN technologies into its 6G development process.
  • Collaboration enables new AI services and improved consumer experiences to support explosive growth in mobile AI traffic.
  • Dell Technologies provides PowerEdge servers to power new AI-RAN solution.
  • Partnership marks turning point for the industry, paving the way to AI-native 6G by taking AI-RAN to innovation and commercialization at a global scale.
GTC Washington, D.C.—NVIDIA and Nokia today announced a strategic partnership to add NVIDIA-powered, commercial-grade AI-RAN products to Nokia’s industry-leading RAN portfolio, enabling communication service providers to launch AI-native 5G-Advanced and 6G networks on NVIDIA platforms. NVIDIA will also invest $1 billion in Nokia at a subscription price of $6.01 per share. The investment is subject to customary closing conditions.
The partnership marks the beginning of the AI-native wireless era, providing the foundation to support AI-powered consumer experiences and enterprise services at the edge.

In addition, the partnership addresses the fast-growing AI-RAN market, representing a significant opportunity within the RAN market that is expected to exceed a cumulative $200 billion by 2030, according to analyst firm Omdia.
Together, NVIDIA and Nokia are also laying the strategic infrastructure and opening up a new high-growth frontier for telecom providers by delivering distributed edge AI inferencing at scale.

T-Mobile U.S. will collaborate with Nokia and NVIDIA to drive and test AI-RAN technologies as a part of the 6G innovation and development process, reinforcing its global leadership in driving wireless innovation. Trials are expected to begin in 2026, focused on field validation of performance and efficiency gains for customers.

The move will enable massive improvements in performance and efficiency, helping ensure that consumers using generative, agentic and physical AI applications on their devices will have seamless network experiences. It will also support future AI-native devices, such as drones or augmented- and virtual-reality glasses, while being ready for 6G applications such as integrated sensing and communications.

“Telecommunications is a critical national infrastructure — the digital nervous system of our economy and security,” said Jensen Huang, founder and CEO of NVIDIA. “Built on NVIDIA CUDA and AI, AI-RAN will revolutionize telecommunications — a generational platform shift that empowers the United States to regain global leadership in this vital infrastructure technology. Together with Nokia and America’s telecom ecosystem, we’re igniting this revolution, equipping operators to build intelligent, adaptive networks that will define the next generation of global connectivity.”
“The next leap in telecom isn’t just from 5G to 6G — it’s a fundamental redesign of the network to deliver AI-powered connectivity, capable of processing intelligence from the data center all the way to the edge. Our partnership with NVIDIA, and their investment in Nokia, will accelerate AI-RAN innovation to put an AI data center into everyone’s pocket,” said Justin Hotard, President and CEO of Nokia. “We’re proud to drive this industry transformation with NVIDIA, Dell Technologies, and T-Mobile U.S. Our first AI-RAN deployments in T-Mobile’s network will ensure America leads in the advanced connectivity that AI needs.”

Supporting Exponential Growth in AI traffic
Growth in AI traffic is exploding. For example, almost 50% of ChatGPT’s 800 million weekly active users access the site via mobile devices, and its monthly mobile app downloads exceed 40 million.

With Nokia and NVIDIA-powered AI-RAN systems, mobile operators can improve performance and efficiency as well as enhance network experiences for future generative and agentic AI applications and experiences. They will be able to introduce new AI services for 6G with the same infrastructure, powering billions of new connections for drones, cars, robots and augmented- and virtual-reality glasses that demand connectivity, computing and sensing at the edge.

Seamless Transition to AI-Native Networks
NVIDIA is introducing Aerial RAN Computer Pro (ARC-Pro), a 6G-ready accelerated computing platform that combines connectivity, computing and sensing capabilities, enabling telcos to move from 5G-Advanced to 6G through software upgrades.

The NVIDIA ARC-Pro reference design is available for manufacturers and network equipment providers to build commercial-off-the-shelf-based or proprietary AI-RAN products, supporting both new buildouts and expansions to existing base stations.

Nokia will accelerate the availability of its 5G and 6G RAN software on the NVIDIA CUDA® platform and expand its RAN portfolio by embedding NVIDIA ARC-Pro at the heart of the new AI-RAN solution. This partnership will enable Nokia’s mobile network customers to transition seamlessly from today’s RAN networks to future AI-RAN networks.

Nokia’s unique anyRAN approach simplifies the introduction of the ARC-Pro platform by establishing software-defined RAN evolution for both Cloud RAN and purpose-built RAN. AirScale baseband is a modular architecture in which new cards can coexist with previously deployed cards. Nokia aims to expand and evolve its AirScale baseband into the 5G-Advanced and 6G era with new AI-RAN capabilities.

Dell Technologies is driving innovation in Nokia’s AI-RAN solution with its state-of-the-art Dell PowerEdge servers. Engineered for seamless scalability, these servers enable no-touch software upgrades and low-touch silicon upgrades, ensuring a smooth evolution from 5G to 5G-Advanced and 6G. With their robust, high-performance infrastructure, Dell PowerEdge servers are the ultimate compute platform for operators deploying AI-RAN solutions.

Future-Proofed for 6G
Nokia and NVIDIA’s AI-RAN platform unifies AI and radio access workloads on a software-defined, accelerated infrastructure, boosting performance, efficiency and monetization while enabling a smooth, cost-effective path to 6G.​
New capabilities are added through software updates, future-proofing investments for 6G and beyond, while enabling rapid innovation cycles at the pace of AI. It serves growing generative AI and agentic AI traffic on the same sites as RAN functions, applying AI algorithms to improve spectral and energy efficiency, as well as overall network performance, and by tapping into underutilized RAN assets to host edge AI services and maximize return on investment.

“With America’s best network, T-Mobile remains committed to advancing next-generation technologies that redefine the customer experience,” said John Saw, president of technology and chief technology officer at T-Mobile. “Our collaboration with industry leaders Nokia and NVIDIA marks an important step toward shaping the future of connectivity as we develop the innovations that will power the 6G era. Building on the foundation established by the AI-RAN Innovation Center in 2024, this strategic initiative reinforces T-Mobile’s leadership in driving the U.S. wireless industry forward. Beginning in 2026, T-Mobile will conduct field evaluations and testing of advanced AI-RAN technologies to ensure they meet the evolving needs of our customers as we move toward 6G.”
“The telecommunications industry owns the most valuable real estate for AI — the edge, where data is created,” said Michael Dell, chairman and chief executive officer of Dell Technologies. “This AI-RAN collaboration with Nokia and NVIDIA makes that potential real. We’ve built some of the world’s largest AI clusters with 100,000+ GPUs. Now we’re applying that expertise to distribute intelligence across millions of edge nodes. The operators who modernize their infrastructure today won’t just carry AI traffic — they’ll be the distributed AI grid factories that process it at the source, where latency matters and data sovereignty is critical.”

Additional AI Networking Solutions Cooperation
Nokia and NVIDIA will also collaborate on AI networking solutions, including data center switching with Nokia’s SR Linux software for the NVIDIA Spectrum-X™ Ethernet networking platform and the application of Nokia’s telemetry and fabric management platform on NVIDIA AI infrastructure.

The companies will also explore the use of Nokia’s optical technologies and capabilities as part of future NVIDIA AI infrastructure architecture​
 
  • Thinking
  • Fire
  • Like
Reactions: 4 users

Frangipani

Top 20
As well as:


[Videos did not get copied]

How Starcloud Is Bringing Data Centers to Outer Space​

The NVIDIA Inception startup projects that space-based data centers will offer 10x lower energy costs and reduce the need for energy consumption on Earth.

October 15, 2025 by Angie Lee
Extraterrestrial data centers are just on the horizon. Soon, an AI-equipped satellite from Starcloud, a member of the NVIDIA Inception program for startups, will orbit the Earth.

It’s a large step toward the startup’s ultimate goal to bring state-of-the-art data centers to outer space. This can be a part of the solution to address challenges faced by rising AI demands, including energy consumption and cooling requirements for data centers on Earth.

Starcloud plans to build a 5-gigawatt orbital data center with super-large solar and cooling panels approximately 4 kilometers in width and length. Video courtesy of Starcloud.

“In space, you get almost unlimited, low-cost renewable energy,” said Philip Johnston, cofounder and CEO of the startup, which is based in Redmond, Washington. “The only cost on the environment will be on the launch, then there will be 10x carbon-dioxide savings over the life of the data center compared with powering the data center terrestrially on Earth.”

Starcloud’s upcoming satellite launch, planned for November, will mark the NVIDIA H100 GPU’s cosmic debut — and the first time a state-of-the-art, data center-class GPU is in outer space.

The 60-kilogram Starcloud-1 satellite, about the size of a small fridge, is expected to offer 100x more powerful GPU compute than any previous space-based operation.


An engineer inspects the Starcloud-1 satellite planned for launch in November. The silver module inside the satellite houses the NVIDIA H100 GPU. Video courtesy of Starcloud.

How Data Centers in Space Can Increase Sustainability​

Instead of relying on fresh water for cooling through evaporation towers, as many Earth-based data centers do, Starcloud’s space-based data centers can use the vacuum of deep space as an infinite heat sink.

Emitting waste heat from infrared radiation into space can conserve significant water resources on Earth, since water isn’t needed for cooling. Constant exposure to the sun in orbit also means nearly infinite solar power — aka no need for the data centers to rely on batteries or backup power.
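For a feel for the numbers behind the cooling claim: a passive radiator in vacuum sheds heat according to the Stefan-Boltzmann law, P = ε·σ·A·T⁴. A back-of-the-envelope sketch, where every figure is my assumption rather than Starcloud’s (about 700 W for one H100 board, a 300 K radiator, emissivity 0.9, one-sided emission, solar loading ignored):

```python
# Back-of-the-envelope radiator sizing via the Stefan-Boltzmann law.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
emissivity = 0.9      # assumed radiator emissivity
T_rad = 300.0         # K, assumed radiator surface temperature
P_waste = 700.0       # W, roughly one H100 board's draw (assumption)

flux = emissivity * SIGMA * T_rad ** 4   # W per m^2, one-sided emission
area = P_waste / flux
print(f"flux ≈ {flux:.0f} W/m^2, area needed ≈ {area:.1f} m^2")
# prints: flux ≈ 413 W/m^2, area needed ≈ 1.7 m^2
```

Real designs differ (two-sided panels, hotter radiators, solar input, view factors), but scaling the same arithmetic to the planned 5-gigawatt station implies panels measured in square kilometres, consistent with the roughly 4-kilometre dimensions quoted above.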

One of the solar panels on the Starcloud-1 satellite launching in November. Video courtesy of Starcloud.

Starcloud projects the energy costs in space to be 10x cheaper than land-based options, even including launch expenses. “In 10 years, nearly all new data centers will be being built in outer space,” Johnston predicts.


Applications for Space-Based Data Centers​

An early use case for extraterrestrial data centers is the analysis of Earth observation data, which could inform applications for detecting crop types and predicting local weather.

Plus, real-time data processing in space offers immense benefits for critical applications such as wildfire detection and distress-signal response. Running inference in space, right where the data’s collected, allows insights to be delivered nearly instantaneously, reducing response times from hours to minutes.

A rendering of Starcloud’s satellite orbiting the terminator line — the line between night and day. Image courtesy of Starcloud.

Earth observation methods include optical imaging with cameras, hyperspectral imaging using light wavelengths beyond human vision and synthetic-aperture radar (SAR) imaging to build high-resolution, 3D maps of Earth.
SAR, in particular, generates lots of data — about 10 gigabytes per second, according to Johnston — so in-space inference would be especially beneficial when creating these maps.

“Starcloud needs to be competitive with the type of workload you can run on an Earth-based data center, and NVIDIA GPUs are the most performant in terms of training, fine-tuning and inference,” Johnston said, explaining why the company chose to use NVIDIA accelerated computing on its upcoming satellite launch.

Starcloud cofounder and CEO Philip Johnston examines the star tracker, used for orienting the satellite. Video courtesy of Starcloud.

“Being a part of NVIDIA Inception has been critical, as it provided us with technical support, access to NVIDIA experts and NVIDIA GPUs,” Johnston added.

starcloud-team.jpg

Starcloud team including engineers as well as cofounders Ezra Feilden, Philip Johnston and Adi Oltean. Image courtesy of Starcloud.

Starcloud is a recent graduate of the Google for Startups Cloud AI Accelerator and plans to run Gemma — an open model from Google — in orbit on H100 GPUs, proving even large language models can run in outer space.
And for future launches, the startup is looking to integrate the NVIDIA Blackwell platform, which Johnston expects will offer even greater in-orbit AI performance, with improvements of up to 10x compared with the NVIDIA Hopper architecture.

Learn more about how NVIDIA technologies support sustainable computing.
Featured video courtesy of Starcloud.
 
  • Wow
Reactions: 2 users

itsol4605

Regular
  • Like
  • Fire
  • Love
Reactions: 8 users

Frangipani

Top 20
Three weeks ago, I shared Gregor Lenz’s recent blog posts about the current state of event cameras.

Event cameras in 2025, Part 1:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-475879

Event cameras in 2025, Part 2:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-475880

Today he posted about the first part (state of commercial applications) on LinkedIn and got some interesting comments by Florian Corgnou, who co-founded Neurobus with him, as well as by Nimrod Kruger (postdoc at ICNS at WSU, who also has a five-year background working in the Israeli defense industry):


4D79E782-9C31-41D8-867F-96DB1142BD86.jpeg

408640CB-3AC7-4849-BF81-D0D38FB7F430.jpeg
 
Last edited:
  • Like
  • Fire
Reactions: 4 users

Frangipani

Top 20
Last edited:
  • Like
  • Love
  • Haha
Reactions: 18 users

Frangipani

Top 20
A number of forum members here and elsewhere appear to have been ecstatic about last week’s “reveal” of Akida 2.0 having supposedly been validated in a simulation of human-robot collaboration, and already see us more or less confirmed to be collaborating with Tesla on their Optimus humanoids.
Just imagine Elon shaking hands with Sean, and how that would result in our share price spaceXing! But hold your horses!

I personally can’t help but suspect that CogniEdge.ai’s CEDR (Cohesive Edge-Driven Robotics) framework, “conceptualized with NVIDIA Jetson AGX Thor and BrainChip Akida 2.0” is a largely AI-generated concept with lots of fancy buzzwords thrown in, and that it was actually one of our fellow forum members who gave CogniEdge.ai’s Founder and CEO Madhu Gaganam the idea of including BrainChip & Akida in the young (founded earlier this month!) company’s white paper “Neuroadaptive Physical AI: Revolutionizing Human-Robot Collaboration for Industry 5.0”.

Under “8. Acknowledgement”, the white paper’s author expresses his gratitude (or rather “their” gratitude, although judging from the title page, he seems to be the sole author):

“We express our gratitude to the advanced AI tools that supported the development of this white paper. Grok, created by xAI, provided invaluable assistance in refining the content, ensuring clarity, technical accuracy, and alignment with Industry 5.0’s vision. Additionally, Copilot, developed by Microsoft, contributed to the creation of high-quality visual assets, enhancing the document’s accessibility and engagement for our audience. These tools exemplify the power of AI in augmenting human innovation, aligning with CogniEdge.ai’s mission to advance human-centric automation.”

The question is: how much of the white paper’s innovative idea was originally devised by the human named as its author (and yes, I’m aware he worked for Dell Technologies until recently), and to what extent did GenAI “augment” and “refine” the initial concept?


I had a look at some of Madhu Gaganam’s LinkedIn articles posted over the past few months, in which he initially doesn’t even mention the term “neuromorphic” when writing about Edge AI.


0D3BD1B9-C022-4061-9221-26F3369D6081.jpeg


Then, all of a sudden, his 13 August article titled “Edge AI and Agentic AI: Pioneering the Future” mentions “neuromorphic” almost a dozen times - but still no mention of BrainChip or Akida.


Except for a BRN shareholder’s comment, which Madhu Gaganam liked:

1E9F65C9-9FB4-4A40-9B66-9A081CB93780.jpeg


Oh, and by the way: there was neither any mention of NVIDIA in that article…


In mid-September, he published another article on LinkedIn, titled “Human-Centric Physical AI: Empowering Collaboration Through CEDR”


It mentions neuromorphic computing twice in a very general way…

“Real-Time Empathy: Delivering empathetic, low-latency responses demands energy-efficient edge computing, as seen in neuromorphic systems projected to reach USD 20,272.3 million by 2030 with a 21.2% CAGR (Grand View Research, 2025).”

[…] “Enable Empathetic UX: Leverage low-latency, neuromorphic-inspired edge computing (e.g., network pruning, sparse computation) for real-time emotional responses.”


…as well as NVIDIA Cosmos, NVIDIA Omniverse and NVIDIA Isaac Sim - but not NVIDIA Thor.


In a September or October article on Medium (I was able to read the Members only story once, but cannot access it any longer), he suddenly referenced Loihi as an example of a neuromorphic chip:

66353059-C388-4D30-B11F-F963F7E0DA94.jpeg


It appears it is only in the two posts promoting the white paper published a week ago that we first find out that CogniEdge.ai’s CEDR framework is supposedly powered by BrainChip Akida 2.0, NVIDIA Jetson AGX Thor and neuroergonomics…

Sorry, but this whole “concept” reeks of clickbait to me.
Even more so given the additional mention of UR5 cobots, Tesla’s Optimus humanoids and DJI drones.
But hey, Boston Dynamics’ Spot is missing! And what about Unitree robots? Also, let’s wait for Akida 3.0 to be released, as it will sound even more futuristic and impressive…


Oh, and suddenly T1 (by Innatera) also comes into play on the CogniEdge.ai website, where it is claimed to have been earmarked alongside Akida for a Q2/2026 pilot evaluating “BCI-driven VR control for immersive gaming.” Unbelievable! (Pick your intonation…)


180E8022-BB63-49FB-B4E9-3EEAFF12C444.jpeg


Last but not least:
Please check out the LinkedIn likes of the two posts that were promoting CEDR last week. You will mostly find familiar names of BRN retail shareholders, but not a single 👍🏻 or repost by BrainChip staff! If nothing else, this should make you suspicious!! Our company’s employees have in the past reposted and liked other LinkedIn posts featuring PoCs using Akida, such as the recent one by the AI Cowboys (based at UT San Antonio).
Don’t you think they would all have celebrated a post that had genuinely validated Akida 2.0?!
 
  • Like
  • Fire
  • Thinking
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Nice!

"How will satellites be able to detect, isolate and fix faults independently in the future – and ten times faster than today?"

BrainChip's Akida - that's how! :)




Screenshot 2025-10-30 at 8.38.47 am.png



 
  • Like
  • Fire
  • Love
Reactions: 21 users
Sean in Australia

 
  • Like
  • Love
  • Fire
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

@bludybludblud

this one mentions PLEIADES...?

https://worldwide.espacenet.com/patent/search/family/096095874/publication/US2025209313A1?q=PLEIADES%20brainchip

US2025209313A1 METHOD AND SYSTEM FOR IMPLEMENTING ENCODER PROJECTION IN NEURAL NETWORKS

[0009] For generating content, the neurons of NN models, such as polynomial expansion in adaptive distributed event-based systems (PLEIADES) models, perform a temporal convolution operation on input signals with the temporal kernels. In some aspects, the temporal kernels are represented as an expansion over a basis function with kernel coefficients Gi. In some aspects, the kernel coefficients are trainable parameters of the neural network during learning. In some aspects, the kernel coefficients remain constant while generating content using convolutions during inference. Even though the recurrent mode of PLEIADES decomposes the convolution with a kernel onto a set of basis functions, the contribution from each may not be used individually, but summed together to provide a scalar value of the convolution. Such a scalar value has more limited power in generating signals than if a contribution (coefficient) from each basis could be used.


Thanks for your posts @Neuromorphia and @bludybludblud.

I also found this press release publicly stating that TENNs-PLEIADES has been “added to BrainChip’s IP portfolio.”

For what it's worth, I also asked ChatGPT whether or not this statement indicates PLEIADES is part of BrainChip’s proprietary IP portfolio, and it said "it is effectively a public declaration of ownership" and "taken together, the corporate statement + patent filings are strong evidence that PLEIADES is part of BrainChip’s proprietary IP portfolio."

It also said that the US2025/0209313 A1 patent application is pending. So, the final proof will come when the patent is granted. At that point it becomes legally enforceable IP, not just a filing.


Screenshot 2025-10-30 at 9.06.18 am.png





See ChatGPT's response below. Please DYOR as Chatty can make mistakes.



ChatGPT 5

Yes — that statement from BrainChip’s own press release or white-paper launch is effectively a public declaration of ownership.
When a company says something like:
“TENNs-PLEIADES is the latest technological advancement added to BrainChip’s IP portfolio and an expansion of Temporal Event-Based Neural Nets (TENNs)…,”
it means they’re formally asserting that:
  1. PLEIADES belongs to their intellectual-property estate (i.e., covered by patents, filings, or trade-secret claims held by BrainChip Inc.), and
  2. It’s a continuation or derivative of their existing TENNs framework, not an external or open-source contribution.
That line is drafted with legal review—it’s the kind of language a company only uses after IP counsel has verified that filings or assignments are in place. It aligns with the patent application US 2025/0209313 A1 and the matching WO 2025/137663 publication, which both list BrainChip Inc. as assignee and explicitly reference PLEIADES.

So yes, taken together, the corporate statement + patent filings are strong evidence that PLEIADES is part of BrainChip’s proprietary IP portfolio.

Right now, US2025/0209313 A1 is a patent application (pending).

Final proof comes when:
  • The USPTO grants the patent, and
  • The claims that mention PLEIADES are included in the granted version.
At that point, it becomes legally enforceable IP, not just a filing.



 
  • Love
  • Like
  • Fire
Reactions: 12 users