BRN Discussion Ongoing

rgupta

Regular
So the Pitt st conference at the state library just the other day
Did Sean H say that they had sold tens of thousands of chips in his address
Can anyone back this up with proof
If BRN has sold these chips why no announcement
I think the same was declared in the last 4C report.
Parsons bought a few, and there were a few other orders, but the amounts are small, so total sales are effectively zero against a target of $9 million.
DYOR
 

jrp173

Regular
To announce on the ASX it has to be material. That is my understanding. What is material? I don't really know for sure, but it has been mentioned on here, and on the other place that shall not be mentioned, that it's about $5M. Proof? Got none; just going by past discussions on the forum.

SC


In terms of materiality there is no magic number... so with all respect, I disagree with your figure of $5M (which you've referenced from here and the other place).

This is a direct excerpt from the ASX website:
1774522362290.png


BRN themselves have made price-sensitive announcements (Dec 2024) where the value was well below $5M.

You can find further explanations under listing rules if interested.

The fact is that not every announcement has to be price sensitive... BrainChip COULD put out announcements regarding partnerships etc. that are not price sensitive. There is no excuse. Many other companies on the ASX make both price-sensitive and non-price-sensitive announcements within ASX guidelines, keeping shareholders and the market up to date with company progress and goings-on.

Whilst we are on announcements, wouldn't it be great to see BrainChip put something out about all the testing that Kevin is completing? Non price sensitive, non-ramping, just explaining what Kevin is doing, and perhaps how this relates to real-life scenarios.

Such a missed opportunity IMO.




Further from the ASX:
1774522892951.png
 

Frangipani

Top 20
A recent journal paper by twelve Politecnico di Torino researchers was uploaded to their university website today: “The inNuCE Research Infrastructure and the Neuromorphic MLOps for AIoT prototyping”.

Seven of the co-authors are from the Interuniversity Department of Regional and Urban Studies and Planning (DIST) and the remaining five are from the Department of Control and Computer Engineering (DAUIN).

The inNuCE Research Infrastructure (RI) hardware includes AKD1000.



“A complementary contribution proposed in this paper is the inNuCE Research Infrastructure (inNuCE RI), a two-pillar infrastructure that instantiates NMLOps in practice for neuromorphic AIoT prototyping. The name inNuCE is derived from the Latin in nuce (“in the shell”, “in embryo”), reflecting the mission to enable developers to create draft prototypes rapidly and then transition them to engineered products once feasibility is established. The first pillar is the Laboratory (inNuCE Lab), a physical facility housing event-based sensors, edge devices, and neuromorphic/digital boards for hands-on experimentation. Pillar two is the Heterogeneous Prototyping Platform (inNuCE HPP), a Platform-as-a-Service (PaaS) that virtualizes heterogeneous HW and enforces reproducibility via containerized toolchains (…).


VI. CONCLUSIONS
This paper presents the NMLOps process, an evolution of MLOps for the integration of neuromorphic technology in AIoT applications, and the inNuCE RI, a research infrastructure that operationalizes NMLOps on a cloud-native, heterogeneous prototyping platform that is tightly coupled with a physical laboratory. This enables end-to-end, reproducible prototyping and benchmarking across neuromorphic and digital substrates, as well as the validation of representative on-edge AIoT applications such as HAR, Braille reading, navigation tracking, and constraint satisfaction problems. By consolidating toolchains, orchestrating heterogeneous HW with Kubernetes and Slurm, and enforcing rigorous versioning of data, models, and artifacts, inNuCE RI lowers adoption barriers and shortens the path from prototype to engineered system. The utilization of standard storage and versioning systems, in conjunction with the complete accessibility of data and tools within containerized environments, ensures the adherence of the service to the FAIR (Findable, Accessible, Interoperable, Reusable) principles. This facilitates the utilization of the platform for scientific studies, where the rigorous management of data is crucial. More broadly, virtual prototyping platforms such as inNuCE RI offer a low-cost, low-risk environment in which to explore designs and deployment options before committing to HW. Beyond neuromorphic workflows, the infrastructure also supports AIoT use cases that target standard and emerging digital technologies (CPU/GPU/TPU, MCU/TinyML, FPGAs, and other accelerators) through the same NMLOps procedures. This reduces development costs, shortens time-to-market, and broadens the scope of feasible heterogeneous AIoT use cases.

Our major achievement is turning NMLOps from a conceptual adaptation of MLOps into an operational practice, with browser workspaces, a multi-board execution backend, and harmonized evaluation, making cross-target comparisons and iteration routine. In summary, a cloud-native, NMLOps-driven prototyping infrastructure is an effective catalyst for neuromorphic AIoT, as it preserves reproducibility, reduces risk and cost, and makes heterogeneous HW usable for real applications. Future work will evaluate and, where appropriate, implement federation with other complementary research infrastructures, as well as formalizing a broader NEP that embeds NMLOps into service-oriented workflows for multi-stakeholder system integration.”




(Screenshots of the paper attached.)
I recorded the audio of this event, and I believe it will also be released by Pitt Street very soon.

Here is the transcript from the recording: “We've already sold several 1000s and 10s of 1000s of those, and we have, um, there will be available this summer for others and we're taking orders on those right now.”


But this was just Sean talking on the fly, and in my opinion, if it's not released officially via the ASX, then I'm not taking it as fact!

And herein lies the problem with BrainChip and its lack of ASX announcements!
I was of the belief that CEOs are not allowed to make misleading statements; it's against Australian law and carries a punishment of up to 15 years in jail.
 

7For7

Regular
From the crapper… Why is BrainChip suddenly a No. 1 topic for AI-generated stuff?



IMG_0964.jpeg
 

Frangipani

Top 20

15EFE932-9B67-49A0-BF61-5385292D1DFF.jpeg



Links to the 18 March Electronic Specifier article @ChrisBRN already shared a week ago:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-484868

Here's the full-length article:



BrainChip launches AKD1500 Edge AI co-processor

18 March 2026
by News Desk


The AKD1500 is a neuromorphic Edge AI accelerator co-processor chip designed to deliver exceptional performance with minimal power consumption, achieving 800 giga operations per second (GOPS) while operating under 300 milliwatts.

The new AI co-processor is optimised for battery-powered wearables and smart remote sensors, providing essential efficiency for heat-constrained environments.

By upgrading SoCs and microcontrollers without a total redesign, the AKD1500 provides an efficient AI path for industrial, consumer, and medical applications.

In an exclusive interview for Electronic Specifier, Steven Brightfield, CMO at BrainChip, emphasised that the MetaTF environment enables easy conversion, quantisation, and compilation of industry-standard models that accelerates AKD1500 integration for faster time-to-market.

Picture1.jpg
Steven Brightfield, CMO at BrainChip

Outstanding features for performance and efficiency

The AKD1500 achieves a breakthrough in efficiency for low power Edge applications by delivering slightly under 1 tera operations per second (TOPS) while consuming less than 200mW of power in serial mode and 300mW in PCIe mode.

“Its outstanding performance is driven by a purely digital, event-based neuromorphic architecture that processes data only when ‘spikes’ or events occur, avoiding the wasteful energy consumption of always-on compute cycles found in traditional AI accelerators,” Brightfield said.

By processing layers directly on the Akida fabric, it also minimises data movement to off-chip memory, which is a primary source of power drain in Edge devices.
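The performance-per-watt implied by the figures quoted above (800 GOPS at 300 mW, and just under 1 TOPS at 200 mW in serial mode) can be sanity-checked with simple arithmetic. A minimal sketch, using only the peak numbers from the article (real workloads will differ):

```python
# Back-of-the-envelope efficiency check using the peak figures quoted
# in the article; not a benchmark of real workloads.

def tops_per_watt(ops_per_second: float, watts: float) -> float:
    """Convert a raw ops/s figure and a power draw into TOPS/W."""
    return (ops_per_second / 1e12) / watts

# 800 GOPS at 300 mW (PCIe mode, as quoted in the opening paragraph)
pcie = tops_per_watt(800e9, 0.300)

# ~1 TOPS at 200 mW (serial mode, as quoted above)
serial = tops_per_watt(1e12, 0.200)

print(f"PCIe mode:   {pcie:.2f} TOPS/W")   # ≈ 2.67 TOPS/W
print(f"Serial mode: {serial:.2f} TOPS/W") # ≈ 5.00 TOPS/W
```

For comparison, the article's later Jetson example (>10 W just to stay active) makes the same point from the other direction: at the Edge, idle power dominates, which is why the event-based "compute only on spikes" model matters.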

Ideal use cases: battery and thermal constraints

The AKD1500 is specifically designed for environments where battery life and thermal limits are critical, such as:

  • Battery-powered wearables: devices that must monitor health vitals or detect seizures continuously on a single charge
  • Smart sensors: Industrial IoT sensors for predictive maintenance in heat-sensitive or remote locations
  • Austere defence environments: tactical Edge devices where fan-less cooling and minimal power draw (SWaP requirements) are essential for operational success
  • Industrial sensing applications that are remote and battery powered, such as detecting presence or anomaly detection before equipment fails that must be operational 24/7

Integration with x86, ARM, and RISC-V

This Edge AI co-processor offers seamless integration across all major host processing platforms via standard PCIe or low-power Serial (SPI) interfaces.

Brightfield added: “This flexibility allows developers to add neuromorphic processing to existing x86, ARM, or RISC-V systems without requiring a complete hardware platform overhaul, enabling a rapid path to market for intelligent Edge applications.”

On the software side, plug-and-play drivers and ONNX execution runtimes that can be hosted on any ISA support simple software migration.

Screenshot-2026-03-18-152819.jpg
Two popular configurations for the AKD1500 co-processor

Upgrading multiprocessor SoCs in professional environments

In defence, industrial, and enterprise settings, the AKD1500 acts as a specialised “offload engine” for multiprocessor SoCs. By handling real-time pattern recognition and adaptive signal analysis locally, it upgrades the overall system capability without a redesign.

“This allows larger system processors to remain in low-power states or focus on high-level mission logic, significantly improving the overall energy efficiency of the platform,” Brightfield explained.

Even on multi-processor SoCs that contain neural processing units, these are often so deeply integrated that the entire platform must be powered up even for a simple detection. For example, an NVIDIA Jetson platform may need over 10 watts just to stay active with minimal processing.

The AKD1500 can act as an always-on Edge “sentry-mode” device that wakes the larger SoC platform after identifying a detection event, simply by adding a low-cost M.2 card to the system.
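The sentry-mode arrangement described above can be sketched as host-side pseudologic: a low-power always-on detector gates wake-ups of the larger SoC. Everything here is invented for illustration (`WAKE_THRESHOLD`, `sentry_loop`, the lambda detector); it is not a BrainChip API.

```python
# Hypothetical host-side sketch of a "sentry mode" arrangement.
# All names are invented for illustration; this is not a BrainChip API.

WAKE_THRESHOLD = 0.8  # confidence required before waking the main SoC

def sentry_loop(frames, detect, wake_main_soc):
    """Run the low-power detector on every frame; wake the host only on events."""
    wakeups = 0
    for frame in frames:
        score = detect(frame)          # runs on the always-on co-processor
        if score >= WAKE_THRESHOLD:    # event detected: power up the big SoC
            wake_main_soc(frame)
            wakeups += 1
        # otherwise the main SoC stays asleep and draws (near) zero power
    return wakeups

# Toy demo: a fake detector whose "score" is just the frame value.
frames = [0.1, 0.2, 0.95, 0.3, 0.85]
handled = []
n = sentry_loop(frames, detect=lambda f: f, wake_main_soc=handled.append)
print(n, handled)  # 2 wake-ups, for the two frames above threshold
```

The energy win comes entirely from the asymmetry: the detector runs at milliwatts on every frame, while the watts-hungry SoC only pays for the rare frames that cross the threshold.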

Contribution to healthcare and consumer microcontrollers

For lower end systems, the AKD1500 significantly enhances embedded microcontrollers (MCUs) by providing them with the intelligence of a high-end AI processor at a fraction of the power.

“In healthcare and wearables, it off-loads simple MCUs to perform complex tasks like real-time seizure prediction in medical or anomaly detection in industrial PCs, often done today in the Cloud due to power and area constraints,” Brightfield said.

The MCU can easily integrate the AKD1500 using a standard serial port and operate without any heat sinks or temperature issues in a wearable or remotely deployed device.

AI-enabled sensing in medical and defence

The AKD1500 has already been designed into and delivered for high-stakes solutions, including:

  • Defence: partnerships with Parsons and Bascom Hunter for real-time signal analysis and adaptive defence platforms
  • Medical: integration into Nexa smart glasses by Onsor Technologies for the low-power prediction of epileptic seizures, directly improving patient quality of life

The next wave of smart AIoT devices

As a catalyst for the next generation of AIoT, the AKD1500 enables “sovereign AI” – intelligence that is completely independent of the Cloud.

“Its compact, cost-effective package ensures that AI can be embedded in everything from smart doorbells and appliances to industrial factory sensors, making intelligent decision-making ubiquitous in everyday objects,” Brightfield commented.

Picture2.png
BrainChip’s AKD1500 Edge AI co-processor

Advantages of adaptive learning at the Edge

Developers gain a major competitive advantage through on-chip learning, which allows devices to adapt to new data patterns or personalise themselves for a specific user in real time.

Brightfield added: “Unlike conventional AI that requires retraining in the cloud and expensive data transfers, the AKD1500 learns locally, which drastically reduces latency and ensures absolute data privacy for the end-user.”

GlobalFoundries 22FDX integration

The integration of BrainChip’s neuromorphic architecture into the GlobalFoundries 22FDX platform creates a solution with superior compute and memory efficiency.

“The 22nm FD-SOI process is known for its ultra-low leakage, which complements the AKD1500’s event-based architecture to provide an ideal performance-per-watt envelope for the smallest edge devices,” Brightfield explained.

Leveraging MetaTF for machine learning

Machine learning engineers utilise the MetaTF software environment to bridge the gap between traditional AI development and neuromorphic hardware.

“It allows them to easily convert, quantise, and compile models created in standard frameworks like TensorFlow/Keras and PyTorch, reducing development costs and ensuring that existing AI expertise can be immediately applied to Akida-based hardware,” Brightfield underlined.

Benefits of neuromorphic on-chip learning

The Akida neuromorphic architecture mimics the human brain’s efficiency by implementing Spiking Neural Networks (SNNs) in digital logic.

Brightfield added: “This allows the AKD1500 to perform ‘one-shot’ or incremental learning, enabling the chip to recognise a new signature or face after seeing it only a few times – a feat for which traditional AI accelerators require a full, Cloud-based retraining cycle.”
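The benefit of one-shot class addition described in the quote above can be illustrated generically with a nearest-prototype classifier, where "learning" a new class means storing a single example rather than retraining. To be clear, this is not Akida's internal algorithm; it only shows why one-shot learning avoids a retraining cycle.

```python
# Generic illustration of one-shot class addition via nearest-prototype
# matching. This is NOT Akida's algorithm; it only demonstrates the idea
# that adding a class from one example requires no retraining.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class PrototypeClassifier:
    def __init__(self):
        self.prototypes = {}  # label -> stored feature vector

    def learn(self, label, features):
        """'One-shot' learning: remember a single example as the class prototype."""
        self.prototypes[label] = features

    def predict(self, features):
        """Return the label of the nearest stored prototype."""
        return min(self.prototypes,
                   key=lambda label: distance(self.prototypes[label], features))

clf = PrototypeClassifier()
clf.learn("face_A", [1.0, 0.0])
clf.learn("face_B", [0.0, 1.0])
print(clf.predict([0.9, 0.1]))  # face_A

# Adding a new signature later takes one example and no retraining:
clf.learn("face_C", [1.0, 1.0])
print(clf.predict([0.95, 0.9]))  # face_C
```

A conventional accelerator running a fixed, fully-trained network would instead need new labelled data shipped to the Cloud and a fresh set of weights shipped back, which is the latency and privacy cost the quote is contrasting against.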

The road ahead

To round out the article, it is worth noting that volume production for the AKD1500 is scheduled for Q3 2026, marking a critical transition from research and development to large-scale commercial availability.

“The low power and cost will enable many consumer and industrial products that have eBOM and power limitations to add AI to their legacy designs without a complete platform redesign,” Brightfield explained.

The AKD1500 will be complemented by a roadmap of new chips that will further improve the performance, accuracy, and power efficiency of Akida.

“Additionally, BrainChip continues to expand its reach through partnerships with companies like Nex Novus and Unigen (OEM hardware), AILabs, BeEmotion, Digirum, Vedya, MultiCoreWare (AI models), Spanidea (firmware) and EdgeImpulse (AI Tools) ensuring that the AKD1500 ecosystem is robust and ready for global deployment,” Brightfield concluded.

About the author:

Diego.jpg


Diego de Azcuénaga, Contributing Writer
 
Last edited: