BRN Discussion Ongoing

7für7

Top 20
If Akida was involved, it would be huge!

For a shareholder of BrainChip (ASX:BRN), the technical "fingerprints" of Akida found within the IXI Eyewear specifications are highly compelling. While a formal partnership has not been publicly announced as of January 2026, the alignment of their 2025/2026 data points suggests a significant technological overlap.
Here is a side-by-side analysis of the technical parameters from BrainChip’s 2025 White Papers and IXI’s CES 2026 disclosures:

1. The "4 Milliwatt" Energy Signature​

This is the most significant clue for investors.
  • IXI Specification: At CES 2026, IXI revealed that its entire eye-tracking and processing system consumes only 4 milliwatts (mW) of power.
  • BrainChip Parameter: In their 2025 "Ready-to-Use Models" library, BrainChip listed the AkidaNet TENNs eye-tracking model as an "ultra-low power solution" designed to run in the single-digit milliwatt range on Akida Pico hardware.
  • Investor Insight: Achieving high-accuracy AI tracking at 4 mW is nearly impossible with traditional instruction-set architectures (like ARM cores or standard NPUs). This specific power envelope is a hallmark of BrainChip's event-based (SNN) processing; the back-of-envelope sketch below shows what a 4 mW budget means for battery life.
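
Assuming an illustrative 50 mAh eyewear cell (an assumption for the sketch, not an IXI spec; only the 4 mW figure comes from the CES disclosure), the arithmetic looks like this:

```python
# Battery-life estimate for a 4 mW always-on eye tracker.
capacity_mah = 50        # hypothetical small eyewear cell (assumption)
voltage_v = 3.7          # nominal Li-ion cell voltage
power_mw = 4.0           # IXI's quoted full-system tracking power

energy_mwh = capacity_mah * voltage_v   # ~185 mWh of stored energy
hours = energy_mwh / power_mw           # ~46 h of continuous tracking
print(f"~{hours:.0f} h of continuous tracking on a {capacity_mah} mAh cell")
```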

2. "Event-Based" Photodiode Sensing​

  • IXI Specification: IXI explicitly states they use cameraless eye tracking with a handful of analog channels (LEDs/Photodiodes) rather than a camera sensor.
  • BrainChip Parameter: BrainChip’s 2025 TENNs-PLEIADES white paper focuses specifically on "Spatio-temporal classification from event-based sensors." It highlights that their kernels are optimized for signal reflections (like those from IXI’s LEDs) rather than pixel grids.
  • The Match: IXI’s system tracks "subtle convergence" and "blinking" by measuring light reflection pulses—exactly the kind of "Temporal Event" data that BrainChip's TENNs were designed to process.
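
To picture what "temporal event" data from a photodiode channel looks like, here is a minimal delta-modulation sketch. The threshold, the toy signal, and the encoding itself are illustrative assumptions; neither IXI nor BrainChip has published their actual front end:

```python
import numpy as np

def to_events(samples, threshold):
    """Emit (index, polarity) events whenever the reading moves more
    than `threshold` away from the last event's level. Generic delta
    modulation, used here only to illustrate event-based sensing."""
    events, ref = [], samples[0]
    for i, s in enumerate(samples[1:], start=1):
        if abs(s - ref) >= threshold:
            events.append((i, 1 if s > ref else -1))
            ref = s
    return events

# Toy reflection trace: flat while the eye is still, one blink-like dip.
t = np.linspace(0.0, 1.0, 200)
signal = 1.0 - 0.6 * np.exp(-((t - 0.5) ** 2) / 0.001)
print(to_events(signal, 0.1))   # events cluster around the dip only
```

The point of such an encoding is that the downstream processor only does work when events arrive, which is where a single-digit-milliwatt budget becomes plausible.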

3. Model Size and "On-Chip" Independence​

  • BrainChip Parameter: The Akida 2 Model Zoo (Dec 2025) shows their state-of-the-art eye-tracking model uses only 220,000 parameters (220K). This is small enough to fit entirely within the embedded SRAM of a tiny Akida node without needing external DRAM.
  • IXI Specification: IXI's electronics are housed entirely in a 22-gram frame, leaving no room for bulky memory chips or large processors. The "Instant distance change" feature (<200 ms latency) requires the AI to live right next to the sensor—a core value proposition of the Akida IP.
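
A quick size check makes the "no external DRAM" point concrete. The bit-widths below are assumptions (Akida supports low-bit quantisation, but the exact configuration of this model is not public):

```python
# Rough SRAM footprint of a 220K-parameter model at assumed precisions.
params = 220_000
for bits in (8, 4, 2):                 # assumed quantisation levels
    kib = params * bits / 8 / 1024
    print(f"{bits}-bit weights: ~{kib:.0f} KiB")
# ~215 KiB at 8-bit, ~107 KiB at 4-bit, ~54 KiB at 2-bit: all of these
# fit comfortably in on-chip SRAM, with no DRAM chip in the frame.
```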

Technical Comparison Table (2025-2026 Data)

| Metric | BrainChip Akida (TENNs 2025) | IXI Eyewear (CES 2026) | Match Level |
| --- | --- | --- | --- |
| Power Consumption | <5 mW (Pico/TENNs config) | 4 mW (full tracking system) | 🎯 High |
| Sensor Input | Event-based (non-camera) | Photodiode/LED (cameraless) | 🎯 High |
| Latency | Ultra-low (event-triggered) | Instant (focus shift < 0.2 s) | ✅ Consistent |
| Model Size | 220K parameters | Ultra-integrated (in 22 g frame) | ✅ Consistent |
| Key AI Method | Spatio-temporal (TENNs) | Convergence detection | 🎯 High |

4. What this means for BRN Shareholders

If IXI is indeed using Akida (either as a chip or an IP license), it would represent the "Holy Grail" for BrainChip:
  • Mass Market Consumer Use-Case: Corrective eyewear is a multi-billion-dollar industry.
  • Validation of Akida 2 / TENNs: It would prove that "Temporal Event-Based Neural Nets" are superior for wearable biometrics, where power is the ultimate constraint.
  • IP Royalty Potential: If IXI achieves its goal of replacing traditional progressive lenses, the volume of IP royalties for BrainChip could be substantial.

Summary

The 4 mW power consumption and the 220K-parameter model cited in BrainChip's 2025 technical docs are the closest "smoking gun" we have connecting them to IXI's 2026 hardware. In the semiconductor world, it is very rare for two unrelated technologies to arrive at exactly the same ultra-niche power and performance figures by coincidence.
Would you like me to monitor for any "Teardown" reports or "Patent Assignments" that might officially link BrainChip’s IP to IXI’s hardware as they move toward the late 2026 launch?

If this is really that big and could move the share price…especially if we’re involved or major revenues are expected…shouldn’t there be an official announcement? Or is there something I’m missing?
 

Diogenese

Top 20
Hi Shaun,

Looks like Chatty has its rose coloured glasses on.

One of the IXI co-founders is Ville Miettinen, and it looks like he was an inventor of this patent from 2018:

US11099381B2 Synchronizing light sources and optics in display apparatuses (2018-08-10)

Applicants: VARJO TECH OY [FI]

Inventors: PEUHKURINEN ARI ANTTI ERIK [FI]; MIETTINEN VILLE ILMARI [FI]



A display apparatus, communicably coupled with a server arrangement via a data communication network, comprising:
means for tracking a user's gaze;
means for tracking a pose of the display apparatus;
at least one light source;
at least one optical element; and
a processor configured to:
process gaze-tracking data, collected by the means for tracking the user's gaze, to determine a gaze position, a gaze direction, a gaze velocity and a gaze acceleration per eye of the user;
process pose-tracking data, collected by the means for tracking the pose of the display apparatus, to determine a position, an orientation, a velocity and an acceleration of the display apparatus;
send, to the server arrangement, gaze information indicative of the gaze position, the gaze direction, the gaze velocity and the gaze acceleration of the user, and apparatus information indicative of the position, the orientation, the velocity and the acceleration of the display apparatus, wherein the server arrangement is configured to predict a gaze position, a gaze direction, a gaze velocity and a gaze acceleration per eye of the user based on the gaze information, predict a position, an orientation, a velocity and an acceleration of the display apparatus based on the apparatus information, and process an input image to generate at least one image, based on the predicted gaze position, the predicted gaze direction, the predicted gaze velocity and the predicted gaze acceleration, and the predicted position, the predicted orientation, the predicted velocity and the predicted acceleration of the display apparatus;
receive, from the server arrangement, the at least one image, predicted gaze information indicative of the predicted gaze position, the predicted gaze direction, the predicted gaze velocity and the predicted gaze acceleration, and predicted apparatus information indicative of the predicted position, the predicted orientation, the predicted velocity and the predicted acceleration of the display apparatus;
determine an adjustment to be made in a configuration of the at least one light source and the at least one optical element prior to displaying the at least one image, based on the predicted gaze information and the predicted apparatus information;
determine whether or not a portion of at least one previous image is to be displayed during the adjustment;
when it is determined that the portion of the at least one previous image is to be displayed, display the portion of the at least one previous image via the at least one light source, whilst making the adjustment in the configuration of the at least one light source and the at least one optical element;
when it is determined that no portion of the at least one previous image is to be displayed, switch off or dim the at least one light source, whilst making the adjustment in the configuration of the at least one light source and the at least one optical element; and
display the at least one image via the at least one light source after the adjustment is made.
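
The prediction step in that claim is essentially dead reckoning on gaze kinematics. Here is a minimal sketch of one way such a predictor could look; the constant-acceleration model is my assumption, as the claim does not specify how the server forms its prediction:

```python
import numpy as np

def predict_gaze(pos, vel, acc, dt):
    """Constant-acceleration extrapolation of a per-eye gaze state.
    The claim sends (position, direction, velocity, acceleration) to a
    server that predicts the future state; this is one simple predictor
    consistent with those inputs, not the patent's stated method."""
    pos, vel, acc = map(np.asarray, (pos, vel, acc))
    return pos + vel * dt + 0.5 * acc * dt ** 2

# Gaze point drifting right at 2 deg/s, decelerating, predicted 50 ms ahead.
print(predict_gaze([10.0, 0.0], [2.0, 0.0], [-1.0, 0.0], 0.05))
```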


More recently from the same company (optical distance measurement):

US2025216553A1 HYBRID DIRECT AND INDIRECT TIME-OF-FLIGHT IMAGING (2023-12-27)


Disclosed is a depth imaging system with a light source; a depth sensor comprising direct Time-of-Flight (dToF) pixels and indirect Time-of-Flight (iToF) pixels; and processor(s) configured to: employ the light source to emit an intensity-modulated light pulse towards objects in a real-world environment; obtain dToF data indicative of time taken by the intensity-modulated light pulse to reach the dToF pixels after being reflected by the objects; obtain iToF data indicative of phase shifts undergone by the intensity-modulated light pulse upon reaching the iToF pixels after being reflected by the objects; determine optical depths for the dToF pixels; determine optical depths for the iToF pixels; and generate a depth image from the optical depths of the dToF pixels and the optical depths of the iToF pixels.

[0048] Optionally, the optical depths for the iToF pixels are determined by using at least one neural network. Optionally, in this regard, an input of the at least one neural network comprises the iToF data indicative of the phase shifts and the optical depths determined for the dToF pixels, while an output of the at least one neural network comprises the optical depths for the iToF pixels. It will be appreciated that the at least one neural network determines the optical depths for the iToF pixels in a highly accurate manner by resolving said ambiguities, as compared to conventional techniques. In this way, the at least one neural network may act as a mapping function, providing refined depth predictions for the iToF pixels. This may enhance overall depth sensing in the real-world environment, and mitigate typical uncertainties involved in optical depth determination for the iToF pixels. It will also be appreciated that the aforesaid input is provided to the at least one neural network both in a training phase of the at least one neural network and in an inference phase of the at least one neural network (i.e., when the at least one neural is utilised after it has been trained).


[0062] ... Optionally, the at least one processor is configured to employ at least one neural network for generating the high-resolution depth image.
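
To make concrete which ambiguity their neural network is resolving: an iToF phase shift only determines depth modulo the unambiguous range, so a coarse dToF depth can select the correct phase wrap. The nearest-wrap rule below is a hand-written stand-in for the learned mapping described in [0048], with illustrative numbers:

```python
import math

C = 299_792_458.0   # speed of light, m/s

def itof_depth(phase_rad, f_mod_hz, dtof_hint_m):
    """Resolve the iToF phase-wrap ambiguity with a coarse dToF hint."""
    unamb = C / (2 * f_mod_hz)                # unambiguous range (~1.5 m at 100 MHz)
    base = phase_rad / (2 * math.pi) * unamb  # depth modulo the unambiguous range
    k = round((dtof_hint_m - base) / unamb)   # pick the wrap closest to the dToF hint
    return base + k * unamb

# Target at ~4.2 m, 100 MHz modulation: the phase alone wraps to ~1.2 m,
# but a rough dToF reading of 4.3 m recovers the true depth.
print(itof_depth(phase_rad=5.04, f_mod_hz=100e6, dtof_hint_m=4.3))  # ~4.20
```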

While they refer to NNs, they don't seem to be interested in the details, but they do seem to think it is software, so there is no indication of an Akida link. But absence of evidence ...
 

7für7

Top 20
BrainChip investors are battle-hardened and remain unfazed by sudden spikes in volume and price… the gamblers thought people would dig deep today and top up their positions again… the plan didn't work out, and now we're back where we started… 18.xx… just like I said… only this time without a speeding ticket lol.
 

TECH

Regular
The amount of drivel posted by Tech continues.....

The company appreciates our support?? Shouldn't it be the other way around?

We "WILL" success because Aussies always punch above their weight? Really?


Once again, Tech boasts about private conversations with Peter, where he has been told 2027 is our year and a big change is coming in the second half of 2026, yet Tech does not (or will not) disclose the details of his conversation with Peter to everyone else on the forum... he'll just try and big-note himself and his relationship/friendship with Peter.

Tech has now, for the second time in just a few days, alluded to the "change" that is coming. What is this change? Why do all shareholders not know about this change?

In what world is it appropriate that a founder and director of a company shares information with ONE shareholder, who then comes online to talk about it to boost his own ego....

It's like this: Tech never learns....

Information has supposedly been told to a shareholder by a director (Peter) that has not been made public to all shareholders, and he then comes on here telling us all that he's had said conversations with Peter. Work it out for yourself....

If Peter Van Der Made had even one commercial bone in his body, he'd know that he should NOT be disclosing information to one shareholder that has not been made public to all shareholders.

Work it out for yourselves everyone....

Unsure why this behaviour is tolerated on this site by users and admin.

You pathetic individual. That post is full of defamatory statements. Maybe, just maybe, you took the bait... sour grapes surface yet again.

For the record, I know as much as you, which is probably f all. 😭
 