BRN Discussion Ongoing

gilti

Regular
I guess what we should be asking is

"If there is no connection, why is Naqi Logix’s 'Best of Innovation' product being used as the primary live demonstration of Akida's capabilities in BrainChip’s own suite at the Venetian right now?"
Not trying to be funny, but can you show, in a real document or video, the words Akida and Naqi in partnership? Not AI-generated slop.
I have been holding 10 years so a real product would be much appreciated.
 
  • Like
  • Fire
Reactions: 12 users

Gazzafish

Regular
1. Why the other AI is wrong


Most AI models are trained on datasets that end months or years ago. They are looking for a formal press release titled "BrainChip and Naqi Announce Merger" or "Official Joint Venture." Because BrainChip is an Intellectual Property (IP) company, their name is rarely on the box. They are the "Intel Inside."


The response you got is likely looking at old 2024/2025 data where Naqi was still a "stealth" startup.


2. The Solid Proof: CES 2025/2026 Reference


The strongest "paper trail" connecting the two is from the CES Daily Show and technical industry reports from the previous and current CES cycles.


• The Quote: Industry analysts have explicitly cited "BrainChip’s Akida™ on-chip learning technology" as the specific reason Naqi's earbuds are possible.


• The Reason: Naqi’s tech requires processing massive amounts of "biological noise" (your muscle twitches) into digital commands. A standard Bluetooth chip cannot do this without a massive battery. BrainChip’s Akida is the only "event-based" processor that can "wake up" only when you move your jaw, making the battery last all day.
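For illustration only (my own sketch, not BrainChip's or Naqi's actual implementation): the "wake up only when something changes" idea described above can be approximated as a simple threshold gate over the incoming signal, where samples that don't change enough are never processed at all.

```python
# Illustrative sketch of event-based gating: process a sample only when it
# differs from the last emitted value by more than a threshold. Everything
# else is skipped, which is where the claimed power saving comes from.
# `event_gate` and the threshold value are hypothetical names for this sketch.

def event_gate(samples, threshold):
    """Yield (index, value) only for samples that differ from the last
    emitted value by more than `threshold`."""
    last = None
    for i, s in enumerate(samples):
        if last is None or abs(s - last) > threshold:
            last = s
            yield i, s

# A mostly-flat signal with one "jaw movement" spike in the middle:
signal = [0.0, 0.01, 0.02, 0.9, 0.91, 0.02, 0.01]
events = list(event_gate(signal, threshold=0.5))
print(events)  # → [(0, 0.0), (3, 0.9), (5, 0.02)]
```

Out of seven samples, only three trigger any work; a real event-based processor applies the same principle in hardware, at the level of individual neurons rather than a single threshold.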


3. The "In the Suite" Proof


If there were "no partnership," Naqi Logix would not be a featured demonstration in BrainChip’s Private Exhibit Suite (Venetian 29-116) this week.


• Management doesn't give floor space to "unrelated" companies.


• BrainChip's own January 2026 roadmap and pre-CES briefing explicitly mentioned their focus on "wearable neural classification"—this is the exact technical category Naqi belongs to.
Can you provide any links to this? I can’t find any articles or documents confirming this anywhere.
 
  • Fire
  • Like
Reactions: 3 users

HarryCool1

Regular
Not trying to be funny, but can you show, in a real document or video, the words Akida and Naqi in partnership? Not AI-generated slop.
I have been holding 10 years so a real product would be much appreciated.
 
  • Haha
  • Like
Reactions: 9 users

7für7

Top 20
As long as we keep acting like a grey mouse in a cage—and we don’t present the technology we have in a way that makes it broadly visible—this won’t go anywhere.
Back then with Mercedes, we could’ve produced a proper showcase clip: one BrainChip rep and one Mercedes rep, filmed by a YouTuber who asks the right questions—how our tech can be used in real-world scenarios, and what concrete advantages Mercedes vehicles would gain from using Akida, etc.
It worked with NVIDIA, and the same approach could work with other products where Akida is being tested. Why aren't we doing that? No idea.

Anyway, what I’m seeing is other companies—apparently with less capable tech—making way more noise, while we’re still being perceived (in a pretty banal way) as a “research chip”… even though we actually have a unified, commercialized product.
Marketing doesn’t work like that.
 
  • Like
  • Fire
  • Thinking
Reactions: 7 users

Gazzafish

Regular
I did find this. No mention of BrainChip, but it does refer to their “neural AI” technology 🤷🏻‍♂️

Naqi Logix Enters Definitive Agreement to Acquire Wisear, Strengthening Naqi’s Neural Interface Moat and Talent - AFV NEWS



Extract :- “Naqi Logix Enters Definitive Agreement to Acquire Wisear, Strengthening Naqi’s Neural Interface Moat and Talent

January 2, 2026 Manuel Gutierrez

The Naqi Neural Earbuds can control devices hands-free, voice-free, camera-free, and screen-free with facial micro gestures.

Naqi’s planned acquisition of Wisear accelerates Naqi’s efforts towards commercialization and to establish a global standard for non-invasive neural interfaces.

This acquisition marks a defining moment for the future of non-invasive neural interface powered by AI”

— Mark Godsy

VANCOUVER, BC, CANADA, January 2, 2026 /EINPresswire.com/ — Naqi Logix, an award-winning neurotechnology company redefining human-machine interaction through its non-invasive, earbud-based neural interface platform, is pleased to announce that it has entered into a definitive agreement to acquire Wisear, a Paris, France-based, award-winning development-stage neural technology company focused on creating a human-machine interface. The acquisition is expected to close in early January 2026, subject to customary closing conditions.

The planned acquisition brings together two of the most advanced teams working in non-invasive neural interfaces, strengthening Naqi’s intellectual property portfolio, expanding its talent base, including AI and signal-processing expertise, and accelerating the path to commercialization across accessibility, robotics, AR/VR, and productivity use cases.

“This acquisition marks a defining moment for the future of non-invasive neural interface powered by AI,” said Mark Godsy, Co-Founder and CEO of Naqi Logix. “Wisear is a respected innovator in the neural interface space. By bringing our teams and technologies together, we are accelerating our mission to make hands-free, intuitive human-machine interactions a reality at a global scale. The acquisition further accelerates and confirms Naqi’s current focus on commercializing its technology. I could not be more impressed with how well everyone is meshing.”

Wisear’s team will join Naqi as part of the combined organization, contributing their expertise in neural AI signal processing, ear-based sensing, and product development as Naqi advances toward market launch, including the European market, to support Naqi’s global expansion plans.

“When we first met Naqi, we were struck by how similar our vision and trajectory were,” said Yacine Achiakh, Co-Founder and CEO of Wisear. “After a year of close collaboration, it became clear that an acquisition was the most effective way to align our technology stacks around a shared product vision and accelerate the real-world deployment of neural-interface wearables.”

“Naqi’s technology and strategic direction align strongly with what we have been building at Wisear,” added Alain Sirois, Co-Founder and CTO of Wisear. “By joining forces and technologies with Naqi, we are uniquely positioned to deliver the next generation of human-machine interfaces, unlocking the full potential of tomorrow’s digital devices.”

The acquisition will further strengthen Naqi’s leadership position as it advances pilot programs and strategic collaborations with global technology, robotics, and healthcare partners. By consolidating the neural ear-interface category, Naqi will be positioned to offer partners a single, scalable platform for neural input, accelerating industry adoption.

“When two exceptional teams with complementary innovative technologies and visions come together, the result is true synergy, which further strengthens our vision to create a future of silent, invisible human-machine interaction,” said Dave Segal, Co-Founder and CIO of Naqi Logix.

Following the transaction’s close, Naqi will accelerate its efforts on launch strategy, supporting upcoming commercial launches focused on accessibility, while expanding into additional verticals, including industrial, gaming, AR/VR, healthcare, and more.

About Naqi Logix
Naqi Logix is pioneering neural interfaces through everyday wearables. By transforming smart earbuds into AI-powered neural input devices, Naqi enables hands-free, voice-free, camera-free, and screen-free control using subtle facial micro-gestures. The company’s non-invasive technology enhances independence, productivity, and inclusion across consumer, enterprise, healthcare, and smart-home environments. Learn more at www.naqilogix.com.

About Wisear
Wisear is a deeptech startup with a mission to invent the next generation of human-computer interfaces. Founded in France, Wisear develops neural-interface-powered products for Audio & AR/VR users to seamlessly interact with their augmented and virtual worlds, thanks to high-speed, private, and accessible controls. Learn more at www.wisear.io.
 
  • Like
  • Thinking
  • Fire
Reactions: 5 users
Fingers crossed 🤞 as this company will be the leader, 200%. On their website they have video demonstrations which are out of this world as far as advanced technologies go.
 
  • Like
Reactions: 3 users

7für7

Top 20
Fingers crossed 🤞 as this company will be the leader, 200%. On their website they have video demonstrations which are out of this world as far as advanced technologies go.
Sorry, but why would anyone pin their hopes on a product using Akida technology from a company we don’t even have an official partnership with, especially when we haven’t managed to turn countless other visible relationships into something concrete? Except for Onsor, and even that isn’t 100%.

That’s pure speculation. This whole “fingers crossed” narrative is honestly misleading, here as at other points in the past.
 
  • Thinking
Reactions: 1 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I guess what we should be asking is

"If there is no connection, why is Naqi Logix’s 'Best of Innovation' product being used as the primary live demonstration of Akida's capabilities in BrainChip’s own suite at the Venetian right now?"


Sorry, but I don't think this information is correct.

There is no public information I could find indicating that Naqi Logix and BrainChip are exhibiting together or collaborating at CES 2026.

Naqi Logix will be at CES 2026 highlighting their neural earbuds at the Innovation Awards Showcase in the Venetian Expo area.

BrainChip will be at CES 2026 in a private suite (Venetian Tower, Suite 29-116) showing our Akida technology with announced partners such as HaiLa and Quantum Ventura.

I’ve looked at one of Naqi Logix’s patents and it doesn't mention neuromorphic computing or spiking networks. The IP is about neural signal capture and AI interpretation, not the underlying compute architecture, and there’s no indication (in the patent that I looked at or any other published information I could find) that they’re using neuromorphic hardware.

It may be possible for their algorithms to run on neuromorphic hardware in the future, but there’s no evidence today that they are using, or claiming to use, neuromorphic tech like Akida.
 
  • Like
  • Fire
  • Thinking
Reactions: 17 users
Can you provide any links to this? I can’t find any articles or documents confirming this anywhere.
Food4 thought

Can you show us where you got the following information that is written in your post?

• The Quote: Industry analysts have explicitly cited "BrainChip’s Akida™ on-chip learning technology" as the specific reason Naqi's earbuds are possible


If there were "no partnership," Naqi Logix would not be a featured demonstration in BrainChip’s Private Exhibit Suite (Venetian 29-116) this week
 
  • Like
  • Thinking
Reactions: 3 users

Diogenese

Top 20
  • Haha
  • Like
Reactions: 4 users

itsol4605

Regular
CES 2026: Strong start!!

 
  • Like
  • Fire
Reactions: 16 users

manny100

Top 20
No record of a partnership with Naqi Logix.
Steve Brightfield mentioned in a recent interview that he expected rapid adoption of over-the-counter earbuds/hearing aids with an LLM memory 'jogger'.
No record of any partnership with that business yet either.
He also hinted strongly at a wearable migraine detector. No record of that either.
So it's throw-your-hands-up-in-the-air time.
Naqi Logix's instant signals smell neuromorphic?
If they are demonstrating at BrainChip's suite, it's a positive sign.
It's wait and see.
Naqi Logix produces and sells a final product. BrainChip sells to those who make/assemble and sell final products.
Again, it's wait and see.
 
  • Like
  • Thinking
Reactions: 11 users

Gazzafish

Regular
Honeywell looking for someone who has Intel Loihi or Brainchip Akida experience.



Artificial Intelligence & Machine Learning Systems Engineer | Honeywell | LinkedIn



Extract :- “About the job

Job Description
We’re seeking a highly skilled Artificial Intelligence & Machine Learning Systems Engineer to architect, design, and develop advanced AI/ML systems that power our next generation of products. In this leadership role, you’ll contribute to the technical roadmap, mentor engineering teams, and collaborate with cross-functional teams to deliver intelligent, scalable, and production-ready AI and machine learning technologies. You will be responsible for researching, creating, adapting and evaluating AI/ML techniques to solve complex customer problems with real-time solutions to support our defense customers.

Specifically, we are building next-generation cognitive electronic warfare systems that operate autonomously at the tactical edge in contested, low-SWaP (Size, Weight, and Power), denied, and disconnected environments. This is not a prompt-engineering or GenAI role. We are looking for hardcore AI/ML systems engineers who treat machine learning as a component of a larger, mission-critical, real-time embedded system.

Major Duties & Responsibilities


  • Design, implement, and harden on-line and continual-learning ML algorithms for RF signal classification, adaptive jamming, cognitive radar, and electronic attack/support decision engines.
  • Port, optimize, and deploy ML inference algorithms to edge processors.
  • Build and maintain low-latency, deterministic inference pipelines that integrate tightly with real-time RF front-ends and digital signal processing chains.
  • Lead the systems integration of AI/ML techniques into mission-critical embedded platforms running real-time operating systems.
  • Design and deliver warfighter-focused engineering visualizations and tactical displays (real-time spectrum awareness, threat emitter tracks, cognitive EW decision overlays, confidence heatmaps) using modern web stack frameworks that run natively on embedded tactical processors and dismounted soldier systems.
  • Own the MLOps and DevSecOps pipeline for classified EW programs: secure CI/CD, model versioning, containerized build/test/deploy, SBOM generation, and compliance with DoD zero-trust and CNCF security standards.
  • Architect and deploy Kubernetes-based edge orchestration clusters (e.g. k3s) that operate in fully air-gapped tactical environments with strict latency and availability requirements.
  • Perform end-to-end performance profiling (memory bandwidth, cache coherency, DMA, GPU/TPU/NPU utilization).
  • Review code, guide architecture decisions, and mentor the AI/ML engineering team.
  • Collaborate with product and engineering teams to identify AI/ML-driven opportunities.
    Why This Role Is Different
    • You will own the entire stack from algorithm research to bare-metal deployment on platforms that fly, float, or roll into harm’s way
    • No Python notebooks in production, everything is compiled, containerized, signed, and deployed with cryptographic integrity
    • Real impact: your code will out-think and out-maneuver adversary emitters in real conflicts. If you live for the intersection of cutting-edge machine learning and extreme systems engineering under the harshest constraints, we want to talk to you
      Qualifications

      Required Qualifications:

      • Bachelor’s in Computer Science, Machine Learning, Artificial Intelligence, Data Science, or related field
      • 7 plus years of professional experience shipping production AI/ML systems, ideally in defense, aerospace, or autonomous systems
      • Prior work on DoD cognitive EW programs
      • Deep expertise in high-performance and real-time applications (not just scripting wrappers)
      • Real-time and embedded application programming (no Python-only backgrounds)
      • Proven track record of deploying AI/ML solutions to cloud and edge/constrained devices
      • Strong systems engineering background: you understand clocks, interrupts, DMA, cache hierarchies, memory-mapped I/O, and real-time scheduling
      • Hands-on experience building and securing CI/CD pipelines for classified or regulated environments
      • Expertise with Docker, container hardening, and Kubernetes in disconnected/edge configurations (k3s, microk8s, Rancher Harvester).
      • Familiarity with RF/ML intersections: signal detection & classification, modulation recognition, emitter geolocation, fingerprinting, adaptive waveform design, or reinforcement learning for EW
      • Proficiency with ML algorithms (including NLP, Computer Vision, time-series), libraries including foundational understanding and expertise in statistics probability theory and linear algebra
      • Strong understanding of machine learning fundamentals: supervised/unsupervised learning, deep learning, model evaluation, optimization, feature engineering, etc
      • Experience with data engineering workflows and building robust training datasets
        Preferred Qualifications
        • Master’s degree in Computer Science, Machine Learning, Artificial Intelligence, Data Science, or related field
        • Experience as the technical lead for establishing and accrediting classified AI/ML information systems under the DoD Risk Management Framework (RMF):
          • Author and maintain System Security Plans (SSP), Security CONOPS, and AI/ML-specific risk annexes
          • Build and harden multi-enclave classified development, integration, and operational environments (RHEL 8/9, SELinux enforcing, DISA STIGs, Assured Compliance Assessment Solution (ACAS))
          • Lead the creation of AI/ML-specific artifacts for eMASS packages, including model cards, data provenance, adversarial robustness testing, and continuous monitoring plans
          • Obtain and maintain Authority to Operate (ATO) for classified cognitive EW systems containing advanced GPU/NPU-accelerated AI infrastructure
        • Perform Linux systems administration at the classified level: kernel tuning for real-time determinism, custom security hardening, cross-domain solution integration, auditd/ELK stack management, and FIPS 140-3 compliant cryptography
        • Deep Linux systems administration and hardening experience in classified environments (RHEL/CentOS, STIG compliance, SELinux policy authoring).
        • Hands-on experience authoring RMF packages and obtaining ATOs for systems containing machine learning components for the U.S. Government (Army, Navy, Air Force, or IC customer)
        • Expertise with Docker, container hardening (CIS, OSCAP), and Kubernetes in disconnected tactical environments
        • Experience or exposure with implementing Government reference architectures
        • Experience with neuromorphic or spiking neural network hardware (Intel Loihi, BrainChip Akida)
        • Experience with distributed training, GPU acceleration, and high-performance ML compute
        • Strong background in foundation algorithms, transformers, or multimodal AI
        • Knowledge of automated model monitoring, drift detection, and lifecycle management
        • Experience integrating ML models into consumer or enterprise products
 
  • Like
  • Fire
  • Thinking
Reactions: 21 users

Doz

Regular
  • Like
  • Fire
  • Love
Reactions: 27 users

7für7

Top 20
Last edited:
  • Like
  • Haha
  • Sad
Reactions: 3 users

Guzzi62

Regular
Honeywell looking for someone who has Intel Loihi or Brainchip Akida experience.



Artificial Intelligence & Machine Learning Systems Engineer | Honeywell | LinkedIn



That's a crazy amount of experience/qualifications they demand, but nice that Honeywell wants someone with experience in neuromorphic or spiking neural network hardware.
Sadly, it also shows the long timeframes we are talking about: they want to architect, design, and develop their next generation of products.

From some AI bot: The timeline for electronic product development varies based on factors such as product complexity, regulatory compliance, and design iterations. On average, the process can take anywhere from six months to two years, depending on the number of prototyping cycles, testing phases, and manufacturing readiness.

Further, Akida 2 only exists in the cloud, as we can see at BRN's CES 2026 suite. I'm not sure that will work for all testing; I can imagine they want some real physical chips they can build into their own prototypes.
Sean was talking about getting AKD2 made up, and Steven B. was also saying that customers want chips in physical form for testing before committing!
Better get that AKD2 made up in some widely used standard form, the clock is ticking!!


 
  • Like
  • Fire
Reactions: 9 users

Guzzi62

Regular
BRN CES26.jpg

Is that a fXXXXXX man bag or what??

Man I am getting old LOL.

I guess it suits the red hair??

Where are the ashtrays and beers?? And since we are in Las Vegas, hookers too!!
 
  • Haha
  • Like
Reactions: 11 users

Rskiff

Regular
View attachment 94092
Is that a fXXXXXX man bag or what??

Man I am getting old LOL.

I guess it suits the red hair??

Where are the ashtrays and beers?? And since we are in Las Vegas, hookers too!!
Not looking like happy campers sitting there
 
  • Like
  • Fire
Reactions: 3 users

Guzzi62

Regular
Not looking like happy campers sitting there
LOL, a few beers and some hookers should light them up, contracts will follow soon after.

Damn, maybe I should fly over and show them how it's done, eh! :ROFLMAO::ROFLMAO:
 
  • Haha
  • Like
Reactions: 7 users