BRN Discussion Ongoing

From the crapper site, by FF…:
I personally would rate both Samsung and Hyundai as strong contenders:

1. Samsung, because they lent Anil Mankar their DVS camera to prove out how AKD1000 could process their DVS event stream on device without the need to convert it.

2. Hyundai, because they own Boston Dynamics, who provided Spot, their robotic dog, to the Fraunhofer Research Institute to experiment with using Prophesee's vision sensor and BrainChip's AKD1000 to prove out that Spot could be controlled by hand gestures. I also note that Hyundai has used Fraunhofer directly over many years to undertake research on its behalf, not to mention Prophesee's partnership with BrainChip.

The above have been the subject of multiple posts here and elsewhere over the last four years.

My opinion only DYOR

Fact Finder
 
  • Like
  • Thinking
  • Fire
Reactions: 22 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hi MD,

I've been looking into Arm's involvement in relation to 1) SoftBank's Project Izanagi, 2) OpenAI's custom AI chips, 3) the Stargate Project, and 4) Arm Holdings' chip development. Looking broadly at these projects, they are interconnected through strategic collaborations and shared objectives in advancing AI and semiconductor technologies. And Arm, in particular, likely plays a pivotal role in all of these initiatives IMO.

Bearing in mind that BrainChip's Akida processor has been designed to integrate with Arm's product families, and that it actually enhances Arm's product offerings and performance in edge AI applications, it would be quite bizarre if Arm were not to eventually utilize our technology to their competitive advantage IMO.

However, given BrainChip's Akida processor is primarily designed for ultra-low-power edge AI applications and Arm's upcoming chip, due for launch in 2025, is intended for high-performance data centre servers, the integration of Akida into Arm's server CPU is probably pretty unlikely at this time. But in future, you'd have to think it would be extremely likely, especially if further Arm chip designs focus more on devices and applications at the edge.

BTW, initially I thought that Arm's decision to start designing its own chips might be aligned with SoftBank's broader ambitions under Project Izanagi, but I can't find any explicit confirmation that Arm's decision is a formal part of the Project Izanagi plan. It seems as though the aim of Project Izanagi is to pivot SoftBank towards more direct technological advancement and investment in areas like AI, robotics, and semiconductor chips, which will involve building infrastructure, focusing on AI-specific hardware and optimizing semiconductor designs.

1. SoftBank's Project Izanagi: Initiated in February 2024, Project Izanagi is SoftBank's ambitious venture aiming to raise up to $100 billion to develop AI processors that rival industry leaders like NVIDIA. This project seeks to leverage Arm's design expertise to create advanced AI chips, with prototype processors expected by summer 2025 and mass production targeted for 2026. Arm's involvement is crucial, as SoftBank owns a significant stake in Arm, facilitating seamless collaboration in chip design and development.

2. OpenAI's Custom AI Chips: OpenAI is developing its own AI chips in partnership with Broadcom, utilizing TSMC's 3-nanometer manufacturing technology. The goal is to finalize the chip design in the near future and commence mass production by 2026. While Arm isn't directly mentioned in this collaboration, its architecture could influence the design, given Arm's prominence in AI and data centre applications. SoftBank is a significant investor in OpenAI, but it doesn't seem to be directly involved in the development of these custom AI chips.

3. Stargate Project: Announced in early 2025, the Stargate Project is a monumental $500 billion initiative focused on building AI infrastructure across the United States. Spearheaded by SoftBank, OpenAI, Oracle, and other partners, the project plans to invest $100 billion over four years to enhance AI capabilities. Masayoshi Son, SoftBank's founder, serves as chairman. Arm's technology is integral to this project, providing the necessary chip designs for the AI infrastructure being developed.

4. Arm Holdings' Chip Development: In a strategic shift from its traditional licensing model, Arm Holdings plans to launch its own chip in 2025, with an unveiling expected as early as this summer. The new chip is anticipated to be a central processing unit (CPU) designed for servers in large data centres, aligning with the timelines of the aforementioned projects. This move positions Arm not only as a designer but also as a direct competitor in the semiconductor industry, potentially supplying chips for both Project Izanagi and the Stargate Project.


Bear in mind that SoftBank still owns roughly 90% of Arm Holdings.

OpenAI forecasts highlight increased reliance on SoftBank, not Microsoft: report​

Feb. 21, 2025 12:11 PM ET
Financial forecasts from generative artificial intelligence startup OpenAI to investors have highlighted an increased reliance on Japanese tech conglomerate SoftBank (OTCPK:SFTBY), at the expense of Microsoft (NASDAQ:MSFT), The Information reported.
The startup told investors that the Stargate Project, which was announced by President Trump last month, is likely to be “heavily financed” by SoftBank, the news outlet reported. Three-quarters of the project's computing power needs would come from SoftBank by 2030, a stark contrast to present conditions, as Microsoft largely handles OpenAI's power and data center needs in the present.
OpenAI, SoftBank and Microsoft did not immediately respond to a request for comment from Seeking Alpha.
The Masayoshi Son-led SoftBank could also invest as much as $30B into OpenAI, with an additional $10B coming from co-investors, according to previous media reports.
Of that additional money, which could see OpenAI valued at $260B, roughly half will go towards Stargate. The Stargate Project is a $500B, four-year initiative to build out artificial intelligence infrastructure in the U.S., with OpenAI, SoftBank, and Oracle (ORCL) as primary partners.
Arm (ARM), Microsoft and Nvidia (NVDA) were also named as key partners in the project.
As part of the shift to SoftBank, OpenAI could slowly reduce its spending on Microsoft, though that is not expected to occur any time soon, The Information added. It is expected to spend up to $28B by 2028 on Microsoft data centers, up from $13B in 2025.
In addition, OpenAI has told investors that its revenue is expected to jump to more than $12.5B in 2025, up from $3.7B in 2024. By 2026, OpenAI's revenue could hit $28B, the news outlet added.


 
  • Like
  • Love
  • Wow
Reactions: 16 users

Bravo

If ARM was an arm, BRN would be its biceps💪!



Errr...Wow! Some interesting news here on Veritone that their AI solutions are now available on the DoD's premier AI procurement marketplace.

Could it be possible that we are involved in this in some way?

On 25 April 2018 BrainChip announced that it had signed an agreement with Veritone to integrate its AI-powered BrainChip Studio with the Veritone aiWARE™ platform.

In BrainChip's December 2018 Quarterly Update Report it states "the arrangement with Veritone provides BrainChip with potential revenue resulting from the frequency and duration with which Veritone's customers use the BrainChip Studio tool".






Veritone Achieves “Awardable” Status on DoD’s Tradewinds Solutions Marketplace with Three AI Solutions

Veritone's AI solutions available on the DoD's premier AI procurement marketplace, accelerating access to investigative tools and supporting compliance with freedom of information requirements
February 20, 2025 07:00 AM Eastern Standard Time
DENVER--(BUSINESS WIRE)--Veritone, Inc. (NASDAQ: VERI), a leader in building human-centered enterprise AI solutions, today announced that it has achieved “Awardable” status through the Chief Digital and Artificial Intelligence Office’s (CDAO) Tradewinds Solutions Marketplace. Veritone’s Illuminate, Redact and Track applications have been added to the Tradewinds Marketplace.

The Tradewinds Solutions Marketplace is the premier offering of Tradewinds, the Department of Defense's suite of tools and services designed to accelerate the procurement and adoption of Artificial Intelligence (AI)/Machine Learning (ML), data and analytics capabilities from organizations including Veritone.
Veritone’s solutions now available on Tradewinds include:
Illuminate: an AI-powered investigative and analytics tool that helps organizations search, analyze and manage vast amounts of video, audio and text-based data. It aids legal, compliance and investigative teams that need to quickly process structured and unstructured data for litigation, regulatory reviews and internal investigations.
Redact: an AI-powered redaction solution that automates the redaction of sensitive information within audio, video and image-based evidence. It is widely used by law enforcement, legal teams, media organizations and government agencies to comply with privacy laws and public records requests (FOIA).
Track: an AI-powered digital video analytics tool that enables tracking of individuals and vehicles across multiple video sources at scale, offering speed and accuracy in investigative efforts without relying on personally identifiable information (PII).
"Achieving ‘Awardable’ status on the Tradewinds Solutions Marketplace is a testament to Veritone’s commitment to delivering AI solutions that enhance national security, investigative workflows and intelligence operations," said Ryan Steelberg, chairman, president and chief executive officer, Veritone. "By streamlining access to our technology, we are empowering defense and law enforcement agencies with the tools they need to rapidly analyze, manage and act on critical data with speed, accuracy and security."
Veritone Illuminate, Redact and Track are part of Veritone’s Intelligent Digital Evidence Management System (iDEMS), a comprehensive purpose-built applications suite for the public sector that leverages AI to streamline the management and analysis of digital evidence, providing customers with powerful investigatory tools to handle vast amounts of data quickly and accurately. iDEMS is built on Veritone’s aiWARE™ platform, which is deployed in FedRAMP on AWS and Microsoft Azure and can be leveraged by DoD customers within their public cloud tenant or in their secure data center. aiWARE is an AI operating system that intelligently and securely orchestrates hundreds of best-of-breed cognitive and generative models in a single solution to transform extensive volumes of unstructured data, including video and audio, into actionable insights.
The agreement marks another milestone in Veritone’s expanding role within U.S. federal civilian and defense agencies, building upon the Company’s previously announced Test and Evaluation Services Blanket Purchase Agreement with the DoD, Sole-Contractor Blanket Purchase Agreement with the Department of Justice and Carahsoft’s GSA IT Schedule 70 contract.
To learn more about Veritone’s Public Sector solutions, visit: https://www.veritone.com/solutions/public-sector/
About Veritone
Veritone (NASDAQ: VERI) builds human-centered enterprise AI solutions. Serving customers in the media, entertainment, public sector and talent acquisition industries, Veritone’s software and services empower individuals at the world’s largest and most recognizable brands to run more efficiently, accelerate decision making and increase profitability. Veritone’s leading enterprise AI platform, aiWARE™, orchestrates an ever-growing ecosystem of machine learning models, transforming data sources into actionable intelligence. By blending human expertise with AI technology, Veritone advances human potential to help organizations solve problems and achieve more than ever before, enhancing lives everywhere. To learn more, visit Veritone.com.






Here's Ryan Steelberg, CEO of Veritone, in an interview recorded 1 month ago where he states:

"So what this technology can do is, really identify in what we call "single-shot", analyse a tremendous amount of audio and video data, quickly create a database of those hits and then be able to quickly identify and make the correlations across video cameras of who these individuals are".

Obviously "single-shot" sounds very similar to "one-shot" as described in the 2018 announcement of the BrainChip Studio integration into Veritone's aiWARE platform.


EXTRACT from 2018 announcement.
I have set the video to start at 1.44 mins where Ryan makes the statement about single-shot.

 

  • Like
  • Love
  • Fire
Reactions: 47 users

JB49

Regular
Taiwan looking promising:
- In the interview with ITRI, Sean states "we have many engagements in Taiwan right now"
- This is reinforced by the fact there is a new Regional Sales manager role in Taiwan being advertised.
- Thomas Chang specifically mentioned that the Taiwan Government has given ITRI a big boost this year. He then goes on to say they have a venture capital arm called ITIC, which has invested over US$400 million since it began. Someone not shy about throwing a bit of money at this could be exactly what we need to get this show going.
 
  • Like
  • Fire
Reactions: 17 users
Looks like our summer intern from last year, posted about by @Frangipani in Nov, has had an extended stay as well as expanding what he's been up to.

Haven't tried to check the recent publications yet. Just a quick post.




FNU Sidharth​

ML Research @ BrainChip | Incoming CS PhD @ Univ. of Michigan, Soundability Lab | Speech and Audio Processing | UW ECE​

BrainChip University of Washington​


Machine Learning Researcher​

BrainChip

Jun 2024 - Present 9 months
Laguna Hills, California, United States
I contributed to the development of TENNs, a novel state-space model optimized for our Spiking Neural Network chip Akida, enabling efficient multimodal processing across audio, text, and vision. I helped develop aTENNuate, a real-time deep state-space speech enhancement model submitted to Interspeech 2025, and explored LoRA-based adaptation for optimizing these models. My work included refining LLM training for efficiency, designing a custom evaluation pipeline, and implementing a Triton-based GPU kernel for FFT convolution to enhance signal processing. Additionally, I developed model obfuscation techniques for secure edge inference and spearheaded a state-space-based speaker verification system for enterprise applications.

Publications​


Real-time Speech Enhancement on Raw Signals with Deep State-space Modeling

arXiv (Submitted to Interspeech 2025) September 5, 2024​


Decoding Pain: Statistical Identification of Biomarkers from Electrophysiological Signals

arXiv (accepted at AAAI 2025 Workshop on Health Intelligence) February 17, 2025​
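
The profile above mentions implementing a Triton-based GPU kernel for FFT convolution to speed up the long convolutions used in state-space models like TENNs. For anyone curious what that trick actually is, here's a minimal sketch in plain PyTorch (not Triton, and not BrainChip's actual code; the function name and shapes are just illustrative): causal convolution with a kernel as long as the signal is done in the frequency domain instead of directly.

```python
import torch

def fft_causal_conv(x, k):
    """Causal 1-D convolution of a signal with a long kernel via the FFT.

    x: (batch, channels, L) input signal
    k: (channels, L) kernel, e.g. the impulse response of a state-space layer
    Returns: (batch, channels, L) convolved signal.
    """
    L = x.shape[-1]
    n = 2 * L                          # zero-pad so the circular FFT convolution
    X = torch.fft.rfft(x, n=n)         # does not wrap around (stays causal)
    K = torch.fft.rfft(k, n=n)
    y = torch.fft.irfft(X * K, n=n)    # multiply in frequency = convolve in time
    return y[..., :L]                  # keep only the causal part

# Example: an 8-channel signal of length 4096 convolved with an equally long kernel.
x = torch.randn(2, 8, 4096)
k = torch.randn(8, 4096)
y = fft_causal_conv(x, k)              # shape (2, 8, 4096)
```

A fused GPU kernel (e.g. written in Triton) would do the same maths while keeping intermediates in fast memory; the payoff either way is that the cost scales as L log L rather than L² when the kernel spans the whole sequence.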

 
  • Like
  • Fire
  • Love
Reactions: 15 users
The prickadillos are also setting themselves up to take advantage of what is likely to be another poor report card in the soon-to-be-released Annual Report 2024, due sometime next week.
Again there will be the mock shock-horror at our lack of revenue, followed up by a call for Sean's and the rest of the board's heads on pikes.
Then come the comparisons to local coffee shops' profitability, and in a final indignant harrumphing, the trenchant pleading to, at the very least, for the love of god, toss the board and sell out before the sky actually falls.
After all, the shorters will make more money that way. 🤣

After a quick telepathic conference, Sean has authorised the unveiling of our latest space partnership with the little-known, but enthusiastic, Thai Space Force. I wouldn't be surprised to find we have a hand in their latest triumph.....................



I’m feeling really positive that we might hit 6 figures, if not

1740200972626.gif
 
  • Haha
  • Like
  • Love
Reactions: 7 users
  • Like
Reactions: 2 users

manny100

Regular
The annual report will be out soon, as mentioned earlier.
It will not have anything we do not already know.
My main interest is the table showing 'real remuneration'.
It would be good to see both Sean and Tony V take 80%-plus in equity again for salary. I am tipping that for Sean at least, and close to, if not more, for Tony V.
 
  • Like
Reactions: 3 users

manny100

Regular
There have been many discussions here, and recently some discussions over on the crapper, about the possibility of AKIDA getting on board with Nintendo.
For all intents and purposes the Switch is a device. AKIDA makes devices smarter.
AKIDA would offer several advantages, including power savings, reduced heat generation, and improved device responsiveness via real-time object recognition and image processing.
The really big differentiator is the 'on-chip learning' that AKIDA offers. Nintendo developers' imaginations would come up with ideas even BRN would never have dreamt of.
A game knowing your skills, preferences and weaknesses opens up a world of opportunities for developers.
I imagine it's a sure thing MegaChips, with its close relationship with Nintendo, would have made them aware of AKIDA's potential and possibilities.
Given the MegaChips licence was taken circa November 2021, and given the long lead time required for Nintendo to transition from traditional systems to the edge, there is a chance BRN is involved.
Sean in the latest podcast said to watch for some interesting engagements forming in the coming months.
We're in with a chance, and it's a sure thing MegaChips would be pushing AKIDA.
My own question is: would Nintendo unleash the 'future' on its Switch users right now?
I am coming around to it making so much sense.
Hoping, but not sure, Nintendo will go for it. IMO it's a sure thing MegaChips/Nintendo ran all the tests and trials with the Switch.
Given it's no secret the US defense sector is transitioning from old traditional systems to edge AI, it may be great marketing for Nintendo to beat them to it.
It's all wait and see from here, and a bit of fun speculating.
 
  • Like
  • Fire
  • Love
Reactions: 30 users

Re the Nintendo/MegaChips post above, and don't forget about this:

1740214982553.png


 
  • Like
  • Thinking
  • Fire
Reactions: 26 users

Frangipani

Regular
  • Like
  • Love
  • Fire
Reactions: 37 users

manny100

Regular
I am starting to think Bravo's previous analysis of Nintendo using AKIDA via MegaChips may have some legs.
It makes total sense.
The problem is every time something looks obvious it has not come off. Except lately, of course:
- space
- the defence transition to the edge, Bascom/Navy/AKIDA 1000 and 1500
- US AFRL
- QV/BRN/Lockheed Martin cybersecurity, 'only game in town'.
We have been conditioned for disappointment but that is changing and I suspect the rate of change will increase.
I think in a year or 2 we will be re conditioned to expect new contracts/ deals on a regular basis.
BRN success is sneaking up on us.
 
  • Like
  • Love
  • Wow
Reactions: 38 users

Jimmy17

Regular
"Conditioned for disappointment" the only real statement to summarise my experience over 5 years and one which shines through beyond all thousands of pages of content on this form!!
 
  • Like
  • Haha
  • Sad
Reactions: 14 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Figure unveils first-of-its-kind brain for humanoid robots after shunning OpenAI​

Helix introduces a novel approach to upper-body manipulation control.​

Updated: Feb 20, 2025 01:46 PM EST

Kapil Kajal




In a significant move in the AI world, California-based Figure has revealed Helix, a generalist Vision-Language-Action (VLA) model that unifies perception, language understanding, and learned control to overcome multiple longstanding challenges in robotics.
Brett Adcock, founder of Figure, said that Helix is the most significant AI update in the company’s history.
“Helix thinks like a human… and to bring robots into homes, we need a step change in capabilities. Helix can generalize to virtually any household item,” Adcock said in a social media post.


“We’ve been working on this project for over a year, aiming to solve general robotics. Like a human, Helix understands speech, reasons through problems, and can grasp any object – all without needing training or code. In testing, Helix can grab almost any household object,” he added.
The launch of Helix follows Figure’s announcement of its separation from OpenAI in early February.
Adcock stated at that time, “Figure has achieved a significant breakthrough in fully end-to-end robot AI, developed entirely in-house. We are excited to reveal something that no one has ever seen before in a humanoid within the next 30 days.”

A series of the world’s first capabilities​

According to Figure, Helix introduces a novel approach to upper-body manipulation control.
It offers high-rate continuous control of the entire humanoid upper body, which includes the wrists, torso, head, and individual fingers.

This level of control allows for more nuanced movements and interactions. Another important aspect of Helix is its capability for multi-robot collaboration.

It can operate simultaneously on two robots, enabling them to work together on shared, long-term manipulation tasks involving objects they have not encountered before.
This feature significantly broadens the operational scope of robotics in complex environments.
Additionally, robots equipped with Helix can pick up a wide range of small household items, including many they have yet to encounter.

This ability is facilitated through natural language prompts, enhancing the ease of interaction and usability.

Helix also employs a distinctive approach by utilizing a single set of neural network weights to learn various behaviors, such as picking and placing items, using drawers and refrigerators, and enabling cross-robot interaction.

This eliminates the need for task-specific fine-tuning, streamlining the learning process.


Lastly, Helix operates entirely on embedded low-power GPUs, which makes it suitable for commercial deployment. This feature highlights its practicality for real-world applications.

Robots and Helix integration​

According to Figure, current robotic systems struggle to adapt quickly to new tasks, often requiring extensive programming or numerous demonstrations.
To address this, Figure used the capabilities of Vision Language Models (VLMs) to enable robots to generalize their behaviors on demand and perform tasks through natural language instructions.
The solution presented is Helix, the model designed for controlling the entire humanoid upper body with high dexterity and speed.
Helix comprises System 1 (S1) and System 2 (S2). S2 is a slower, internet-pre-trained VLM that focuses on scene understanding and language comprehension.

At the same time, S1 is a fast visuomotor policy that converts the information from S2 into real-time robot actions. This division allows each system to operate optimally—S2 for thoughtful processing and S1 for quick execution.
“Helix addresses several issues previous robotic approaches faced, including balancing speed and generalization, scalability to manage high-dimensional actions, and architectural simplicity using standard models,” according to Figure.
Additionally, separating S1 and S2 enables independent improvements to each system without reliance on a shared observation or action space.
A dataset of around 500 hours of teleoperated behaviors was collected to train Helix, utilizing an auto-labeling VLM to generate natural language instructions.

The architecture involves a 7B-parameter VLM and an 80M parameter transformer for control, processing visual inputs to enable responsive control based on the latent representations generated by the VLM.
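
To make the split between the two systems a bit more concrete, here is a rough sketch of that kind of dual-rate architecture in PyTorch. This is only my reading of the article's description, not Figure's actual Helix code: the module names, layer sizes, observation/action dimensions and the 10-tick refresh rate are all placeholder assumptions; only the idea of a slow VLM producing a latent that conditions a fast control policy comes from the article.

```python
import torch
import torch.nn as nn

class SlowVLM(nn.Module):
    """Stand-in for 'System 2': a large vision-language model that runs at a
    low rate and distils camera + instruction inputs into one latent vector."""
    def __init__(self, feat_dim=1024, latent_dim=512):
        super().__init__()
        # Placeholder for the 7B-parameter VLM described in the article.
        self.encoder = nn.Linear(2 * feat_dim, latent_dim)

    def forward(self, image_feats, text_feats):
        return self.encoder(torch.cat([image_feats, text_feats], dim=-1))

class FastPolicy(nn.Module):
    """Stand-in for 'System 1': a small visuomotor policy that runs every tick,
    conditioned on the most recent latent from the slow system."""
    def __init__(self, latent_dim=512, obs_dim=128, action_dim=35):
        super().__init__()
        # Placeholder for the 80M-parameter control transformer; action_dim
        # stands in for wrist/torso/head/finger targets.
        self.net = nn.Sequential(
            nn.Linear(latent_dim + obs_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, latent, proprio_obs):
        return self.net(torch.cat([latent, proprio_obs], dim=-1))

# Toy control loop: the slow system refreshes the latent every 10 ticks,
# while the fast policy emits an action on every tick.
s2, s1 = SlowVLM(), FastPolicy()
latent = torch.zeros(1, 512)
for t in range(100):
    if t % 10 == 0:
        image_feats, text_feats = torch.randn(1, 1024), torch.randn(1, 1024)
        latent = s2(image_feats, text_feats)
    action = s1(latent, torch.randn(1, 128))   # (1, 35) joint targets
```

The appeal of the split, per the article, is that the expensive scene and language reasoning doesn't have to run at control rate, while the small policy stays responsive on embedded hardware.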

 
  • Like
  • Thinking
  • Fire
Reactions: 17 users

uiux

Regular

Attitude estimation system and attitude estimation method​


Current Assignee: MegaChips Corp

Abstract​

To estimate a user's posture, including the direction of the user's body, using a small number of sensors.

SOLUTION: A posture estimation system comprises a measurement member (1) located on any part of the user's four limbs, and a posture acquisition part (520) for acquiring the posture of the measurement member. The measurement member includes an acceleration sensor (14) and a gyro sensor (15). The posture acquisition part (520) includes a reference coordinate determination part (521) for setting a reference coordinate system of the measurement member based on the user's operation of making the measurement member face a target (3), and an attitude estimation part (522) for estimating the attitude of the measurement member relative to the target by acquiring detection values Da and Dr output from the acceleration sensor and the gyro sensor in response to the user's operation of changing the attitude of the measurement member.



GPT analysis:


This patent describes a posture estimation system that determines a user's body orientation using a minimal number of sensors. It is primarily designed for gaming, VR, fitness tracking, and motion-based interaction systems.




1. Purpose & Use


The system aims to estimate the posture and orientation of a user’s body efficiently, using a small number of sensors instead of a full-body motion capture setup. This is particularly useful for:


  • Gaming – Motion-based gameplay using handheld controllers.
  • Virtual Reality (VR) & Augmented Reality (AR) – Enhancing user movement tracking.
  • Fitness & Rehabilitation – Monitoring body movement for training or therapy.
  • Human-Computer Interaction – Intuitive gesture-based controls.



2. Sensor Technologies


The system uses two key inertial sensors, embedded in a measuring device (such as a handheld controller or a wearable limb sensor):


  1. Acceleration Sensor (Accelerometer)
    • Measures movement acceleration in three axes (X, Y, Z).
    • Helps determine tilt and linear motion.
  2. Gyro Sensor (Gyroscope)
    • Measures rotational velocity in three axes (yaw, pitch, roll).
    • Tracks rotational movement and orientation changes over time.

These sensors are typically placed in:


  • Handheld controllers (left and right hands).
  • Wearable devices (e.g., strapped to feet or arms).
  • Potential expansion to lower body tracking (e.g., sensors on both hands and feet).



3. Processing Technologies & Processor Locations


The system processes sensor data at multiple levels, using different processors located in the controllers and the game console.


A. Processing at the Controller Level (Embedded Processors)


Each controller (or wearable sensor) contains an onboard processor that performs initial data collection and preprocessing:


  • Location: Inside each controller (or wearable sensor).
  • Functions:
    • Collects acceleration and gyroscope data.
    • Filters raw data to reduce noise.
    • Performs preliminary sensor fusion to combine acceleration and rotational data.
    • Communicates with the game console via wireless or wired connection.

B. Processing at the Game Console Level (Central Processing)


The main computational processing happens inside the game console:


  • Location: The game console’s central processor (CPU).
  • Functions:
    1. Reference Coordinate System Setup
      • The user performs a calibration motion, aligning the controllers to a fixed target (e.g., display screen).
      • This sets a baseline reference coordinate system.
    2. Posture Estimation
      • The console’s processor integrates accelerometer and gyroscope data from the controllers.
      • Uses sensor fusion algorithms to track movement and correct drift (a minimal sketch of this idea follows the list below).
    3. Common Coordinate Conversion
      • Since each controller has an independent coordinate system, the console converts them into a unified coordinate system for consistent tracking.
    4. Machine Learning-Based Full Body Estimation
      • The console’s processor runs a machine learning model to estimate full-body posture based on limited sensor data.
      • The model is trained to predict shoulder, arm, and torso positions from hand-held controllers alone.
    5. Adaptive Motion Correction for Different Users
      • The system adjusts for different body sizes by applying acceleration correction algorithms.
      • Example: A child's arm will have different acceleration characteristics than an adult's, so the system scales acceleration values based on user height.
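
For anyone who wants to see what the "sensor fusion" step above could look like in practice, here's a minimal complementary-filter sketch in Python. To be clear, this is a generic textbook approach, not MegaChips' patented method or code, and the function name, alpha value and calibration step are my own illustrative choices: the accelerometer gives a noisy but drift-free tilt estimate from gravity, the gyro gives a smooth but drifting estimate from integrating angular rate, and the filter blends the two.

```python
import numpy as np

def complementary_filter(accel, gyro, dt, alpha=0.98):
    """Fuse accelerometer and gyroscope samples into pitch/roll estimates.

    accel: (N, 3) accelerations in m/s^2 (x, y, z)
    gyro:  (N, 3) angular rates in rad/s (about x, y, z)
    dt:    sample period in seconds
    Returns an (N, 2) array of [pitch, roll] angles in radians.
    """
    pitch, roll = 0.0, 0.0
    angles = np.zeros((len(accel), 2))
    for i, (a, w) in enumerate(zip(accel, gyro)):
        # Tilt from the gravity vector: noisy, but does not drift.
        pitch_acc = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        roll_acc = np.arctan2(a[1], a[2])
        # Gyro integration: smooth, but drifts over time.
        pitch = alpha * (pitch + w[1] * dt) + (1 - alpha) * pitch_acc
        roll = alpha * (roll + w[0] * dt) + (1 - alpha) * roll_acc
        angles[i] = (pitch, roll)
    return angles

# Crude analogue of the patent's calibration step: treat the pose captured
# while the user points the controller at the screen as the reference frame,
# and report all later poses relative to it.
accel = np.random.randn(1000, 3) * 0.05 + np.array([0.0, 0.0, 9.81])
gyro = np.random.randn(1000, 3) * 0.01
angles = complementary_filter(accel, gyro, dt=0.01)
relative = angles - angles[0]
```

A full solution would also need yaw/heading, which neither sensor alone can pin down; that is presumably where the patent's target-facing calibration and the console-side machine-learning model come in.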



4. Advantages Over Traditional Systems


  • Fewer sensors required (no need for full-body tracking suits).
  • No waist-mounted sensors needed (orientation is inferred from hand-held devices).
  • Cost-effective and power-efficient (less hardware, lower processing demands).
  • Machine learning integration allows accurate full-body tracking with limited data.
  • Adaptable for different users via automated motion scaling.



 
  • Like
  • Fire
  • Love
Reactions: 50 users

Guzzi62

Regular
FF on the other place found below:

DS IAC JOURNAL 2024 No2

See page 15/16:

A Bioinspired System to Autonomously Detect Tiny, Fast-Moving Objects in Infrared Imagery




The DS IAC journal: The Defense Systems Information Analysis Center (DSIAC) is a component of the U.S. Department of Defense’s (DoD's) Information Analysis Center (IAC) enterprise.

 
  • Like
  • Love
Reactions: 20 users


1X's 3rd iteration of NEO.
I think it's actually more uncanny, the closer it gets to moving like a real person, in a big sock...

These don't use the rigid mechanics of other humanoid robots.
 
  • Like
  • Wow
  • Love
Reactions: 11 users