BRN Discussion Ongoing

JB49

Regular
Taiwan looking promising:
- In the interview with ITRI, Sean states "we have many engagements in Taiwan right now"
- This is reinforced by the fact there is a new Regional Sales Manager role in Taiwan being advertised.
- Thomas Chang specifically mentioned the Taiwan Government has given ITRI a big boost this year. He then goes on to say they have a venture capital arm called ITIC, which has invested over US$400 million since it began. Someone not shy about throwing a bit of money at this could be exactly what we need to get this show going.
 
  • Like
  • Fire
Reactions: 17 users
Looks like our summer intern from last year, posted about by @Frangipani in Nov, has extended his stay as well as expanded what he's been up to.

Haven't tried to check the recent publications yet. Just a quick post.




FNU Sidharth​

ML Research @ BrainChip | Incoming CS PhD @ Univ. of Michigan, Soundability Lab | Speech and Audio Processing | UW ECE​

BrainChip University of Washington​


Machine Learning Researcher​

BrainChip

Jun 2024 - Present 9 months
Laguna Hills, California, United States
I contributed to the development of TENNs, a novel state-space model optimized for our Spiking Neural Network chip Akida, enabling efficient multimodal processing across audio, text, and vision. I helped develop aTENNuate, a real-time deep state-space speech enhancement model submitted to Interspeech 2025, and explored LoRA-based adaptation for optimizing these models. My work included refining LLM training for efficiency, designing a custom evaluation pipeline, and implementing a Triton-based GPU kernel for FFT convolution to enhance signal processing. Additionally, I developed model obfuscation techniques for secure edge inference and spearheaded a state-space-based speaker verification system for enterprise applications.

Publications​


Real-time Speech Enhancement on Raw Signals with Deep State-space Modeling

arXiv (Submitted to Interspeech 2025) September 5, 2024​


Decoding Pain: Statistical Identification of Biomarkers from Electrophysiological Signals

arXiv (accepted at AAAI 2025 Workshop on Health Intelligence) February 17, 2025​
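
As an aside, the "Triton-based GPU kernel for FFT convolution" he mentions refers to a standard trick: the long convolutions that deep state-space models unroll into can be computed in O(L log L) via the convolution theorem. Below is a minimal NumPy sketch of that idea, purely my own illustration and nothing to do with BrainChip's actual kernel:

```python
import numpy as np

def fft_causal_conv(u, k):
    """Causal convolution of input u (length L) with kernel k (length L)
    using the convolution theorem: conv(u, k) = IFFT(FFT(u) * FFT(k)).
    Zero-padding to 2L avoids circular wrap-around."""
    L = len(u)
    n = 2 * L
    U = np.fft.rfft(u, n=n)
    K = np.fft.rfft(k, n=n)
    y = np.fft.irfft(U * K, n=n)[:L]   # keep only the causal part
    return y

# Quick check against the direct O(L^2) convolution
rng = np.random.default_rng(0)
u = rng.standard_normal(1024)
k = np.exp(-0.01 * np.arange(1024))    # a decaying, SSM-like kernel
direct = np.convolve(u, k)[:1024]
assert np.allclose(fft_causal_conv(u, k), direct)
```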

 
  • Like
  • Fire
  • Love
Reactions: 16 users
The prickadillos are also setting themselves up to take advantage of what is likely to be another poor report card in the soon-to-be-released 2024 Annual Report, due sometime next week.
Again there will be the mock shock-horror at our lack of revenue, followed up by a call for Sean's and the rest of the board's heads on pikes.
Then come the comparisons to local coffee shops' profitability, and in a final indignant harrumph, the trenchant pleading to, at the very least, for the love of god, toss the board and sell out before the sky actually falls.
After all, the shorters will make more money that way. 🤣

After a quick telepathic conference, Sean has authorised the unveiling of our latest space partnership with the little-known, but enthusiastic,
Thai Space Force. I wouldn't be surprised to find we have a hand in their latest triumph.....................



I’m feeling really positive that we might hit 6 figures, if not

 
  • Haha
  • Like
  • Love
Reactions: 7 users
  • Like
Reactions: 3 users

manny100

Regular
The annual report will be out later, as I mentioned earlier.
It will not have anything we do not already know.
My main interest is the table showing 'Real remuneration'.
It would be good to see both Sean and Tony V take 80%-plus in equity again for salary. I am tipping that for Sean at least, and close to, if not more, for Tony V.
 
  • Like
Reactions: 3 users

manny100

Regular
There have been many discussions here, and recently some over on the crapper, about the possibility of AKIDA getting on board with Nintendo.
For all intents and purposes the Switch is a device. AKIDA makes devices smarter.
AKIDA would offer several advantages, including power savings, reduced heat generation, and improved device responsiveness via real-time object recognition and image processing.
The really big differentiator is the 'on-chip learning' that AKIDA offers. Nintendo developers' imaginations would come up with ideas even BRN would never have dreamt of.
A game knowing your skills, preferences and weaknesses opens up a world of opportunities for developers.
I imagine it's a sure thing MegaChips, with its close relationship with Nintendo, would have made them aware of AKIDA's potential and possibilities.
Given the MegaChips licence was taken circa November 2021, and given the long lead time required for a transition from traditional systems to the Edge, there is a chance BRN is involved.
Sean, in the latest podcast, said to watch for some interesting engagements forming in the coming months.
We're in with a chance, and it's a sure thing MegaChips would be pushing AKIDA.
My own question is: would Nintendo unleash the 'future' on its Switch users right now?
I am coming around to it making so much sense.
Hoping, but not sure, Nintendo will go for it. IMO it's a sure thing MegaChips/Nintendo ran all the tests and trials with the Switch.
Given it's no secret the US defense sector is transitioning from old traditional systems to Edge AI, it may be great marketing for Nintendo to beat them to it.
It's all wait and see from here, and a bit of fun speculating.
 
  • Like
  • Fire
  • Love
Reactions: 32 users

and don't forget about this:

[attached image]


 
  • Like
  • Thinking
  • Fire
Reactions: 28 users

Frangipani

Top 20
Last edited:
  • Like
  • Love
  • Fire
Reactions: 41 users

manny100

Regular
I am starting to think Bravo's previous analysis of Nintendo using AKIDA via MegaChips may have some legs.
It makes total sense.
The problem is that every time something looks obvious it has not come off. Except lately, of course:
- Space
- Defence transition to the Edge: Bascom/Navy/AKIDA 1000 and 1500
- US AFRL
- QV/BRN/Lockheed Martin cybersecurity, the 'only game in town'
We have been conditioned for disappointment, but that is changing, and I suspect the rate of change will increase.
I think in a year or two we will be re-conditioned to expect new contracts/deals on a regular basis.
BRN's success is sneaking up on us.
 
  • Like
  • Love
  • Wow
Reactions: 43 users

Jimmy17

Regular
"Conditioned for disappointment" the only real statement to summarise my experience over 5 years and one which shines through beyond all thousands of pages of content on this form!!
 
  • Like
  • Haha
  • Sad
Reactions: 15 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Figure unveils first-of-its-kind brain for humanoid robots after shunning OpenAI​

Helix introduces a novel approach to upper-body manipulation control.​

Updated: Feb 20, 2025 01:46 PM EST

Kapil Kajal




In a significant move in the AI world, California-based Figure has revealed Helix, a generalist Vision-Language-Action (VLA) model that unifies perception, language understanding, and learned control to overcome multiple longstanding challenges in robotics.
Brett Adcock, founder of Figure, said that Helix is the most significant AI update in the company’s history.
“Helix thinks like a human… and to bring robots into homes, we need a step change in capabilities. Helix can generalize to virtually any household item,” Adcock said in a social media post.


“We’ve been working on this project for over a year, aiming to solve general robotics. Like a human, Helix understands speech, reasons through problems, and can grasp any object – all without needing training or code. In testing, Helix can grab almost any household object,” he added.
The launch of Helix follows Figure’s announcement of its separation from OpenAI in early February.
Adcock stated at that time, “Figure has achieved a significant breakthrough in fully end-to-end robot AI, developed entirely in-house. We are excited to reveal something that no one has ever seen before in a humanoid within the next 30 days.”

A series of the world’s first capabilities​

According to Figure, Helix introduces a novel approach to upper-body manipulation control.
It offers high-rate continuous control of the entire humanoid upper body, which includes the wrists, torso, head, and individual fingers.

This level of control allows for more nuanced movements and interactions. Another important aspect of Helix is its capability for multi-robot collaboration.

It can operate simultaneously on two robots, enabling them to work together on shared, long-term manipulation tasks involving objects they have not encountered before.
This feature significantly broadens the operational scope of robotics in complex environments.
Additionally, robots equipped with Helix can pick up a wide range of small household items, including many they have yet to encounter.

This ability is facilitated through natural language prompts, enhancing the ease of interaction and usability.

Helix also employs a distinctive approach by utilizing a single set of neural network weights to learn various behaviors, such as picking and placing items, using drawers and refrigerators, and enabling cross-robot interaction.

This eliminates the need for task-specific fine-tuning, streamlining the learning process.


Lastly, Helix operates entirely on embedded low-power GPUs, which makes it suitable for commercial deployment. This feature highlights its practicality for real-world applications.

Robots and Helix integration​

According to Figure, current robotic systems struggle to adapt quickly to new tasks, often requiring extensive programming or numerous demonstrations.
To address this, Figure used the capabilities of Vision Language Models (VLMs) to enable robots to generalize their behaviors on demand and perform tasks through natural language instructions.
The solution presented is Helix, a model designed for controlling the entire humanoid upper body with high dexterity and speed.
Helix comprises System 1 (S1) and System 2 (S2). S2 is a slower, internet-pre-trained VLM that focuses on scene understanding and language comprehension.

At the same time, S1 is a fast visuomotor policy that converts the information from S2 into real-time robot actions. This division allows each system to operate optimally—S2 for thoughtful processing and S1 for quick execution.
“Helix addresses several issues previous robotic approaches faced, including balancing speed and generalization, scalability to manage high-dimensional actions, and architectural simplicity using standard models,” according to Figure.
Additionally, separating S1 and S2 enables independent improvements to each system without reliance on a shared observation or action space.
A dataset of around 500 hours of teleoperated behaviors was collected to train Helix, utilizing an auto-labeling VLM to generate natural language instructions.

The architecture involves a 7B-parameter VLM and an 80M-parameter transformer for control, processing visual inputs to enable responsive control based on the latent representations generated by the VLM.
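
For anyone curious how that System 2 / System 1 split might look structurally, here is a rough Python sketch. It is purely my own illustration of the dual-rate idea described in the article; the class names, rates, action dimensions and the `control_loop` helper are all made up and are not Figure's actual code:

```python
import time

class SlowVLM:
    """Stand-in for "System 2": a large vision-language model that runs at a
    few Hz and produces a latent summary of the scene and the instruction."""
    def encode(self, image, instruction):
        return {"instruction": instruction, "scene": "latent-placeholder"}

class FastPolicy:
    """Stand-in for "System 1": a small visuomotor policy that runs at a high
    rate, turning the latest latent plus fresh observations into joint targets."""
    def act(self, latent, joints, image):
        return [0.0] * 35   # e.g. wrists, torso, head, fingers

def control_loop(vlm, policy, get_obs, send_action, instruction,
                 s2_period=0.2, s1_period=0.01, steps=1000):
    """Refresh the slow latent every s2_period seconds while the fast loop runs
    every s1_period seconds, so perception/language and motor control are
    decoupled (the split the article attributes to Helix)."""
    latent, last_s2 = None, 0.0
    for _ in range(steps):
        obs = get_obs()                          # camera frame + joint state
        now = time.monotonic()
        if latent is None or now - last_s2 >= s2_period:
            latent = vlm.encode(obs["image"], instruction)   # slow path (~5 Hz)
            last_s2 = now
        send_action(policy.act(latent, obs["joints"], obs["image"]))  # fast path (~100 Hz)
        time.sleep(s1_period)

# Example wiring with dummy IO:
# control_loop(SlowVLM(), FastPolicy(),
#              get_obs=lambda: {"image": None, "joints": [0.0] * 35},
#              send_action=print, instruction="pick up the mug", steps=10)
```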

 
Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 18 users

uiux

Regular

Attitude estimation system and attitude estimation method​


Current Assignee: MegaChips Corp

Abstract​

To estimate a user's posture, including a direction of the user's body, using a small number of sensors. SOLUTION: A posture estimation system comprises a measurement member 1 located at any part of four limbs of a user, and a posture acquisition part 520 for acquiring the posture of the measurement member. The measurement member includes an acceleration sensor 14 and a gyro sensor 15. The posture acquisition part 520 includes a reference coordinate determination part 521 for setting a reference coordinate system of the measurement member based on the user's operation of making the measurement member face a target 3, and an attitude estimation part 522 for estimating an attitude of the measurement member relative to the target by acquiring detection values Da and Dr output from the acceleration sensor and the gyro sensor in response to the user's operation of changing the attitude of the measurement member.

[patent figures]

GPT analysis:


This patent describes a posture estimation system that determines a user's body orientation using a minimal number of sensors. It is primarily designed for gaming, VR, fitness tracking, and motion-based interaction systems.




1. Purpose & Use


The system aims to estimate the posture and orientation of a user’s body efficiently, using a small number of sensors instead of a full-body motion capture setup. This is particularly useful for:


  • Gaming – Motion-based gameplay using handheld controllers.
  • Virtual Reality (VR) & Augmented Reality (AR) – Enhancing user movement tracking.
  • Fitness & Rehabilitation – Monitoring body movement for training or therapy.
  • Human-Computer Interaction – Intuitive gesture-based controls.



2. Sensor Technologies


The system uses two key inertial sensors, embedded in a measuring device (such as a handheld controller or a wearable limb sensor):


  1. Acceleration Sensor (Accelerometer)
    • Measures movement acceleration in three axes (X, Y, Z).
    • Helps determine tilt and linear motion.
  2. Gyro Sensor (Gyroscope)
    • Measures rotational velocity in three axes (yaw, pitch, roll).
    • Tracks rotational movement and orientation changes over time.

These sensors are typically placed in:


  • Handheld controllers (left and right hands).
  • Wearable devices (e.g., strapped to feet or arms).
  • Potential expansion to lower body tracking (e.g., sensors on both hands and feet).



3. Processing Technologies & Processor Locations


The system processes sensor data at multiple levels, using different processors located in the controllers and the game console.


A. Processing at the Controller Level (Embedded Processors)


Each controller (or wearable sensor) contains an onboard processor that performs initial data collection and preprocessing:


  • Location: Inside each controller (or wearable sensor).
  • Functions:
    • Collects acceleration and gyroscope data.
    • Filters raw data to reduce noise.
    • Performs preliminary sensor fusion to combine acceleration and rotational data.
    • Communicates with the game console via wireless or wired connection.

B. Processing at the Game Console Level (Central Processing)


The main computational processing happens inside the game console:


  • Location: The game console’s central processor (CPU).
  • Functions:
    1. Reference Coordinate System Setup
      • The user performs a calibration motion, aligning the controllers to a fixed target (e.g., display screen).
      • This sets a baseline reference coordinate system.
    2. Posture Estimation
      • The console’s processor integrates accelerometer and gyroscope data from the controllers.
      • Uses sensor fusion algorithms to track movement and correct drift (a generic sketch of this idea follows after this list).
    3. Common Coordinate Conversion
      • Since each controller has an independent coordinate system, the console converts them into a unified coordinate system for consistent tracking.
    4. Machine Learning-Based Full Body Estimation
      • The console’s processor runs a machine learning model to estimate full-body posture based on limited sensor data.
      • The model is trained to predict shoulder, arm, and torso positions from hand-held controllers alone.
    5. Adaptive Motion Correction for Different Users
      • The system adjusts for different body sizes by applying acceleration correction algorithms.
      • Example: A child's arm will have different acceleration characteristics than an adult's, so the system scales acceleration values based on user height.
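
As a rough illustration of the sensor-fusion step in item 2 above, here is a generic complementary-filter sketch in Python. It is a textbook technique, not code from the MegaChips patent, and all the names and constants are my own:

```python
import numpy as np

def complementary_filter(accel, gyro, dt, alpha=0.98):
    """Fuse accelerometer and gyroscope readings into pitch/roll estimates.

    accel: (N, 3) accelerations in g        (x, y, z)
    gyro:  (N, 3) angular rates in rad/s    (about x, y, z)
    dt:    sample period in seconds
    alpha: weight of gyro integration (smooth but drifts) vs. the
           accelerometer's gravity-based estimate (noisy but drift-free)
    """
    pitch, roll = 0.0, 0.0
    out = []
    for a, w in zip(accel, gyro):
        # Orientation inferred from the gravity direction
        pitch_acc = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        roll_acc = np.arctan2(a[1], a[2])
        # Blend integrated angular rate with the gravity estimate
        pitch = alpha * (pitch + w[1] * dt) + (1 - alpha) * pitch_acc
        roll = alpha * (roll + w[0] * dt) + (1 - alpha) * roll_acc
        out.append((pitch, roll))
    return np.array(out)

# Example: 100 stationary samples lying flat -> pitch and roll stay near zero
acc = np.tile([0.0, 0.0, 1.0], (100, 1))   # gravity along +z
gyr = np.zeros((100, 3))
angles = complementary_filter(acc, gyr, dt=0.01)
```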



4. Advantages Over Traditional Systems


  • Fewer sensors required (no need for full-body tracking suits).
  • No waist-mounted sensors needed (orientation is inferred from hand-held devices).
  • Cost-effective and power-efficient (less hardware, lower processing demands).
  • Machine learning integration allows accurate full-body tracking with limited data.
  • Adaptable for different users via automated motion scaling.



 
  • Like
  • Fire
  • Love
Reactions: 55 users

Guzzi62

Regular
FF on the other place found the below:

DSIAC Journal, 2024 No. 2

See page 15/16:

A Bioinspired System to Autonomously Detect Tiny, Fast-Moving Objects in Infrared Imagery




The DSIAC Journal: The Defense Systems Information Analysis Center (DSIAC) is a component of the U.S. Department of Defense's (DoD's) Information Analysis Center (IAC) enterprise.

 
Last edited:
  • Like
  • Love
Reactions: 23 users


1X's 3rd iteration of NEO.
I think it's actually more uncanny, the closer it gets to moving like a real person, in a big sock..

These don't use the rigid mechanics of other humanoid robots.
 
  • Like
  • Wow
  • Love
Reactions: 13 users



Reminds me of an excellent tv show I watched a few years ago

 
  • Wow
  • Like
Reactions: 4 users

HopalongPetrovski

I'm Spartacus!



Interesting stuff Dingo.
I particularly like the fact that it can be "driven" remotely.
This may be a previous version, but even so, they are getting there.

 
  • Wow
  • Like
Reactions: 4 users
Reminds me of an excellent tv show I watched a few years ago


Yeah watched that and enjoyed it.

It reminds me of a Red Dwarf episode, where there was an earlier "model" of Kryten, and Rimmer questioned why the older model was more "realistic" looking.

Kryten explained that they changed direction because people were irked the more life-like they became.

The only ones that will be made more life-like are "companion" robots.. 😆
 
  • Haha
  • Like
Reactions: 5 users