BRN Discussion Ongoing

DK6161

Regular
At least one former BRN employee appears to be of the opinion that Sean Hehir is not the ideal choice for the CEO job (and I strongly suspect he is not the only one):


View attachment 79707


As I mentioned earlier this month, Anup Vanarse recently moved back to Australia and now works for BrainEye (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-452508). When I found out he had left BrainChip last year without having another job lined up, only six months after relocating from Perth to California, I suspected personal tensions to be the reason for his departure:

“The fact that he left what looks like a secure job without another one lined up (except for his ongoing side hustle as a remote AI/ML Advisor for NZ-based Scentian Bio, which presumably doesn’t pay the bills) suggests to me he was unhappy in his previous position, possibly due to personal tensions? (Or someone wasn’t happy with him and asked him to leave?)”
(https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-435564)

And in my eyes, a number of his LinkedIn comments/likes since then appear to confirm my suspicion. Of course there will be those who’ll just shrug their shoulders and say nah, it’s nothing, just the voice of a disgruntled ex-employee, but I believe we should listen up when someone who is evidently still highly respected by many BrainChip staff members past and present (look at all the people congratulating him on his new position with BrainEye) gives posts like these a thumbs-up:

View attachment 79712


View attachment 79714
Maybe this bloke is hard to manage.
We used to have people like that working for us and glad they left.
 
  • Like
Reactions: 4 users

Frangipani

Top 20
Hey @DingoBorat,

here is another humanoid robot video for your collection!
Meet Surgie, the first ever humanoid surgeon, performing direct clinical tasks through teleoperation.


(screenshots attached)
 

  • Like
  • Wow
  • Thinking
Reactions: 7 users
Hey @DingoBorat,

here is another humanoid robot video for your collection!
Meet Surgie, the first ever humanoid surgeon, performing direct clinical tasks through teleoperation.


View attachment 79730




View attachment 79732




I reckon I'd rather have a "medical intern" working on me directly, than that shaky thing! 😛

That's only "G1" though..
G3 to G5, most likely won't need the teleoperation and will be smooth as silk..

I thought they were only going to replace boring, mundane and dangerous tasks 🤔..
 
  • Like
Reactions: 2 users

Tothemoon24

Top 20

Intel Aims For AI Edge As NVIDIA Faces Price Critique​

Recent developments showcase Intel's strategies amidst pricing concerns over NVIDIA's GPUs and market competition.​

Intel has recently announced a series of initiatives aimed at accelerating the adoption of artificial intelligence (AI) at the edge, marking a significant push to simplify the integration of AI with existing infrastructure across various industries, including retail, manufacturing, smart cities, and media. The technology giant unveiled the new Intel AI Edge Systems, Edge AI Suite, and Open Edge Platform initiative on March 19, 2025, underlining a commitment to enhancing the efficiency and performance of AI applications deployed in real-world scenarios.

Dan Rodriguez, Intel's Corporate Vice President and General Manager of the Edge Computing Group, expressed enthusiasm about the potential for AI integration in existing workflows. "I'm enthusiastic about expanding AI utilization in existing infrastructure and workflows at the edge," Rodriguez stated, highlighting the strong demand for AI-driven solutions that cater to distinct business needs.

According to industry analysts at Gartner, the landscape of data processing is poised for transformation, with predictions indicating that by the end of 2025, fifty percent of enterprise-managed data will be processed outside traditional data centers or clouds. This shift, particularly driven by the integration of AI technologies, is expected to be significant, as companies increasingly rely on data processing at the edge.

Further, it is anticipated that by 2026, at least half of all edge computing deployments will incorporate machine learning, emphasizing the growing importance of AI in data handling and decision-making processes within organizations.

Intel is currently positioned to leverage its extensive footprint in edge deployments; it has over 100,000 real-world edge implementations in collaboration with partners, many of which capitalize on AI functionalities. The new AI technologies are crafted to address multiple industry-specific challenges, underscoring Intel's commitment to enhancing performance standards in edge AI applications.

In a notable development on the following day, March 20, 2025, former Intel CEO Pat Gelsinger issued a biting critique of NVIDIA's pricing structure for its AI GPUs during an interview. He asserted that the current pricing models are "overpriced by 10,000 times the cost required for AI inference," a claim that raises eyebrows around the industry and reflects deep concerns over the affordability of implementing AI solutions.

Gelsinger attributed NVIDIA's recent success to sheer luck rather than a sound strategic framework, suggesting that the company's advancements in AI were more incidental than planned. He emphasized that the future of AI lies in inference, highlighting the need for optimized hardware and for improved cost structures as the AI market rapidly evolves.

The discussion surrounding NVIDIA's AI GPU pricing cannot be taken lightly. These GPUs, designed for data center applications, sell in the tens of thousands of dollars, making them significantly pricier than specialized hardware developed for inference tasks. Gelsinger's remarks carry implications not only for hardware production industry-wide but also for a serious reassessment of market competitiveness.

Despite Intel’s efforts in the AI domain, the company has faced considerable challenges in maintaining its competitive edge. Recently, it discontinued development of the 'Falcon Shores' AI chip and is now narrowing its focus on the 'Jaguar Shores' initiative. This strategic pivot reflects a recognition of the fierce competition present in the AI semiconductor market, wherein companies like NVIDIA and AMD are currently leading with innovative AI solutions.

Intel’s 'Gaudi' series also aims to deliver cost-effective performance. However, critics argue that its performance falls short when compared to powerhouses like NVIDIA's 'Hopper' and AMD's 'Instinct' lines. This competitive disadvantage is causing Intel to reevaluate its offerings in a landscape that increasingly prioritizes computational efficiency alongside performance metrics.

Looking ahead, Intel is pinning its hopes on the Jaguar Shores line as it seeks to re-establish a foothold in the AI market. However, skepticism remains regarding enterprises’ willingness to pivot away from NVIDIA’s established ecosystem, which is bolstered by its proprietary development environment, CUDA. This ecosystem has proven to be a powerful leverage point, facilitating varying AI applications beyond mere hardware comparisons.

As the industry navigates these turbulent waters, Gelsinger’s statements highlight the urgency for Intel to not only build technologically superior products but also foster a comprehensive ecosystem that includes robust software support and greater cost efficiency. Should the demand for optimized hardware solutions for AI inference grow, as Gelsinger suggests it might, Intel could regain its footing in a rapidly shifting market.

Ultimately, the future of AI market competition appears to be in a constant state of flux, characterized by emerging technologies and changing consumer expectations. The initiatives launched by Intel, coupled with the critical insights shared by industry veterans, make for an intriguing narrative, but the company's ability to adapt and innovate will determine its success in this burgeoning field.
 
  • Like
  • Fire
  • Love
Reactions: 24 users

Frangipani

Top 20

(image attachments)
 
  • Like
  • Love
Reactions: 10 users
When is LDA selling? By the look of things it's started already and will probably stop once the sell side evens up with the buy side again. Oh, the shorters must love LDA so they can close their positions 🥲
 
  • Like
  • Thinking
Reactions: 2 users

Intel Aims For AI Edge As NVIDIA Faces Price Critique

Recent developments showcase Intel's strategies amidst pricing concerns over NVIDIA's GPUs and market competition.

(article quoted in full above)
Wouldn’t hurt them to call out BrainChip's name in a few paragraphs about the Edge; how long is this piece of string?
 
  • Like
Reactions: 10 users

manny100

Regular
Maybe this bloke is hard to manage.
We used to have people like that working for us and glad they left.
Generally, bagging an ex-employer publicly is risky. An emotionally intelligent person would probably hold himself together and understand it's not a good career move.
Bag one employer and it may turn off other prospective employers.
He may well have been hard done by, but it's best to be discreet.
I agree that you give your all at work, but it's best not to tie yourself emotionally to your employer. Emotionally tied people risk huge personal downers if things go sour.
 
  • Like
Reactions: 17 users

manny100

Regular

Intel Aims For AI Edge As NVIDIA Faces Price Critique

Recent developments showcase Intel's strategies amidst pricing concerns over NVIDIA's GPUs and market competition.

(article quoted in full above)
This statement seems to be affecting us as well.
" However, skepticism remains regarding enterprises’ willingness to pivot away from NVIDIA’s established ecosystem, which is bolstered by its proprietary development environment, CUDA. This ecosystem has proven to be a powerful leverage point, facilitating varying AI applications beyond mere hardware comparisons."
It's hard to take clients away from NVIDIA.
 
  • Like
Reactions: 3 users

Frangipani

Top 20
Check out this newly launched project called ARCHYTAS (ARCHitectures based on unconventional accelerators for dependable/energY efficienT AI Systems), funded by the European Defence Fund (EDF):

(screenshots attached)
 
Last edited:
  • Like
  • Love
  • Wow
Reactions: 8 users

Frangipani

Top 20
Check out this newly launched project called ARCHYTAS (ARCHitectures based on unconventional accelerators for dependable/energY efficienT AI Systems), funded by the European Defence Fund (EDF):

View attachment 79763

View attachment 79764



View attachment 79765



View attachment 79766 View attachment 79767 View attachment 79768

One of the ARCHYTAS project partners is Politecnico di Milano, whose neuromorphic researchers Paolo Lunghi and Stefano Silvestrini have experimented with AKD1000 in collaboration with Gabriele Meoni, Dominik Dold, Alexander Hadjiivanov and Dario Izzo from the ESA-ESTEC (European Space Research and Technology Centre) Advanced Concepts Team in Noordwijk, the Netherlands*, as evidenced by the conference paper below, presented at the 75th International Astronautical Congress in October 2024: 🚀
*(Gabriele Meoni and Dominik Dold have since left the ACT)

A preliminary successful demonstration is given for the BrainChip Akida AKD1000 neuromorphic processor. Benchmark SNN models, both latency and rate based, exhibited a minimal loss in accuracy, compared with their ANN counterparts, with significantly lower (from −50 % to −80 %) EMAC per inference, making SNN of extreme interest for applications limited in power and energy typical of the space environment, especially considering that an even greater improvement (with respect to standard ANN running on traditional hardware) in energy consumption can be expected with SNN when implemented on actual neuromorphic devices. A research effort is still needed, especially in the search of new architectures and training methods capable to fully exploit SNN peculiarities.

The work was funded by the European Space Agency (contract number: 4000135881/21/NL/GLC/my) in the framework of the Ariadna research program.
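
For anyone wanting to sanity-check the quoted EMAC figures, here is a minimal back-of-envelope sketch (my own illustration, not from the paper); the layer sizes and spike rates below are assumed values:

```python
# Hypothetical sketch: relative EMAC savings of an event-driven SNN layer
# versus a dense ANN layer. Spike rates are illustrative, not from the paper.

def emac_savings(neurons_in: int, neurons_out: int, spike_rate: float) -> float:
    """Fractional reduction in effective MAC operations per inference.

    A dense ANN layer performs neurons_in * neurons_out MACs, whereas an
    event-driven SNN layer only accumulates for inputs that actually spike,
    i.e. roughly spike_rate * neurons_in * neurons_out operations.
    """
    ann_macs = neurons_in * neurons_out
    snn_emacs = spike_rate * ann_macs
    return 1.0 - snn_emacs / ann_macs

# With 20-50 % of input neurons spiking per timestep (assumed values),
# the reduction lands in the -50 % to -80 % range quoted above.
for rate in (0.2, 0.35, 0.5):
    print(f"spike rate {rate:.0%}: ~{emac_savings(1024, 512, rate):.0%} fewer EMACs")
```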



(screenshots attached)
 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 22 users

Frangipani

Top 20
Some bold claims from India that we should take with unhealthy amounts of sodium chloride: Towards the end of this video clip posted today, JSA LAABS asserts that AGI is here. I see. Erm, not.

Nevertheless, it will be interesting to see how capable their foundational model will really be, once released.

At least we can easily refute their 2024 (!) claim that NeuroCore is “the world’s first neuromorphic AI accelerator delivering unmatched speed, efficiency and intelligence”.

Judging from the number of 👍🏻👍🏻 👍🏻, not too many people seem to agree with that statement anyway.

(Although as the BRN threads here and elsewhere prove, the number of likes a post gets doesn’t necessarily correlate with its content’s grade of truthfulness… 😉)




(screenshots attached)
 
  • Thinking
  • Like
  • Wow
Reactions: 9 users

Frangipani

Top 20


The Robots Are Coming – Physical AI and the Edge Opportunity



By Pete Bernard
CEO, EDGE AI FOUNDATION


We have imagined “robots” for thousands of years, dating back to 3000 B.C. when Egyptian water clocks used human figurines to strike hour bells. They have infused our cultural future with movies like Metropolis in 1927 through C3PO and R2D2 in Star Wars and more.

Practically speaking, today’s working robots are much less glamorous. They have been developed over the past decades to handle dangerous and repetitive tasks and resemble nothing like humans. They roll through warehouses and mines and deposit fertilizer on our farms. They also extend our perceptual reach through aerial and ground-based inspection systems, using visual and other sensor input.

Now that edge AI technology has evolved and grown ever more mature, the notion of physical AI is taking hold, and it promises to be a critical platform that is fundamentally enabled by edge AI technologies. A generally agreed definition of physical AI is:

A combination of AI workloads running on autonomous robotic systems that include physical actuators.

This is truly “AI in the real world” in that these systems physically interact with the real world through motion, touch, vision, and physical control mechanisms including grasping, carrying and more. It can combine a full suite of edge AI technologies in a single machine. Executing AI workloads where the data is created will be critical for the low-latency and low-power needs of these platforms. These could range from:

  • tinyML workloads running in its sensor networks and sensor fusion
  • Neuromorphic computing for high performance/ultra-low power, fast latency and wide dynamic range scenarios
  • CNN/RNN/DNN models running AI vision on image feeds, LIDAR or other “seeing” and “perceiving” platforms
  • Transformer-based generative AI models (including reasoning) performing context, understanding and human-machine interface functions
These are all designed into one system, with the complex orchestration, safety/security and controls needed for enterprise-grade deployment, management and servicing. In addition, as higher TOPS/watt, lower-power/higher-performance edge AI platforms come to the market, this will positively impact the mobility, cost and battery life of these systems.



Robotics is where AI meets physics. Robots require sophisticated physical capabilities to move, grasp, extend, sense and perform a wide range of tasks, but they are also software platforms that require training and decision making, making them prime candidates for one of the most sophisticated combinations of AI capabilities. The advent of accelerated semiconductor platforms, advanced sensor networks, sophisticated middleware for orchestration, tuned AI models, emerging powerful SLMs, applications and high-performance communication networks is ushering in a new era of physical AI.

Let’s level set with a taxonomy of robots and a definition of terms. There are many ways to describe robots – they can be sliced by environment (warehouse) or by function (payload) or even by mobility (un-manned aerial vehicles). Here is a sample of some types of robots in deployment today:

  • Pre-programmed robots
    • These can be heavy industrial robots, used in very controlled environments for repetitive and precise manufacturing tasks. These robots are typically fixed behind protective barriers and cost hundreds of thousands of dollars.
  • Tele-operated robots
    • These are used as “range extenders” for humans to perform inspections, observations, or repairs in challenging human environments – including drones or underwater robots for welding and repair. Perhaps the best-known tele-operated robots were the robots sent to Mars by NASA in the last few decades. There has also been a fish robot named SoFi designed to mimic propulsion via its tail and twin fins, swimming in the Pacific Ocean at depths of up to 18 meters. [1]
  • Autonomous robots
    • You probably have one of these in your house in the form of a vacuum-cleaner robot navigating without supervision and relying on its sensors for navigation. Recently we have seen a number of “lawnmower” robots introduced to take on this laborious task. In agriculture, robots are already inspecting and even harvesting crops in an industry with chronic labor shortages[2]. There is also a thriving industry for autonomous warehouse robots – including in Amazon warehouses. [3]
  • Augmenting robots
    • These are designed to aid or enhance human capabilities, such as prosthetic limbs or exoskeletons. You were probably first exposed to this category of robots when you watched “The Six Million Dollar Man” on TV – but on a more serious note, they are providing incredible capabilities for amputees and enabling safer work environments for physical labor.[4]
  • Humanoid robots
    • Here’s where it gets interesting. We have developed a bi-pedal world – why not develop robots that work in that world as it’s been designed? Humanoid robots resemble humans – bi-pedal (or quad-pedal in the case of Boston Dynamics) – can communicate in natural language and facial expressions, and perform a broad range of tasks using their limbs, hands and human-like appendages. Quad-pedal robots have only been deployed in the low thousands worldwide, and we are still in the very early stages of development, deployment, and reasonable cost. Companies like Enchanted Tools[5] are demonstrating humanoid robots that can move amongst humans to carry lighter loads, deliver items, and communicate in natural language. Although humanoid robots will catch the bulk of the media’s attention in coming years, and face the most “cultural impact,” the other robot categories will also benefit greatly from generative AI and drive significantly greater efficiencies across industries.


How Generative AI on the edge will impact Physical AI

It’s hard to overstate the impact that Generative AI will have on the field of robotics. Beyond enabling much more natural communication and understanding, Generative AI model architectures like Transformers will be combined with other model architectures like CNNs, Isolation Forests and others to provide context and human-machine interfaces for image recognition, anomaly detection and observational learning. It will be a “full stack” of edge AI from metal to cloud.
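
As a concrete, hedged illustration of the anomaly-detection piece mentioned above, here is a minimal sketch using scikit-learn's IsolationForest on synthetic sensor features; the data, feature dimensions and contamination rate are assumptions, not anything from the article:

```python
# Minimal sketch: flagging anomalous sensor readings with an Isolation Forest.
# Synthetic data and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 4))   # routine telemetry
faults = rng.normal(loc=5.0, scale=1.0, size=(10, 4))    # injected anomalies
readings = np.vstack([normal, faults])

detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(readings)                   # -1 = anomaly, 1 = normal

print("flagged as anomalous:", int((labels == -1).sum()), "of", len(readings))
```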


Let’s take a look at the differences between traditional AI used in robotics and what Generative AI can bring:

Traditional AI

  • Rule-Based Approach: Traditional AI relies on strict rules set by programmers – like an actor following a precise script. These rules dictate how the AI system behaves, processes data, and makes decisions.
  • Focused Adaptability: ML models such as CNN/RNN/DNN are designed for focused tasks and operate based on predefined instructions. They run in very resource-constrained environments at very low power and cost.
  • Data Analysis and Prediction: Non-generative AI excels at data analysis, pattern recognition, and making predictions. However, there is no creation of new data; it merely processes existing information.

Generative AI

  • Learning from Data Examples: Generative AI learns from data examples – essentially “tokenized movement.” It adapts and evolves based on the patterns it recognizes in the training data – like a drummer who watches their teacher and keeps improving. This can be done in the physical world or in a simulated world for safer and more extensive “observational training.”
  • Creating New Data: Unlike traditional AI, generative AI can create new data based on experience and can adapt to new surroundings or conditions. However, this requires significantly more TOPS/W and RAM, which can drive up cost and limit battery-powered applicability.
  • Applications in Robotics: Generative AI can drive new designs and implementations in robotics that leverage its ability to generate new data, whether that is new communication/conversational techniques (in multiple languages), new movement scenarios or other creative problem solving.



In summary, while many forms of edge AI are excellent and necessary for analyzing existing data and making predictions in resource-constrained and low-power environments, generative AI at the edge will now add the ability to create new data and adapt dynamically based on experience. The application of Generative AI to robotics will unlock observational learning, rich communication, and a much broader application of robots across our industries and our lives.



Safe and Ethical Robotics

Whenever robots are mentioned, the comparison to “evil robots” from our culture is not far behind: The Terminator, Ultron or the Gunslinger from Westworld. At the same time, we have enjoyed anthropomorphized robots like C3PO and R2D2, or Wall-E. And then there are the ones in between, like those from the movie The Creator.

As attention is paid to the prospect of Generative AI moving toward AGI, what guardrails, best practices and outright legislation exist to keep robotic efforts – paired with Generative AI – in the category of good or neutral?

Isaac Asimov famously penned his three laws of robotics as part of his short story “Runaround” in 1942:[6]
  • A robot shall not harm a human, or by inaction allow a human to come to harm
  • A robot shall obey any instruction given to it by a human
  • A robot shall avoid actions or situations that could cause it to come to harm itself
In 2021, Dr. Kate Darling – a research specialist in human-robot interaction, robot ethics and intellectual property theory and policy at the Massachusetts Institute of Technology (MIT) Media Lab – wrote an article in The Guardian proposing that we think about robots more like animals than a rival to humans. Once we make that shift, we can better discuss who are responsible for robot actions and who is responsible for the societal impacts that robots bring, such as transformations in the labor market.[7]

The European Union published “Civil law rules on robotics” back in 2017 that addressed the definition of a robot, where liability lies, the role of insurance and other key items. In 2023 a law was introduced in Massachusetts in the US that would 1) ban the sale and use of weapons-mounted robotic devices, 2) ban the use of robotic devices to threaten or harass, and 3) ban the usage of robotic devices to physically restrain an individual. It’s unclear how or when similar legislation will make it to the federal level.



Observational Learning Is a Game Changer

In the world of edge AI, training has happened on “the cloud” or in server-class GPU environments and inferencing has happened on the light edge. With the introduction of reinforcement learning and new work in continuous learning we will see the edge becoming a much more viable area for training.

However, in physical AI platforms, observational learning (sometimes referred to as behavior cloning) in AI allows robots to learn new skills simply by watching humans – in reality or in a simulated physical environment. Instead of being programmed step-by-step, robots can make connections in their neural networks based on observing human behavior and actions. This kind of unstructured training will enable robots to better understand the nuances of a given task and make their interaction with humans much more natural.
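
To make the behaviour-cloning idea concrete, here is a minimal, self-contained sketch that fits a linear policy to recorded state-action pairs by least squares; the "demonstrations" are synthetic stand-ins, not a real robotics dataset, and real systems would use logged human teleoperation data and far richer models:

```python
# Minimal behaviour-cloning sketch: learn a policy that imitates demonstrations.
# The "expert" here is a made-up linear controller for illustration only.
import numpy as np

rng = np.random.default_rng(42)
true_gain = np.array([[0.8, -0.2], [0.1, 0.5]])   # hidden expert policy

states = rng.normal(size=(1000, 2))                               # observed robot states
actions = states @ true_gain.T + 0.01 * rng.normal(size=(1000, 2))  # expert actions

# Behaviour cloning = supervised regression from states to expert actions.
learned_gain, *_ = np.linalg.lstsq(states, actions, rcond=None)

test_state = np.array([0.3, -1.0])
print("expert action:", test_state @ true_gain.T)
print("cloned action:", test_state @ learned_gain)
```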


There have been a number of key advances in AI models for observational learning, starting with CNN model types and recently leveraging diffusion model types such as the one presented in the 2023 Microsoft Research paper Imitating Human Behaviour with Diffusion Models.[8]

In March of 2024, NVIDIA introduced Gr00t[9], its own foundation model designed for observational learning on its ISAAC/JETSON robotics platforms. It was demonstrated at the NVIDIA GTC keynote by Jensen Huang and also leverages NVIDIA's Omniverse “digital twin” environment to develop virtualized physical environments that can train robots via observational learning in a safe and flexible virtualized setting. This was updated in 2025 to Gr00t N1, along with a new “Newton” physics engine. We’re now seeing foundation models tuned for robotics platforms[10] like Gr00t, but also RFM-1 by Covariant, among others. Expect this area to proliferate with options, much like foundation models for LLMs in the cloud.

Robotics is a “three computer problem”: there is AI model training in the cloud using generative AI and LLMs, there is model execution and ROS running on the robotics platform itself, and there is a simulation/digital twin environment to safely and efficiently develop and train.



The Edge AI Opportunity for Robotics

“Everything That Moves Will Be Robotic” – Jensen Huang

The confluence of generative AI and robotics is swinging the robotic pendulum back into the spotlight. Although Boston Dynamics has only deployed around 1,500 Spot robots worldwide so far, expect many more, and in many more configurations, throughout our warehouses, our farms, and our manufacturing floors. Expect many more humanoid experiments, and expect a hype wave washing over us with plenty of media coverage of every failure.

Running generative AI on these platforms will require significant TOPS horsepower and high-performance memory subsystems, in addition to advanced controls, actuators and sensors. We will see “datacenter” class semiconductors moving down into these platforms, but just as interesting will be edge-native semiconductor platforms moving up into this space, with the kinds of ruggedized thermal and physical properties, low power and integrated communications needed. We will also see many new stand-alone AI acceleration silicon offerings paired with traditional server-class silicon. Mainstream platforms like phones and AI PCs will help drive down costs with their market scale.

However, in addition to requiring top end semiconductors and plenty of RAM, robotic platforms – especially humanoid ones – will require very sophisticated sensors, actuators, and electro-mechanical equipment – costing tens of thousands of dollars for the foreseeable future.

To keep things in perspective, Goldman Sachs[11] forecasted a 2035 humanoid robot TAM of US$38bn, with shipments reaching 1.4m units. That’s not a tremendous unit volume for humanoid robots (PCs ship around 250m units per year, smartphones north of a billion) – we can expect orders of magnitude more “functional form factor robots” in warehouses, vacuuming homes and doing other focused tasks.
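
For a rough feel of what those Goldman Sachs numbers imply, the back-of-envelope below (my own arithmetic, not the report's) divides the forecast TAM by forecast shipments to get an implied average selling price:

```python
# Back-of-envelope on the 2035 humanoid forecast quoted above.
tam_usd = 38e9    # forecast total addressable market, US$
units = 1.4e6     # forecast unit shipments
print(f"implied average selling price: ~${tam_usd / units:,.0f} per robot")
# -> roughly $27,000 per unit, consistent with "tens of thousands of dollars"
```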

These platforms – like the ones now available from Qualcomm, NVIDIA, NXP, Analog Devices and more – are attracting developers who are taking their server-class software skills and combining them with embedded computing expertise. Like mobility, robotics and physical AI are challenging developers and designers in new ways and provide a unique opportunity for workforce development, skill enhancement and career growth.

A key challenge here is to avoid the pitfalls of Industry 4.0 and IoT – how do we collaborate as an industry to help standardize on data sharing models, digital twin models, code portability and other elements of the robotics stack? If this area becomes more fractured and siloed we could see significant delays in real deployments of more advanced genAI driven robots.

Developers, designers and scientists are pushing the envelope and closing the gap between our imaginations and reality. As with cloud-based AI, the use of physical AI will require important guardrails and best practices, not only to keep us safe but to make this newfound expansion of physical AI capabilities accretive to our society.

We cannot overstate the impact that new robotics platforms will have on our culture, our labor force, and our existential mindset. We’re at a turning point, as edge AI technologies like physical AI combine traditional sensor AI and machine learning with generative AI, providing a call to action for all technology providers in the edge AI “stack,” from metal to cloud, as well as an opportunity for businesses across segments to rethink how these new platforms will leverage this new edge AI technology in ways that are still in our imagination.


[1] https://www.csail.mit.edu/research/sofi-soft-robotic-fish

[2] https://builtin.com/robotics/farming-agricultural-robots

[3] https://www.aboutamazon.com/news/operations/amazon-introduces-new-robotics-solutions

[4] https://www.automate.org/robotics/service-robots/service-robots-exoskeleton

[5] https://enchanted.tools/

[6] https://www.goodreads.com/en/book/show/48928553

[7] https://tdwi.org/articles/2021/06/1...drails-into-ai-driven-robotic-assistants.aspx

[8] https://www.microsoft.com/en-us/res...tating-human-behaviour-with-diffusion-models/

[9] https://nvidianews.nvidia.com/news/foundation-model-isaac-robotics-platform

[10] Foundation Models in Robotics: Applications, Challenges, and the Future – https://arxiv.org/html/2312.07843v1

[11] https://www.goldmansachs.com/intell...n-humanoid-robot-the-ai-accelerant/report.pdf
 
  • Like
  • Fire
  • Thinking
Reactions: 13 users

Frangipani

Top 20


Enhancing Wireless Communication with AI-Optimized RF Systems​


By Rashi Bajpai · March 20, 2025

Introduction: The Convergence of AI and RF Engineering
The integration of Artificial Intelligence (AI) into Radio Frequency (RF) systems marks a paradigm shift in wireless communications. Traditional RF design relies on static, rule-based optimization, whereas AI enables dynamic, data-driven adaptation. With the rise of 5G, mmWave, satellite communications, and radar technologies, AI-driven RF solutions are crucial for maximizing spectral efficiency, improving signal integrity, and reducing energy consumption.

The Urgency for AI in RF Systems: Industry Challenges & Market Trends

The RF industry is under immense pressure to meet growing demands for higher data rates, better spectral utilization, and reduced latency. One of the key challenges is Dynamic Spectrum Management, where the increasing scarcity of available spectrum forces telecom providers to adopt intelligent allocation mechanisms. AI-powered systems can predict and allocate spectrum dynamically, ensuring optimal utilization and minimizing congestion.
Another significant challenge is Electromagnetic Interference (EMI) Mitigation. As the density of wireless devices grows, the likelihood of interference between different RF signals increases. AI can analyze vast amounts of data in real-time to predict and mitigate EMI, thus improving overall signal integrity.

Power Efficiency is another major concern, especially in battery-operated and energy-constrained applications. AI-driven power control mechanisms in RF front-ends enable systems to dynamically adjust transmission power based on network conditions, leading to significant energy savings. Additionally, Edge Processing Demands are increasing with the advent of autonomous systems that require real-time, AI-driven RF adaptation for high-speed decision-making and low-latency communications.

Advanced AI Techniques in RF System Optimization

Industry leaders like Qualcomm, Ericsson, and NVIDIA are investing heavily in AI-driven RF innovations. The following AI methodologies are transforming RF architectures:

Reinforcement Learning for Adaptive Spectrum Allocation

AI-driven Cognitive Radio Networks (CRNs) leverage Deep Reinforcement Learning (DRL) to optimize spectrum usage dynamically. By continuously learning from environmental conditions and past allocations, DRL can predict interference patterns and proactively assign spectrum in a way that maximizes efficiency. This allows for the intelligent utilization of both sub-6 GHz and mmWave bands, ensuring high data throughput while minimizing collisions and latency.
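
As a toy illustration of the reinforcement-learning idea, the sketch below uses a stateless, bandit-style Q-learning loop to pick the least-congested of a handful of channels; the interference probabilities are invented, and a real CRN would use a far richer DRL formulation:

```python
# Toy sketch: epsilon-greedy Q-learning for picking the least-congested channel.
# Channel interference probabilities are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
interference_prob = np.array([0.7, 0.4, 0.1, 0.5])   # per-channel collision chance
q = np.zeros(len(interference_prob))                  # value estimate per channel
alpha, epsilon = 0.1, 0.1

for step in range(5000):
    if rng.random() < epsilon:
        channel = int(rng.integers(len(q)))           # explore
    else:
        channel = int(np.argmax(q))                   # exploit best-known channel
    reward = 0.0 if rng.random() < interference_prob[channel] else 1.0
    q[channel] += alpha * (reward - q[channel])       # incremental value update

print("learned channel values:", np.round(q, 2))
print("preferred channel:", int(np.argmax(q)))        # should converge to channel 2
```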

Deep Neural Networks for RF Signal Classification & Modulation Recognition

Traditional RF signal classification methods struggle in complex, noisy environments. AI-based techniques such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks enhance modulation recognition accuracy, even in fading channels. These deep learning models can also be used for RF fingerprinting, which improves security by uniquely identifying signal sources. Furthermore, AI-based anomaly detection helps identify and counteract jamming or spoofing attempts in critical communication systems.
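
For illustration, here is a minimal 1-D CNN for modulation recognition on raw I/Q samples, written in PyTorch; the architecture, class count and frame length are assumptions, not taken from any cited system:

```python
# Minimal sketch of a 1-D CNN for modulation recognition on raw I/Q samples.
# Architecture and class count are illustrative assumptions.
import torch
import torch.nn as nn

class ModulationCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3),   # 2 input channels: I and Q
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                      # global pooling over time
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, iq: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(iq).squeeze(-1))

# One synthetic batch of 8 I/Q frames, 128 samples each.
logits = ModulationCNN()(torch.randn(8, 2, 128))
print(logits.shape)   # torch.Size([8, 4])
```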

AI-Driven Beamforming for Massive MIMO Systems

Massive Multiple-Input Multiple-Output (MIMO) is a cornerstone technology for 5G and 6G networks. AI-driven beamforming techniques use deep reinforcement learning to dynamically adjust transmission beams, improving directional accuracy and link reliability. Additionally, unsupervised clustering methods help optimize beam selection by analyzing traffic load variations, ensuring that the best possible configuration is applied in real-time.
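
As a small numerical illustration of what beamforming buys you, the sketch below uses classical maximum-ratio transmission (the textbook baseline, not the learning-based beam selection described above) to compare array gain against naive equal weights; the antenna count and channel model are assumptions:

```python
# Textbook maximum-ratio-transmission (MRT) sketch: the classical baseline that
# learning-based beam selection builds on, not the DRL method itself.
import numpy as np

rng = np.random.default_rng(7)
antennas = 64
# Rayleigh-fading channel vector from a 64-antenna array to a single user.
h = (rng.normal(size=antennas) + 1j * rng.normal(size=antennas)) / np.sqrt(2)

w_mrt = h.conj() / np.linalg.norm(h)           # matched (MRT) weights, unit power
w_iso = np.ones(antennas) / np.sqrt(antennas)  # naive equal-weight baseline

gain_mrt = abs(h @ w_mrt) ** 2
gain_iso = abs(h @ w_iso) ** 2
print(f"array gain with MRT vs. naive weights: {gain_mrt / gain_iso:.1f}x")
```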

Generative Adversarial Networks (GANs) for RF Signal Synthesis

GANs are being explored for RF waveform synthesis, where they generate realistic signal patterns that adapt to changing environmental conditions. This capability is particularly beneficial in electronic warfare (EW) applications, where adaptive waveform generation can enhance jamming resilience. GANs are also useful for RF data augmentation, allowing AI models to be trained on synthetic RF datasets when real-world data is scarce.

AI-Enabled Digital Predistortion (DPD) for Power Amplifiers

Power amplifiers (PAs) suffer from nonlinearities that introduce spectral regrowth, degrading signal quality. AI-driven Digital Predistortion (DPD) techniques leverage neural network-based PA modeling to compensate for these distortions in real time. Bayesian optimization is used to fine-tune DPD parameters dynamically, ensuring optimal performance under varying transmission conditions. Additionally, adaptive biasing techniques help improve PA efficiency by adjusting power consumption based on the input signal’s requirements.
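
To ground the DPD idea, here is a minimal memoryless polynomial predistorter fitted by least squares against a toy PA model (indirect learning); the PA coefficients and polynomial order are assumptions, and the neural-network and Bayesian variants described above would replace this simple fit:

```python
# Minimal sketch: memoryless polynomial digital predistortion (DPD).
# Toy PA model and polynomial order are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

def pa(x):
    """Toy power-amplifier model with a mild odd-order (AM/AM) nonlinearity."""
    return x - 0.2 * x * np.abs(x) ** 2

x = (rng.normal(size=4000) + 1j * rng.normal(size=4000)) * 0.3   # input signal
y = pa(x)

# Indirect learning: fit a polynomial mapping PA output back to PA input,
# then use it as a predistorter in front of the PA.
basis = np.column_stack([y, y * np.abs(y) ** 2, y * np.abs(y) ** 4])
coeffs, *_ = np.linalg.lstsq(basis, x, rcond=None)

def predistort(u):
    return coeffs[0] * u + coeffs[1] * u * np.abs(u) ** 2 + coeffs[2] * u * np.abs(u) ** 4

err_raw = np.mean(np.abs(pa(x) - x) ** 2)
err_dpd = np.mean(np.abs(pa(predistort(x)) - x) ** 2)
print(f"mean-square distortion without DPD: {err_raw:.2e}, with DPD: {err_dpd:.2e}")
```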

Industry-Specific Applications of AI-Optimized RF Systems

The impact of AI-driven RF innovation extends across multiple high-tech industries:

Telecommunications: AI-Powered 5G & 6G Networks

AI plays a crucial role in optimizing adaptive coding and modulation (ACM) techniques, allowing for dynamic throughput adjustments based on network conditions. Additionally, AI-enhanced network slicing enables operators to allocate bandwidth efficiently, ensuring quality of service (QoS) for diverse applications. AI-based predictive analytics also assist in proactive interference management, allowing networks to mitigate potential disruptions before they occur.

Defense & Aerospace: Cognitive RF for Military Applications

In military communications, AI is revolutionizing RF situational awareness, enabling autonomous systems to detect and analyze threats in real-time. AI-driven electronic countermeasures (ECMs) help counteract enemy jamming techniques, ensuring robust and secure battlefield communications. Machine learning algorithms are also being deployed for predictive maintenance of radar and RF systems, reducing operational downtime and enhancing mission readiness.

Automotive & IoT: AI-Driven RF Optimization for V2X Communication

Vehicle-to-everything (V2X) communication requires reliable, low-latency RF links for applications such as autonomous driving and smart traffic management. AI-powered spectrum sharing ensures that vehicular networks can coexist efficiently with other wireless systems. Predictive congestion control algorithms allow urban IoT deployments to adapt to traffic variations dynamically, improving efficiency. Additionally, AI-driven adaptive RF front-end tuning enhances communication reliability in connected vehicles by automatically adjusting antenna parameters based on driving conditions.

Satellite Communications: AI-Enabled Adaptive Link Optimization

Satellite communication systems benefit from AI-driven link adaptation, where AI models adjust signal parameters based on atmospheric conditions such as rain fade and ionospheric disturbances. Machine learning algorithms are also being used for RF interference classification, helping satellite networks distinguish between different types of interference sources. Predictive beam hopping strategies optimize resource allocation in non-geostationary satellite constellations, improving coverage and efficiency.

The Future of AI-Optimized RF: Key Challenges and Technological Roadmap

While AI is revolutionizing RF systems, several roadblocks must be addressed. One major challenge is computational overhead, as implementing AI at the edge requires energy-efficient neuromorphic computing solutions. The lack of standardization in AI-driven RF methodologies also hinders widespread adoption, necessitating global collaboration to establish common frameworks. Furthermore, security vulnerabilities pose risks, as adversarial attacks on AI models can compromise RF system integrity.

Future Innovations

One promising area is Quantum Machine Learning for RF Signal Processing, which could enable ultra-low-latency decision-making in complex RF environments. Another key advancement is Federated Learning for Secure Distributed RF Intelligence, allowing multiple RF systems to share AI models while preserving data privacy. Additionally, AI-Optimized RF ASICs & Chipsets are expected to revolutionize real-time signal processing by embedding AI functionalities directly into hardware.

Conclusion

AI-driven RF optimization is at the forefront of wireless communication evolution, offering unparalleled efficiency, adaptability, and intelligence. Industry pioneers are integrating AI into RF design to enhance spectrum utilization, interference mitigation, and power efficiency. As AI algorithms and RF hardware continue to co-evolve, the fusion of these technologies will redefine the future of telecommunications, defense, IoT, and satellite communications.



Rashi Bajpai (https://www.eletimes.com/)
Rashi Bajpai is a Sub-Editor associated with ELE Times. She is an engineer with a specialization in Computer Science and Application. She focuses deeply on the new facets of artificial intelligence and other emerging technologies. Her passion for science, writing, and research brings fresh insights into her articles and updates on technology and innovation.
 
  • Like
  • Love
  • Fire
Reactions: 19 users

yogi

Regular
Hi FMF,

Looks like your suspicions were correct!

"Arquimea has deployed Akida with a Prophesee camera on a drone to detect distressed swimmers and surfers in the ocean helping lifeguards scale their services for large beach areas, opting for an event-based computing solution for its superior efficiency and consistently high-quality results."

I wonder if this has anything to do with the recent hiring of Finn Ryder to Development Representative at BrainChip, since he was a Senior Lifeguard and First Responder for the City of Huntington Beach for 5 years prior to joining us?


View attachment 79709


View attachment 79711 View attachment 79710

Wow, BrainChip removed it from their website. Wonder why?
 
  • Thinking
  • Like
  • Wow
Reactions: 16 users

MDhere

Top 20
Wow Brainchip removed it from their website wonder why
Maybe there was a typo or maybe just maybe it will be part of a price sensitive announcement?
 
  • Like
  • Thinking
Reactions: 7 users

jtardif999

Regular
ADVISOR UPSIDE

Nasdaq to Open New Office on ‘Y’All Street

Davy Crockett famously said “You may all go to hell, and I will go to Texas.” Nasdaq seems to agree (at least with that last part).

The exchange operator announced Tuesday that it will set up a new regional headquarters in Dallas, with an expected opening by yearend, and is planning additional investments in Texas. The goal for the new site isn’t just about winning listings; it will also include part of Nasdaq’s corporate solutions and financial crime management technology businesses.

The move is just the latest development for the Lone Star State’s burgeoning “Y’all Street,” which is already set to become home to the New York Stock Exchange Texas (formerly NYSE Chicago), and the Texas Stock Exchange — an upstart exchange financed by names including BlackRock, Charles Schwab and Citadel.

Yippee Ki-Yay

Though a second US headquarters in Dallas is a new chapter for the New York-based Nasdaq, it’s had a presence in Texas for more than a decade, establishing an office in Irving in 2013. The decision to double down on the largest state in the US mainland comes as a result of Nasdaq’s growing reach in the South as well as the region’s economic success:


Today, Nasdaq generates more than $750 million in Texas and the Southeast region, and has about 800 clients in the state, including corporate issuers, financial institutions, and asset managers, according to the exchange operator.


Texas is also home to more than 200 companies listed on the Nasdaq Composite Index, representing nearly $2 trillion in market cap as of December.
“Nasdaq is deeply ingrained in the fabric of the Texas economy,” Adena Friedman, chair and CEO of Nasdaq, said in a statement.

What’s So Great About Texas? Just like New York and California, Texas hosts businesses both big and small, and it has the benefits of a large and growing labor force, no corporate or personal income tax, and being a “right-to-work” state, meaning workers can’t be required to join a union as a condition of employment. Because of its business-friendly environment, Texas contains more Fortune 500 companies than any other state, including Hewlett Packard, Tesla, and Charles Schwab.

And if any of those names don’t impress you, even Chuck E. Cheese is based in Texas.
 
  • Like
  • Fire
  • Love
Reactions: 5 users


The Robots Are Coming – Physical AI and the Edge Opportunity

Hero Image


By Pete Bernard
CEO, EDGE AI FOUNDATION


We have imagined “robots” for thousands of years, dating back to 3000 B.C. when Egyptian water clocks used human figurines to strike hour bells. They have infused our cultural future with movies like Metropolis in 1927 through C3PO and R2D2 in Star Wars and more.

Practically speaking, today’s working robots are much less glamorous. They have been developed over the past decades to handle dangerous and repetitive tasks and resemble nothing like humans. They roll through warehouses, mines, and deposit fertilizer on our farms. They also extend our perceptual reach through aerial and ground-based inspection systems, using visual and other sensor input.

Now that edge AI technology has evolved and getting ever more mature, the notion of physical AI is taking hold and it promises to be a critical platform that is fundamentally enabled by edge AI technologies. A generally agreed definition of physical AI is:

A combination of AI workloads running on autonomous robotic systems that include physical actuators.

This is truly “AI in the real world” in that these systems physically interact with the real world through motion, touch, vision, and physical control mechanisms including grasping, carrying and more. It can combine a full suite of edge AI technologies in a single machine. Executing AI workloads where the data is created will be critical for the low latency and low needs of these platforms. These could range from:

  • tinyML workloads running in its sensor networks and sensor fusion
  • Neuromorphic computing for high performance/ultra-low power, fast latency and wide dynamic range scenarios
  • CNN/RNN/DNN models running AI vision on image feeds, LIDAR or other “seeing” and “perceiving” platforms
  • Transformer-based generative AI models (including reasoning) performing context, understanding and human-machine interface functions
These are designed all into one system, with the complex orchestration, safety/security and controls needed for enterprise grade deployment, management and servicing. In addition, as the TOPS/watt and lower power/higher performance edge AI platforms come to the market, this will positively impact the mobility, cost and battery life of these systems.



Robotics is where AI meets physics. Robots require sophisticated physical capabilities to move, grasp, extend, sense, and perform a wide range of tasks, but they are also software platforms that require training and decision making, making them prime candidates for one of the most sophisticated combinations of AI capabilities. The advent of accelerated semiconductor platforms, advanced sensor networks, sophisticated middleware for orchestration, tuned AI models, emerging powerful SLMs, applications, and high-performance communication networks is ushering in a new era of physical AI.

Let’s level set with a taxonomy of robots and a definition of terms. There are many ways to describe robots – they can be sliced by environment (warehouse), by function (payload), or even by mobility (unmanned aerial vehicles). Here is a sample of some types of robots in deployment today:

  • Pre-programmed robots
    • These are heavy industrial robots, used in very controlled environments for repetitive and precise manufacturing tasks. They are typically fixed behind protective barriers and cost hundreds of thousands of dollars.
  • Tele-operated robots
    • These are used as “range extenders” for humans to perform inspections, observations, or repairs in environments that are challenging for humans – including drones and underwater robots for welding and repair. Perhaps the best-known tele-operated robots are those NASA has sent to Mars over the last few decades. There has also been a fish robot named SoFi, designed to mimic propulsion via its tail and twin fins, which has swum in the Pacific Ocean at depths of up to 18 meters. [1]
  • Autonomous robots
    • You probably have one of these in your house in the form of a robot vacuum cleaner, navigating without supervision and relying on its sensors. Recently we have seen a number of “lawnmower” robots introduced to take on that laborious task. In agriculture, robots are already inspecting and even harvesting crops in an industry with chronic labor shortages.[2] There is also a thriving industry for autonomous warehouse robots – including in Amazon warehouses. [3]
  • Augmenting robots
    • These are designed to aid or enhance human capabilities, such as prosthetic limbs or exoskeletons. You were probably first exposed to this category of robots when you watched “The Six Million Dollar Man” on TV – but on a more serious note, they are providing incredible capabilities for amputees and enabling safer work environments for physical labor.[4]
  • Humanoid robots
    • Here’s where it gets interesting. We have built a bipedal world – why not develop robots that work in that world as it has been designed? Humanoid robots resemble humans: they are bipedal (or quadrupedal in the case of Boston Dynamics), can communicate through natural language and facial expressions, and can perform a broad range of tasks using their limbs, hands, and human-like appendages. Quadrupedal robots have only been deployed in the low thousands worldwide, and we are still in the very early stages of development, deployment, and reasonable cost. Companies like Enchanted Tools[5] are demonstrating humanoid robots that can move among humans to carry lighter loads, deliver items, and communicate in natural language. Although humanoid robots will catch the bulk of the media’s attention in the coming years, and will have the most cultural impact, the other robot categories will also benefit greatly from generative AI and drive significantly greater efficiencies across industries.


How Generative AI on the edge will impact Physical AI

It’s hard to overstate the impact that Generative AI will have on the field of robotics. Beyond enabling much more natural communication and understanding, Generative AI model architectures like Transformers will be combined with other model architectures such as CNNs, Isolation Forests, and others to provide context and human-machine interfaces for image recognition, anomaly detection, and observational learning. It will be a “full stack” of edge AI from metal to cloud.
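As a small illustration of the non-generative side of that stack, here is a sketch of the kind of anomaly detection alluded to above, using scikit-learn’s IsolationForest on synthetic joint-torque telemetry. The data, fault pattern, and contamination setting are invented for the example; a real deployment would train on logged telemetry from the robot itself.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic "normal" joint-torque telemetry: 4 joints, 1,000 samples.
normal = rng.normal(loc=0.0, scale=0.2, size=(1000, 4))

# A few injected faults: torques saturating far outside the usual range.
faults = rng.normal(loc=2.5, scale=0.3, size=(10, 4))

X_train = normal
X_test = np.vstack([normal[:20], faults])

# The Isolation Forest learns what "normal" looks like and scores outliers.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(X_train)

pred = detector.predict(X_test)  # +1 = normal, -1 = anomaly
print(f"flagged {(pred == -1).sum()} of {len(X_test)} samples as anomalous")
```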

Let’s take a look at the differences between traditional AI used in robotics and what Generative AI can bring:

Traditional AI:
  • Rule-Based Approach: Traditional AI relies on strict rules set by programmers – like an actor following a precise script. These rules dictate how the AI system behaves, processes data, and makes decisions.
  • Focused Adaptability: ML models such as CNNs/RNNs/DNNs are designed for focused tasks and operate on predefined instructions. They run in very resource-constrained environments at very low power and cost.
  • Data Analysis and Prediction: Non-generative AI excels at data analysis, pattern recognition, and prediction. However, it does not create new data; it merely processes existing information.

Generative AI:
  • Learning from Data Examples: Generative AI learns from data examples – essentially “tokenized movement.” It adapts and evolves based on the patterns it recognizes in the training data – like a drummer who watches their teacher and keeps improving. This training can happen in the physical world or in a simulated world for safer and more extensive “observational training.”
  • Creating New Data: Unlike traditional AI, generative AI can create new data based on experience and adapt to new surroundings or conditions. However, this requires significantly more TOPS/W and RAM, which can drive up cost and limit battery-powered applicability.
  • Applications in Robotics: Generative AI can drive new robotic designs and implementations that leverage its ability to generate new data, whether new communication/conversational techniques (in multiple languages), new movement scenarios, or other creative problem solving.


In summary, while many forms of edge AI are excellent and necessary for analyzing existing data and making predictions in resource-constrained, low-power environments, generative AI at the edge adds the ability to create new data and adapt dynamically based on experience. The application of Generative AI to robotics will unlock observational learning, rich communication, and a much broader application of robots across our industries and our lives.



Safe and Ethical Robotics

Whenever robots are mentioned, comparisons to the “evil robots” of our culture are not far behind: The Terminator, Ultron, or the Gunslinger from Westworld. At the same time, we have enjoyed anthropomorphized robots like C-3PO, R2-D2, and WALL-E. And then there are the ones in between, like those in the movie The Creator.

As attention turns to the prospect of Generative AI moving toward AGI, what guardrails, best practices, and outright legislation exist to keep robotic efforts – paired with Generative AI – in the category of good or neutral?

Isaac Asimov famously penned his three laws of robotics in 1942 as part of his short story “Runaround”:[6]
  • A robot shall not harm a human, or by inaction allow a human to come to harm
  • A robot shall obey instructions given to it by a human, except where doing so would conflict with the first law
  • A robot shall protect its own existence, as long as doing so does not conflict with the first or second law
In 2021, Dr. Kate Darling – a research specialist in human-robot interaction, robot ethics, and intellectual property theory and policy at the Massachusetts Institute of Technology (MIT) Media Lab – wrote an article in The Guardian proposing that we think about robots more like animals than rivals to humans. Once we make that shift, we can better discuss who is responsible for robot actions and for the societal impacts that robots bring, such as transformations in the labor market.[7]

The European Union published “Civil law rules on robotics” back in 2017, addressing the definition of a robot, where liability lies, the role of insurance, and other key items. In 2023 a bill was introduced in Massachusetts in the US that would 1) ban the sale and use of weapons-mounted robotic devices, 2) ban the use of robotic devices to threaten or harass, and 3) ban the use of robotic devices to physically restrain an individual. It’s unclear how or when similar legislation will make it to the federal level.



Observational Learning Is a Game Changer

In the world of edge AI, training has happened in the cloud or in server-class GPU environments, and inferencing has happened on the light edge. With the introduction of reinforcement learning and new work in continuous learning, the edge will become a much more viable place for training.

However, in physical AI platforms, observational learning (sometimes referred to as behavior cloning) in AI allows robots to learn new skills simply by watching humans – in reality or in a simulated physical environment. Instead of being programmed step-by-step, robots can make connections in their neural networks based on observing human behavior and actions. This kind of unstructured training will enable robots to better understand the nuances of a given task and make their interaction with humans much more natural.
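For a sense of what “learning by watching” looks like at its simplest, here is a minimal behavior-cloning sketch in PyTorch: a scripted “expert” provides observation-action pairs, and a small network is regressed onto them. This is a toy illustration of the general technique using invented data and dimensions, not the diffusion- or foundation-model approaches discussed next.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "expert" demonstrations: the expert nudges an end-effector toward a
# target, so action = 0.5 * (target - position). In practice these pairs
# would come from teleoperation logs, motion capture, or simulation.
def expert_action(obs: torch.Tensor) -> torch.Tensor:
    position, target = obs[:, :2], obs[:, 2:]
    return 0.5 * (target - position)

obs = torch.rand(2048, 4) * 2 - 1        # [x, y, target_x, target_y]
actions = expert_action(obs)

# Small policy network that imitates the expert from observations alone.
policy = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(200):
    pred = policy(obs)
    # Behavior cloning reduces to supervised regression on the expert's actions.
    loss = nn.functional.mse_loss(pred, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final imitation loss: {loss.item():.5f}")
```

Diffusion-based imitation, as in the Microsoft Research paper cited below, replaces this direct regression with a generative model over actions, which handles multi-modal demonstrations more gracefully.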


There have been a number of key advances in AI models for observational learning, starting with CNN model types and more recently leveraging diffusion model types such as the one presented in the 2023 Microsoft Research paper Imitating Human Behaviour with Diffusion Models.[8]

In March 2024, NVIDIA introduced GR00T[9], its own foundation model designed for observational learning on its Isaac/Jetson robotics platforms. It was demonstrated by Jensen Huang during the NVIDIA GTC keynote and also leverages NVIDIA's Omniverse “digital twin” environment to build virtualized physical environments that can train robots via observational learning safely and flexibly. In 2025 it was updated to GR00T N1, alongside a new “Newton” physics engine. We are now seeing foundation models tuned for robotics platforms[10], such as GR00T and RFM-1 by Covariant, among others. Expect this area to proliferate with options, much like foundation models for LLMs in the cloud.

Robotics is often described as a “three computer problem”: an AI model is trained in the cloud using generative AI and LLMs; model execution and ROS run on the robotics platform itself; and a simulation/digital-twin environment is used to develop and train safely and efficiently.



The Edge AI Opportunity for Robotics

“Everything That Moves Will Be Robotic” – Jensen Huang

The confluence of generative AI and robotics is swinging the spotlight back toward robotics. Although Boston Dynamics has only deployed around 1,500 Spot robots worldwide so far, expect many more, and in many more configurations, throughout our warehouses, our farms, and our manufacturing floors. Expect many more humanoid experiments, and expect a hype wave to wash over us, with plenty of media coverage of every failure.

Running generative AI on these platforms will require significant TOPS horsepower and high-performance memory subsystems, in addition to advanced controls, actuators, and sensors. We will see “datacenter-class” semiconductors moving down into these platforms, but just as interesting will be edge-native semiconductor platforms moving up into this space, with the ruggedized thermal and physical properties, low power, and integrated communications needed. We will also see many new stand-alone AI acceleration chips paired with traditional server-class silicon. Mainstream platforms like phones and AI PCs will help drive down costs with their market scale.

However, in addition to requiring top end semiconductors and plenty of RAM, robotic platforms – especially humanoid ones – will require very sophisticated sensors, actuators, and electro-mechanical equipment – costing tens of thousands of dollars for the foreseeable future.

To keep things in perspective, Goldman Sachs[11] forecasted a 2035 humanoid robot TAM of US$38bn, with shipments reaching 1.4m units – an average selling price of roughly US$27,000 per unit. That is not a tremendous unit volume for humanoid robots (PCs ship around 250m units per year, smartphones north of a billion); we can expect orders of magnitude more “functional form factor” robots working in warehouses, vacuuming homes, and doing other focused tasks.

These platforms – like the ones now available from Qualcomm, NVIDIA, NXP, Analog Devices, and more – are attracting developers who are combining their server-class software skills with embedded computing expertise. Like mobility, robotics and physical AI are challenging developers and designers in new ways and provide a unique opportunity for workforce development, skill enhancement, and career growth.

A key challenge here is to avoid the pitfalls of Industry 4.0 and IoT: how do we collaborate as an industry to standardize on data-sharing models, digital twin models, code portability, and other elements of the robotics stack? If this area becomes more fractured and siloed, we could see significant delays in real deployments of more advanced genAI-driven robots.

Developers, designers, and scientists are pushing the envelope and closing the gap between our imaginations and reality. As with cloud-based AI, the use of physical AI will require important guardrails and best practices, not only to keep us safe but to make this newfound expansion of physical AI capabilities accretive to our society.

We cannot underestimate the impact that new robotics platforms will have on our culture, our labor force, and our existential mindset. We are at a turning point: edge AI technologies like physical AI are combining traditional sensor AI and machine learning with generative AI, providing a call to action for all technology providers in the edge AI “stack,” from metal to cloud, as well as an opportunity for businesses across segments to rethink how these new platforms will leverage edge AI technology in ways that are still in our imagination.


[1] https://www.csail.mit.edu/research/sofi-soft-robotic-fish

[2] https://builtin.com/robotics/farming-agricultural-robots

[3] https://www.aboutamazon.com/news/operations/amazon-introduces-new-robotics-solutions

[4] https://www.automate.org/robotics/service-robots/service-robots-exoskeleton

[5] https://enchanted.tools/

[6] https://www.goodreads.com/en/book/show/48928553

[7] https://tdwi.org/articles/2021/06/1...drails-into-ai-driven-robotic-assistants.aspx

[8] https://www.microsoft.com/en-us/res...tating-human-behaviour-with-diffusion-models/

[9] https://nvidianews.nvidia.com/news/foundation-model-isaac-robotics-platform

[10] Foundation Models in Robotics: Applications, Challenges, and the Future – https://arxiv.org/html/2312.07843v1

[11] https://www.goldmansachs.com/intell...n-humanoid-robot-the-ai-accelerant/report.pdf
Very sexy post Frangipani!
This is where it's at!
Nudge nudge, wink wink, say no more, say no more..




c3po.gif


2026 is going to be "Our" year!
 
  • Haha
Reactions: 9 users