BRN Discussion Ongoing

Frangipani

Regular
Can’t be long now before we find out more about Nikunj Kotecha’s Edge AI stealth startup, with the article below about him published on TechBullion earlier today. Whether or not you consider it blatant, over-the-top self-promotion, the piece also happens to be excellent publicity for his former employer… 😀 Intentionally or unintentionally.

(Although at the same time it begs the question - once again - why he is no longer with our company…)





ARTIFICIAL INTELLIGENCE

Nikunj Kotecha: “ML and AI are little-explored technologies with great potential, which we reveal every day”


By Miller Victor
Posted on November 15, 2024


By 2026, artificial intelligence is forecasted to generate up to 90% of all internet content!

This striking forecast sparks conversations about the quality and diversity of generated content. Among the leading experts shaping this transformation is Nikunj Kotecha, a seasoned machine learning leader with ten years of experience in advanced AI solutions for global clients. Experts like him are engaged in training AI models and programming them with complex mathematical algorithms to optimize business processes. These models help improve customer service, streamline internal processes, and achieve technological leadership in the market.

Nikunj holds certifications from Amazon Web Services (AWS) as an AI Practitioner and from DeepLearning.ai in Generative AI with large language models (LLMs). His work focuses on developing efficient, secure, and privacy-oriented AI solutions for semiconductor accelerators at the Edge. As a technical lead, he has successfully guided cross-functional teams, pushing the limits of Edge AI and neuromorphic computing.

During his time as a researcher at the Rochester Institute of Technology (RIT) from 2018 to 2020, Nikunj investigated innovative methods to enhance American Sign Language (ASL) video translation. By integrating multimodal features and developing Transformer networks (a first at the time), Nikunj improved translation accuracy by 10%, as measured by BLEU score. His other work, on Bayesian inference for skin lesions, further advanced AI’s role in healthcare, developing models that defer classification in cases of uncertainty, leading to a 5% accuracy boost.
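For readers unfamiliar with the metric cited above, BLEU scores a candidate translation by its n-gram overlap with a reference translation. A simplified unigram-only sketch in plain Python (real BLEU combines clipped precisions for n = 1 to 4 with a geometric mean and smoothing, so treat this as illustrative only):

```python
import math
from collections import Counter

def unigram_bleu(candidate, reference):
    # Clipped unigram precision: each reference token can be matched
    # at most as many times as it occurs in the reference.
    cand, ref = candidate.split(), reference.split()
    overlap = sum((Counter(cand) & Counter(ref)).values())
    precision = overlap / len(cand)
    # Brevity penalty discourages unnaturally short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

score = unigram_bleu("the cat sat on the mat", "the cat is on the mat")  # 5 of 6 unigrams match
```

A 10% BLEU improvement, as claimed here, means relative gains on exactly this kind of overlap score.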

From 2021 to 2023, Nikunj served as a Senior Solutions Architect at BrainChip Inc., an Australian company specializing in brain-inspired AI hardware.
He led BrainChip’s technical team in securing a multi-year agreement licensing the Intellectual Property (IP) of its Akida AI accelerator to MegaChips, a Japan-based global fabless semiconductor company. The multi-year license was valued in the millions, with a further $2 million forecast in royalties.

Nikunj’s technical expertise facilitated the development of the next-generation neuromorphic processor and an updated MetaTF Software Development Kit (SDK), publicly available for developers to build custom neuromorphic models. Together, these support newer Transformer networks and features such as residual connections, 8-bit integer quantization, and post-training quantization. Another notable advancement under his guidance was the implementation of Temporal Event-based Neural Networks (TENNs), an innovative state space model used for denoising audio in hearing aids and earphones. TENNs demonstrated superior performance, achieving state-of-the-art results in audio clarity and noise suppression, with improvements in PESQ and STOI of 16% and 4% respectively on the Microsoft denoising challenge. Nikunj also developed industry models such as AkidaNet FOMO, optimizing object detection speed and reducing detection delay by 20%.
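To give a concrete sense of what 8-bit post-training quantization involves, here is a minimal symmetric per-tensor sketch in plain Python. The function names and the exact scheme are assumptions for illustration, not the MetaTF API:

```python
def quantize_int8(weights):
    # Symmetric per-tensor quantization: map the largest-magnitude
    # weight onto the int8 limit (127) and scale the rest to match.
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate floats; the rounding error is the quantization noise.
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.01, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

The largest weight here (-1.27) sets the scale and maps exactly onto -127; every other weight loses at most half a quantization step of precision, which is why post-training quantization typically costs little accuracy while cutting memory and compute.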

Nikunj Kotecha has made a groundbreaking contribution to the AI industry through his work on BrainChip technology and the launch of the BrainChip University AI Accelerator Program. His work has advanced AI hardware at the Edge.
The Akida architecture centers on Neural Processing Units (NPUs), each paired with dedicated Static Random Access Memory (SRAM) and coupled together as a node. This design, with its neuromorphic processing, delivers a low-power, high-efficiency, dedicated AI accelerator in an SoC, compared with traditional deep learning accelerators. Recognizing the need to demonstrate these capabilities, Nikunj developed a benchmark framework showcasing BrainChip’s efficiency in real-world AI applications. He further simplified this tool into a no-code version that allows developers to assess performance without needing deep technical expertise.

With a deep understanding of Edge AI and application-specific integrated circuits (ASICs), Nikunj has actively spread awareness of this technology. He led workshops such as “Bringing Development of BrainChip Akida Neuromorphic Models” at the Edge Impulse Imaging event, where he also participated as a guest speaker in the webinar “Neuromorphic Deep Dive into Next-Gen Edge AI Solutions Using Edge Impulse”.

In addition to his technical achievements, Nikunj led the BrainChip University AI Accelerator Program, a global initiative that helps students learn about neuromorphic AI through hands-on projects and access to BrainChip technology. His lectures at top universities like Carnegie Mellon, Arizona State University and Cornell Tech have inspired a new generation of AI engineers, building a strong talent pool and expanding the reach of BrainChip’s technology.


Nikunj’s contributions have significantly advanced AI hardware and education, creating lasting impacts on the industry and fostering the next generation of AI professionals.

As an active member of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM), Nikunj contributes frequently to AI research. Currently serving as a peer reviewer for the 19th IEEE International Conference on Automatic Face and Gesture Recognition, he has also published articles in leading venues, including the 16th IEEE International Conference on Automatic Face and Gesture Recognition and a 2021 ACM journal. He also participates in independent research projects, such as the creation of “Indic MMLU-Pro”, a benchmark dataset for Indian languages that supports the development of LLMs for those regions. His involvement in technical hackathons and competitions extends to judging and jury roles. He has evaluated projects in hackathons such as the Patient Journey Challenge, Galaxy One: 2024 Hackathon, MediHacks 2024, and AI for Change by Launchology. At competitions such as the Globee Awards for Business and the Globee Awards for Women in Business, he evaluated the achievements and innovations of participants and organizations.

Nikunj Kotecha is one of the best AI and ML specialists, with internationally certified qualifications. His extensive work across research and commercial sectors has positioned him as a leader in cutting-edge AI technology. His contributions not only advance the AI field but also inspire future developments that will benefit industries worldwide. His ability to evaluate and drive innovation within the industry has a profound influence on the growth and responsible development of AI.

 
  • Like
  • Fire
  • Wow
Reactions: 35 users
Rather interesting read... esp MegaChips... wonder if that's PM, PQ or PA... we wait :unsure:


 
  • Like
  • Fire
  • Love
Reactions: 23 users
Snap :LOL:

Was just reading and posting it too.
 
  • Like
  • Haha
Reactions: 13 users



BrainChip Inc

CES 2025


30 min

Suite 29-312, Venetian Tower
CES 2025 is an opportunity to connect with partners old and new. Thank you for the opportunity to connect. Once again, BrainChip will host Live @ CES2025, our podcast where we talk “All Things AI” with the industry's sharpest minds. Please reserve a little extra time to chat with us on the podcast.



 
  • Like
Reactions: 19 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 16 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
A paper showing the AKIDA 1000 being integrated into an on-board processor to help detect fugitive methane emissions from ageing oil and gas infrastructure, helping operators locate and mitigate these leaks in order to address the global climate crisis. The processor works in conjunction with NASA’s core Flight System (cFS).



EXTRACT FROM PAGE 6

.1 BrainSat neuromorphic processor for CubeSats

One of the objectives of our work was to propose an algorithm that could be executed on the novel neuromorphic on-board processor (OBP) developed by BrainSat [12], demonstrating its capability to serve small satellite mission needs and showing the potential of AI-based edge computing. The OBP was specifically designed according to mission requirements established for the monitoring of point-source methane emissions with a 6U CubeSat, as detailed in [13].

The designed OBP includes two PC104 modules, connected through a mezzanine connector. It integrates both CPU and FPGA capabilities and can cater for on-board computer functions, payload data processing and downlink management. The OBP is equipped with the Akida 1000 neuromorphic processor, selected for its design maturity, real-time processing capabilities and flight heritage. The chip is tailored for event-based processing, featuring 80 Neuromorphic Processing Units (NPUs) with 100 KB of SRAM each, supporting up to 1.2 million virtual neurons and 10 billion virtual synapses with up to 8-bit precision, ideal for inference-heavy tasks. An FPGA provides glue logic to implement necessary data protocols and serve as a soft CPU for OBC functions. Additionally, a 12 GB flight-proven Micron Solid State Device (SSD) is included to provide non-volatile on-board memory. This sub-system is constrained to a 0.5 U volume and is estimated to consume a low power of less than 4 W.

The processor’s software architecture was designed to minimize resource consumption. It is equipped with Real-Time Executive for Multiprocessor Systems (RTEMS) as its Real-Time Operating System (RTOS), on top of which NASA’s core Flight System (cFS) is used. cFS provides a flight-proven product on which OBP functions are built using open-source applications. For more information about the BrainSat architecture, refer to our co-published proceeding [12].
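The chip figures in that extract can be sanity-checked with quick arithmetic (assuming the usual 1 KB = 1024 bytes convention, which the extract does not state):

```python
npus = 80
sram_per_npu_kb = 100

# Total on-chip SRAM across all NPUs: about 7.8 MB.
total_sram_mb = npus * sram_per_npu_kb / 1024

# 1.2 million virtual neurons spread over 80 NPUs: 15,000 per NPU.
neurons_per_npu = 1_200_000 / npus

# 10 billion virtual synapses over 1.2 million neurons: roughly 8,333 each.
synapses_per_neuron = 10_000_000_000 / 1_200_000
```

Under 8 MB of SRAM and under 4 W for 1.2 million neurons is the kind of budget that makes the chip attractive for a 0.5 U CubeSat payload slot.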



 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 85 users

Guzzi62

Regular
Can’t be long now before we find out more about Nikunj Kotecha’s Edge AI Stealth Startup, with below article about him published on TechBullion earlier today. Consider it blatant, over-the-top self-promotion or not, the piece also happens to be excellent publicity for his former employer… 😀
Intentionally or unintentionally.

(Although at the same time it begs the question - once again - why he is no longer with our company…)

View attachment 72910

View attachment 72908
View attachment 72909



ARTIFICIAL INTELLIGENCE

Nikunj Kotecha: “ML and AI are little–explored technologies with great potential, which we reveal every day”​

34e5404ac3214bbbd73cc54f2d4ce695-80x80.jpg

ByMiller Victor
Posted on November 15, 2024
Nikunj-Kotecha.jpg


By 2026, artificial intelligence is forecasted to generate up to 90% of all internet content!

These impressive statistics spark conversations about the quality and diversity of the generated content. Among the leading experts shaping this transformation is Nikunj Kotecha, a seasoned Machine Learning Leader with ten years of experience in advanced AI solutions for global clients. Experts like him are currently engaged in “training” AI models and programming them using complex mathematical algorithms to optimize various business processes. These models help improve customer service, optimize internal processes, and achieve technological leadership in the market.

Nikunj holds certifications from Amazon Web Services (AWS) as an AI Practitioner expert and from DeepLearning.ai in Generative AI with large language models (LLMs). His work focuses on developing efficient, secure, and privacy oriented AI solutions for semiconductor accelerators at the Edge. As a Technical lead, he has successfully guided cross-functional teams, pushing the limits of Edge AI and Neuromorphic computing.

During his time as a researcher at the Rochester Institute of Technology (RIT) from 2018 to 2020, Nikunj investigated innovative methods to enhance American Sign Language (ASL) video translations. By integrating multimodal features and developing Transformer networks, first at the time, Nikunj improved translation accuracy by 10% measured by BLEU score. His other work in Bayesian inference for skin lesions further advanced AI’s role in healthcare, developing models that confidently defer classification in cases of uncertainty, leading to 5% accuracy boost.

From 2021 to 2023, Nikunj led as a Senior Solutions Architect at BrainChip Inc., an Australian company specializing in brain-inspired AI Hardware.
He led BrainChip technical team in securing a multi-year license agreement for its Intellectual Property (IP) of Akida AI accelerator with MegaChips, a japanese based global fabless semiconductor company. The multi-year licensing valued in millions and a $2 million forecast expected in royalties
.

Nikunj’s technical expertise facilitated the development of the next-generation Neuromorphic processor and an updated MetaTF Software Development Kit (SDK) publicly available for developers to build custom Neuromorphic models. Combined together, it supports the newer Transformer networks and features such as Residual connections, 8-bit Integer Quantization, and Post-Training Quantization. Another notable advancement under his expertise was the implementation of Temporal Event-Based Network (TENNs), an innovative state space model used for denoising audio in hearing aids and earphones devices. TENNs demonstrated superior performance, achieving state of the art results in audio clarity and noise suppression measured by improvements in PESQ and STOI of 16% and 4% respectively on the Microsoft denoising challenge. Nikunj also developed industry models such as Akidanet FOMO optimizing object detection speed and reducing detection delay by 20%.

Nikunj Kotecha has made a groundbreaking contribution to the AI industry by creating BrainChip technology and launching the BrainChip University AI Accelerator Program. His work has revolutionized AI hardware at the Edge.
Its architecture centers on Neural Processing Units (NPUs) paired with dedicated Static Random Access Memory (SRAM) coupled together as a Node. This unique design with its neuromorphic processing delivers low power, high efficiency, and a dedicated AI accelerator in an SoC compared to any traditional deep learning accelerators. Recognizing the need to demonstrate these unique capabilities, Nikunj developed a benchmark framework to demonstrate BrainChip’s core capabilities, showcasing its efficiency in real-world AI applications. He further simplified this tool into a no-code version that allows develops to assess performance without need deep technical expertise.

With a deep understanding of Edge AI and application-specific integrated circuits (ASICs), Nikunj actively spread awareness and learning of this technology. He led workshops such as “Bringing Development of BrainChip Akida Neuromorphic models at Edge Impulse Imaging event. There he also participated as a guest speaker for a webinar “Neuromorphic Deep Dive into Next-Gen Edge AI solutions using Edge Impulse”.

In addition to his technical achievements, Nikunj led the BrainChip University AI Accelerator Program, a global initiative that helps students learn about neuromorphic AI through hands-on projects and access to BrainChip technology. His lectures at top universities like Carnegie Mellon, Arizona State University and Cornell Tech have inspired a new generation of AI engineers, building a strong talent pool and expanding the reach of BrainChip’s technology.


Nikunj’s contributions have significantly advanced AI hardware and education, creating lasting impacts on the industry and fostering the next generation of AI professionals.

As an active member of Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM), Nikunj frequently contributed to AI research. Currently serving as a peer reviewer for the 19th IEEE International Conference on Automatic Face and Gesture Recognition, he has also published articles in leading venues such as 16th IEEE International Conference on Automatic Face and Gesture Recognition and at a 2021 journal of ACM. The AI expert also participates in independent research projects
such as creation of Benchmark dataset called “Indic MMLU-Pro” for Indian languages that helps in the development of LLMs for such regions. His involvement in technical Hackathons and Competitions extends to judging roles and member of the jury respectively. He has evaluated multiple projects of professionals in hackathons such as Patient Journey Challenge, Galaxy One: 2024 Hackathon, Medihacks 2024 and AI for Change by Launchology. At competitions such as Globee Awards for Business and Globee Awards for Women in Business, he evaluated the achievements and innovations of participants and organizations.

Nikunj Kotecha is one of the best AI and ML specialists with internationally certified qualifications! His extensive work across research and commercial sectors has uniquely positioned him as a leader in cutting edge AI technology. His contributions not only advance the AI field but also inspire future developments that will benefit industries worldwide. His ability to evaluate and drive innovations within the industry has a profound influence on the growth and responsible development of AI.

He seems to have been "the man" at BrainChip when he was there!!

Another notable advancement under his expertise was the implementation of TENNs, securing the MegaChips IP deal and starting the university program.

Wow, wow, we want him back: Now!

LOL
 
  • Like
  • Haha
  • Fire
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Managing The Huge Power Demands Of AI Everywhere​


More efficient hardware, better planning, and better utilization of available power can help significantly.

November 14th, 2024 - By: Ann Mutschler

Before generative AI burst onto the scene, no one predicted how much energy would be needed to power AI systems. Those numbers are just starting to come into focus, and so is the urgency about how to sustain it all.
AI power demand is expected to surge 550% by 2026, from 8 TWh in 2024 to 52 TWh, before rising another 1,150% to 652 TWh by 2030. Commensurately, U.S. power grid planners have nearly doubled the estimated U.S. load growth forecast, from 2.6% to 4.7%, an increase of nearly 38 gigawatts through 2028, the equivalent of adding two more states the size of New York to the U.S. power grid in five years.
Microsoft and Google, meanwhile, report electricity consumption has surpassed the power usage of more than 100 countries, and Google’s latest report shows a 50% rise in greenhouse gas emissions from 2019 to 2023, partly due to data centers.
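The growth figures quoted above are easy to sanity-check. A quick back-of-envelope in Python (the TWh values come straight from the article; the percentage arithmetic is mine):

```python
# Sanity check of the AI power-demand growth figures quoted above.
# TWh values are from the article; the percentage arithmetic is mine.
def pct_increase(start, end):
    """Percentage growth from start to end."""
    return (end - start) / start * 100

demand_2024 = 8    # TWh
demand_2026 = 52   # TWh
demand_2030 = 652  # TWh

print(pct_increase(demand_2024, demand_2026))  # 550.0 -> "surge 550% by 2026"
print(pct_increase(demand_2026, demand_2030))  # ~1153.8 -> "another 1,150%"
```

Both stated percentages check out against the absolute TWh figures.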
This has put the entire tech sector on a worrisome trajectory. The chip industry had been doing well in terms of the amount of power being consumed for computation, which was matched somewhat with efficiency gains. Until AI, there wasn’t the big push for so much more compute power as is seen today, and many report they were caught by surprise. This may be why there is so much research into alternatives to traditional power sources, even including nuclear power plants, which are now being planned, built, or recommissioned.
“AI models will continue to become larger and smarter, fueling the need for more compute, which increases demand for power as part of a virtuous cycle,” said Dermot O’Driscoll, vice president of product solutions in Arm’s Infrastructure Line of Business. “Finding ways to reduce the power requirements for these large data centers is paramount to achieving the societal breakthroughs and realizing the AI promise. Today’s data centers already consume lots of power. Globally, 460 terawatt-hours (TWh) of electricity are needed annually, which is the equivalent to the entire country of Germany.”
To fully harness the potential of AI, the industry must rethink compute architectures and designs, O’Driscoll says. But while many of the largest AI hyperscalers are using Arm cores to reduce power, that’s only part of the solution. AI searches need to deliver more reliable and targeted information for each query, and AI models themselves need to become more efficient.
“AI applications are driving unprecedented power demand,” said William Ruby, senior director of product management for power analysis products at Synopsys. “The International Energy Agency in its 2024 report indicated that a ChatGPT request consumes 10X of the amount of power consumed by a traditional Google search. We are seeing this play out for semiconductor ICs. Power consumption of SoCs for high-performance computing applications is now in the hundreds of watts, and in some cases exceeding a kilowatt.”
The rollout and rapid adoption of AI was as much of a surprise to the tech world as it was to the power utilities. Until a couple years ago, most people assumed AI was plodding along at the same pace it had been for decades.
“You could argue the internet back in the mid-to-late ’90s was a big life changing thing — one of those once-in-a-generation type technologies,” said Steven Woo, distinguished inventor and fellow at Rambus. “Smart phones are another one. But with AI the ramp is faster, and the potential is like the internet — and in some ways maybe even greater. With so many people experimenting, and with the user base being able to do more sophisticated things that need more power, the semiconductor industry is being asked to try and become more power-efficient. In a lot of ways these architectures are becoming more power efficient. It’s just that you’re still getting dwarfed by the increase in the amount of compute you want to do for more advanced AI. It’s one of those things where you just can’t keep up with the demand. You are making things more power-efficient, but it’s just not enough, so now we must find ways to get more power. The models are getting bigger. The calculations are more complex. The hardware is getting more sophisticated. So the key things that happen are that we’re getting more sophisticated as the model is getting bigger, more accurate, and all that. But a lot of it now is coming down to how we power all this stuff, and then how we cool it. Those are the big questions.”
AI and sustainability
Where will all the power come from? Do the engineering teams that are writing the training algorithms need to start being more power-aware?
“Sustainability is something that we have been addressing in the semiconductor industry for 20 years,” said Rich Goldman, director at Ansys. “There’s been awareness that we need low-power designs, and software to enable low-power designs. Today, it comes down to an issue of engineering ethics and morality. Do our customers care about it when they buy a chip or when they buy a training model? I don’t think they make their decisions based on that.”
What also comes into play is how engineers are rewarded, evaluated, and assessed. “Commitment to sustainability is typically not included on what they must put into the product, so they aren’t motivated, except by their own internal ethics and the company’s ethics towards that. It’s the age-old ethics versus dollars in business, and in general we know who wins that. It’s a huge issue. Maybe we should be teaching ethics in engineering in school, because they’re not going to stop making big, powerful LLMs and training on these huge data centers,” Goldman noted.
Still, it’s going to take huge numbers of processors to run AI models. “So you want to take your data centers and rip those CPUs out and put in GPUs that run millions of times more efficiently to get more compute power out of it,” he said. “And while you’re doing that, you’re increasing your power efficiency. It might seem counterintuitive, because GPUs take so much power, but per compute cycle it’s much, much less. Given that you have limited space in your data center — because you’re not going to add more space — you’re going to take out the inefficient processors and put in GPUs. This is a bit self-serving for NVIDIA, because they sell more GPUs that way, but it’s true. So even today, when we’re at Hopper H100s, H200s — and even though Blackwell is coming, which is 10 or 100 times better — people are buying the Hopper because it’s so much more efficient than what they have. In the meantime, they’re going to save more on power expense than they are in buying and replacing. Then, when Blackwell becomes available, they’ll replace the Hopper with Blackwell, and that’s sufficient for them in a dollar sense, which helps with the power issue. That’s the way we have to tackle it. We have to look at the dollars involved and make it attractive for people to expend less power based on the dollars that go to the bottom line for the company.”
Meeting the AI energy/power challenges
Meeting the current and upcoming energy and power demands from large-scale deployments of AI, creates three challenges. “One is how to deliver power,” said Woo. “There’s a lot of talk in the news about nuclear power, or newer ways of supplying nuclear power-class amounts of power. Two is how to deal with the thermals. All these systems are not just trying to become more powerful. They’re doing it in small spaces. You’re anticipating all this power, and you’ve got to figure out how to cool all of that. Three involves opportunities for co-design, making the hardware and the software work together to gain other efficiencies. You try to find ways to make better use of what the hardware is giving you through software. Then, on the semiconductor side of things, supplying power is really challenging, and one of the biggest things that’s going on right now in data centers is the move to a higher voltage supply of power.”
At the very least, product development teams must consider energy efficiency at initial stages of the development process.
“You cannot really address energy efficiency at the tail end of the process, because by then the architecture has been defined and many design decisions have already been made,” said Synopsys’ Ruby. “Energy efficiency in some sense is an equal opportunity challenge, where every stage in the development process can contribute to energy efficiency, with the understanding that earlier stages can have a bigger impact than later stages. Collectively, every seemingly small decision can have a profound impact on a chip’s overall power consumption.”
A ‘shift-left’ methodology, designing hardware and writing software simultaneously and early enough in the development process can have a profound effect on energy efficiency. “This includes decisions such as overall hardware architecture, hardware versus software partitioning, software and compiler optimizations, memory subsystem architecture, application of SoC level power management techniques such as dynamic voltage and frequency scaling (DVFS) – to name just a few,” he said. It also requires running realistic application workloads to understand the impact.
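One of the techniques Ruby lists, dynamic voltage and frequency scaling (DVFS), saves power super-linearly because dynamic CMOS switching power scales roughly as P = C·V²·f. A toy illustration of that relationship (my own sketch with made-up numbers, not from the article):

```python
# Toy model of dynamic voltage and frequency scaling (DVFS).
# Dynamic CMOS switching power scales roughly as P = C * V^2 * f,
# so lowering voltage together with frequency saves power super-linearly.
# All numbers here are invented for illustration only.
def dynamic_power(c_eff, voltage, freq_hz):
    """Approximate dynamic switching power in watts."""
    return c_eff * voltage**2 * freq_hz

nominal = dynamic_power(1e-9, 1.0, 2.0e9)  # 2.0 W at 1.0 V, 2.0 GHz
scaled = dynamic_power(1e-9, 0.8, 1.5e9)   # ~0.96 W at 0.8 V, 1.5 GHz

# A 25% frequency reduction yields roughly a 52% power reduction here.
print(nominal, scaled, scaled / nominal)
```

This super-linear payoff is why DVFS is cited as an SoC-level lever alongside architectural choices.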
That’s only part of the problem. The mindset around sustainability also needs to change. “We should be thinking about it, but I don’t think the industry as a whole is doing that,” said Sharad Chole, chief scientist at Expedera. “It’s only about cost at the moment. It’s not about sustainability, unfortunately.”
But as generative AI models and algorithms become more stable, the costs can become more predictable. That includes how many data center resources will be required, and ultimately it can include how much power will be needed.
“Unlike previous iterations of model architectures, where architectures were changing and everyone had slightly different tweaks, the industry-recognized models for Gen AI have been stable for quite a long time,” Chole said. “The transformer architecture is the basis of everything. And there is innovation in terms of what support needs to be there for workloads, which is very useful.”
There is a good understanding of what needs to be optimized, as well, which needs to be balanced against the cost of retraining a model. “If it’s something like training a 4 billion- or 5 billion-parameter model, that’s going to take 30,000 GPUs three months,” Chole said. “It’s a huge cost to pay.”
Once those formulas are established, then it becomes possible to determine how much power will be needed to run the generative AI models when they’re implemented.
“OpenAI has said it can predict the performance of its model 3.5 and model 4 while projecting the scaling laws onto growth of the model versus the training dataset,” he explained. “That is very useful, because then the companies can plan that it’s going to take them 10 times more computation, or three times more data sets, to be able to get to the next generation accuracy improvement. These laws are still being used, and even though they were developed for a very small set of models, they can scale well in terms of the model insights into this. The closed-source companies that are developing the models — for example, OpenAI, Anthropic, and others are developing models that are not open — can optimize in a way that we don’t understand. They can optimize for both training as well as the deployment of the model, because they have better understanding of it. And because they’re investing billions of dollars into it, they must have better understanding of how it needs to be scaled. ‘In the next two years, this is how much funding I need to raise.’ It is very predictable. That allows users to say, ‘We are going to set this much compute. We’re going to need to build this many data centers, and this is how much power I’m going to need.’ It is planned quite well.”
Stranded power
A key aspect of managing the increasing power demands of large-scale AI involves data center design and utilization.
“The data center marketplace is extremely inefficient, and the inefficiency is a consequence of the split between the two market spaces of the building infrastructure and the EDA side where the applications run,” said Hassan Moezzi, founder of Future Facilities, which was acquired by Cadence in July 2022. “People talk about the power consumption and the disruption that it’s bringing to the marketplace. The AI equipment, like NVIDIA has, is far more power-hungry perhaps than the previous CPU-based products, and the equivalency is not there because no matter how much processing capability you throw at the marketplace, the market wants more. No matter how good and how efficiently you make your chips and technology, that’s not really where the power issue comes from. The power issue comes from the divide.”
According to Cato Digital, in 2021, 105 gigawatts of power was created for data centers, but well over 30% of that was never used, Moezzi said. “This is called stranded capacity. The data center is there to give you the power to run your applications. That’s the only reason you build these very expensive buildings and run them at huge costs. And the elephant in the room is the stranded capacity. However, if you speak to anybody in the data center business, especially on the infrastructural side, and you say, ‘stranded capacity,’ they all nod, and say they know about it. They don’t talk about it because they assume this is only about over-provisioning to safeguard risk. The truth is that some of it is over-provisioning deliberately, which is stranded capacity. But they do over-provisioning because they don’t know what’s going on inside the data center from a physics point of view. The 30%-plus statistic doesn’t do the situation justice in the enterprise marketplace, which is anybody who’s not hyperscale, since those companies are more efficient given their engineering orientation, and they take care of things. But the enterprises, the CoLos, the government data centers, they are far more inefficient. This means if you buy a megawatt of capacity — or you think you bought a megawatt — you will be lucky as an enterprise to get 60% of that. In other words, it’s more than 30%.”
This is important because a lot of people are jumping up and down about environmental impacts of data centers and the grids being tapped out. “But we’re saying you can slow this process down,” Moezzi said. “You can’t stop data centers being built, but you can slow it down by a huge margin by utilizing what you’ve already got as stranded capacity.”
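The stranded-capacity percentages above translate into concrete numbers. A quick sketch (the figures are from the article; the arithmetic is my own):

```python
# Stranded-capacity arithmetic from the figures quoted above.
# 2021: 105 GW provisioned for data centers, "well over 30%" never used;
# an enterprise buying 1 MW may see only about 60% of it usable.
provisioned_gw = 105
stranded_fraction = 0.30  # "well over 30%" -- treat as a lower bound

stranded_gw = provisioned_gw * stranded_fraction
print(stranded_gw)  # at least ~31.5 GW of capacity sitting idle

usable_fraction = 0.60    # enterprise case: "lucky ... to get 60%"
stranded_per_mw = 1.0 - usable_fraction
print(stranded_per_mw)    # ~0.4 MW stranded for every 1 MW bought
```

On these numbers, recovering even part of the stranded capacity is equivalent to building tens of gigawatts of new supply.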
Conclusion
Generative AI is unstoppable, and attempts to slow it are unrealistic, given its rapid spread and popularity. But it can be significantly more efficient than it is today, and this is where economics will drive the industry. What’s clear, though, is there is no single solution for making this happen. It will be a combination of factors, from more efficient processing to better AI models that can achieve sufficiently accurate results using less power, and utilizing the power that is available today more effectively.


 
  • Like
  • Fire
  • Love
Reactions: 17 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
  • Like
  • Love
  • Fire
Reactions: 26 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I don't think this has been posted previously. Mentions the use of the SpiNNaker2 board.


PAL Robotics’ Kangaroo Biped Robot Joins Project PRIMI​

Humanoid Robotics Technology
7 months ago

In our rapidly changing world, there’s a growing demand for robots that are more intuitive and interactive. The EU Project PRIMI strives to meet this demand by creating robots capable of understanding and interacting with humans more effectively. This initiative aims to enhance the cognitive abilities of robots, enabling them to be more responsive and adaptable in social settings.

Introducing the EU Project PRIMI​

The EU Project PRIMI focuses on integrating advanced neuromorphic technologies into robotic systems. The project brings together experts from various fields to enhance the theoretical and practical aspects of robotics, taking significant steps towards robots that can operate alongside humans in complex environments.
kangaroo-pal-robotics-primi-project-2-1024x651.jpg

Kangaroo robot’s debut in Collaborative Projects​

Kangaroo, the latest biped robot from PAL Robotics, is making its debut in the world of collaborative projects as part of the PRIMI initiative. Despite being new to the scene, Kangaroo is equipped with advanced neuromorphic computing and sensing capabilities, making it an ideal candidate for this high-profile project.

PAL Robotics role in PRIMI: Work Packages and Goals​

PAL Robotics participates in the PRIMI project by taking on specific roles aimed at advancing robotic capabilities. Their tasks focus on integrating and developing key technologies that enhance the interaction between humans and robots.
In the realm of cognitive robotics, the focus lies on enhancing the social interaction capabilities of the Kangaroo robot through integration with ROS4HRI (Robot Operating System for Human-Robot Interaction) and alongside the iCub robot. This initiative includes the development of abstract reasoning and a Theory of Mind, pivotal for comprehending and predicting human actions.
The engineering and development process of the Kangaroo robot employs sophisticated design principles with a focus on neuromorphic technologies. This involves integrating Whole-Body Control with a cognitive architecture leveraging neuromorphic models, thereby enabling a new realm of interaction capabilities in humanoid robots.
Additionally, co-design efforts extend to the neuromorphic computing infrastructure, with the development of a Sensorimotor Board based on the innovative SpiNNaker2 Board. This infrastructure enhances the robot’s processing capabilities, crucial for seamless hardware and software integration. Kangaroo will incorporate event-based cameras and bio-inspired vision sensors, offering advantages like high dynamic range and minimal motion blur. However, this necessitates the development of new algorithms tailored to exploit the sensor’s unique properties.

Looking ahead, PRIMI aims to integrate these technologies through iterative prototypes and laboratory demonstrators, with a focus on refining interaction and cooperation abilities in dynamic settings. Clinical pilot studies involving neuromorphic humanoid robots like Kangaroo will validate prototypes in robot-led physical rehabilitation of stroke survivors.
The outcomes of the PRIMI project are poised to set new standards in interactive robotics, fostering enhanced efficiency in human-robot collaborations, improved safety in shared environments, and groundbreaking contributions to cognitive robotics.
PAL Robotics is deeply engaged in collaborative projects spanning healthcare, Ambient Assisted Living, smart cities, and more. For further insights into PAL Robotics and their involvement in collaborative initiatives, visit the PAL Robotics website and feel free to reach out with any inquiries.

 
  • Like
  • Love
  • Fire
Reactions: 11 users

Diogenese

Top 20
I don't think this has been posted previously. Mentions the use of the SpiNNaker2 board.

Hi Bravo,

This arXiv paper discusses SpiNNaker2. It uses the Arm Cortex-M4F. Arm has in-house AI in Helium and Ethos.

https://arxiv.org/pdf/2103.08392

1731732692221.png


The second generation SpiNNaker2 scales down technology from 130nm CMOS to 22nm FDSOI CMOS [5], while also introducing a number of new features. Adaptive body biasing (ABB) in this 22nm FDSOI process node delivers cutting-edge power consumption [6]. With dynamic voltage and frequency scaling, the energy consumption of the PEs scales with the spiking activity computed on the cores [7], [8]. The Arm Cortex-M4 cores employed for SpiNNaker2 integrate a single-precision floating point unit, thus extending the fixed-point arithmetic of the first generation SpiNNaker. Computation-wise, SpiNNaker2 retains the processor-based flexibility of the first generation system [9], while adding additional numerical accelerators to speed up common operations [10]–[12]. In the current prototype described in this paper, another accelerator has been added, a 16 by 4 array of 8 bit multiply-accumulate units (MAC), enabling faster 2D convolution and matrix-matrix multiplication [13].
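The 16-by-4 array of 8-bit multiply-accumulate (MAC) units described in the excerpt exists to speed up matrix-matrix multiplication and 2D convolution. A toy software model of a MAC-based matrix multiply (purely illustrative; it does not model the actual SpiNNaker2 array, its 8-bit datapath, or its dataflow):

```python
# Toy software model of a multiply-accumulate (MAC) based matrix
# multiply, the operation the SpiNNaker2 accelerator array speeds up.
# Illustrates the arithmetic only; the real hardware performs many
# MAC operations in parallel across its 16x4 array.
def mac_matmul(a, b):
    """Multiply matrices a and b using explicit MAC steps."""
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0  # hardware would use a wide accumulator here
            for k in range(inner):
                acc += a[i][k] * b[k][j]  # one MAC operation
            out[i][j] = acc
    return out

print(mac_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```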
 
  • Like
  • Sad
  • Fire
Reactions: 6 users

Frangipani

Regular
A paper showing AKIDA 1000 being integrated into an on-board processor to help detect fugitive methane emissions from ageing oil and gas infrastructure, enabling operators to locate and mitigate these leaks in order to help address the global climate crisis. The processor works in conjunction with NASA’s core Flight System (cFS).
View attachment 72916


EXTRACT FROM PAGE 6

.1 BrainSat neuromorphic processor for CubeSats
One of the objectives of our work was to propose an algorithm that could be executed on the novel neuromorphic on-board processor (OBP) developed by BrainSat [12], demonstrating its capability to serve small satellite mission needs and showing the potential of AI-based edge computing. The OBP was specifically designed according to mission requirements established for the monitoring of point-source methane emissions with a 6U CubeSat, as detailed in [13].

The designed OBP includes two PC104 modules, connected through a mezzanine connector. It integrates both CPU and FPGA capabilities and can cater for on-board computer functions, payload data processing and downlink management. The OBP is equipped with the Akida 1000 neuromorphic processor, selected for its design maturity, real-time processing capabilities and flight heritage. The chip is tailored for event-based processing, featuring 80 Neuromorphic Processing Units (NPUs) with 100 KB of SRAM each, supporting up to 1.2 million virtual neurons and 10 billion virtual synapses with up to 8-bit precision, ideal for inference-heavy tasks.

An FPGA provides glue logic to implement necessary data protocols and serve as a soft CPU for OBC functions. Additionally, a 12 GB flight-proven Micron Solid State Device (SSD) is included to provide non-volatile on-board memory. This sub-system is constrained to a 0.5 U volume and is estimated to consume a low power of less than 4 W.

The processor’s software architecture was designed to minimize resource consumption. It is equipped with Real-Time Executive for Multiprocessor Systems (RTEMS) as its Real-Time Operating System (RTOS), on top of which NASA’s core Flight System (cFS) is used. cFS provides a flight-proven product on which OBP functions are built using open-source applications. For more information about the BrainSat architecture, refer to our co-published proceeding [12].
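The Akida 1000 figures in the extract permit a quick capacity sanity check (the spec numbers are from the paper; the back-of-envelope interpretation is my own):

```python
# Back-of-envelope on the Akida 1000 figures quoted in the extract.
npus = 80
sram_per_npu_kb = 100
total_sram_mb = npus * sram_per_npu_kb / 1024
print(total_sram_mb)  # 7.8125 MB of on-chip SRAM in total

# 10 billion virtual synapses at up to 8-bit (one byte) precision would
# need ~10 GB of weight storage, far beyond on-chip SRAM -- consistent
# with "virtual" (time-multiplexed) resources and the 12 GB SSD above.
virtual_synapses = 10e9
weight_storage_gb = virtual_synapses * 1 / 1e9
print(weight_storage_gb)  # 10.0
```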




To save some time for those of you who would like to find out more about the above-mentioned co-publication on the BrainSat hardware architecture by the seven other SGAC Small Satellite Group members the authors collaborated with - you can just click on the three posts of mine referred to in the post below

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-439236

as I had already covered both papers a month ago:

03809C04-5BD3-4E8E-AF05-3FE7BA135D3C.jpeg

57BEEFBE-7B12-4BFB-852F-2B5B592112DB.jpeg

FCF1C438-3D8E-40C0-8D2D-DFE359B01EC8.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 16 users

Mt09

Regular
To save those of you some time who would like to find out more about the above mentioned co-publication on the BrainSat hardware architecture by seven other SGAC Small Satellite Group members the authors collaborated with - you can just click on the three posts of mine I referred back to in below post
Congrats.
 
  • Haha
  • Like
Reactions: 11 users

Earlyrelease

Regular
Perth crew.
Yip, it is that time of the year again for the obligatory Xmas drinks.
Wednesday 11 December is booked for the 4pm-4.30pm start. I need numbers, as they now take my credit card and charge $500 if I don't get the numbers right (and can't wing it like normal), so just PM me if you are interested. Let us pray that, given the big break between drinks, we can do as we did for the $1 party, when on the day (19 Jan 2022) it was a $2 party.

BRN 1st.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 29 users

Frangipani

Regular
Narendhiran Saravanane (https://naren200.github.io) - whose name didn’t pop up when I searched for it here on TSE - was a summer intern with BrainChip last year (from May to August 2023), while enrolled for a Master’s program at Arizona State University, which has been part of our company’s University AI Accelerator Program since its inauguration in September 2022.

After graduating in April of this year with an “MS in Robotics and Autonomous Systems (Honors)”, he worked for a company called Padma AgRobotics for six months - so until very recently (his CV is not quite as up-to-date as his LinkedIn and GitHub profiles in that respect).

D19555BA-8B23-46E3-B36A-9680102E982F.jpeg


AFBCE590-0AAC-476F-BAB1-B1230923DFC9.jpeg




51A06E5F-9828-43E1-9F42-B8F2EE84AB4D.jpeg


48164DF8-4C75-48D4-98E2-C9439853BC6B.jpeg


Padma AgRobotics is a five-year-old startup based in Chandler, AZ, and very much intertwined with Arizona State University (ASU): not only is its founder, Raghu Nandivada, an ASU alumnus, but so is robotics engineer Cole Brauer, who joined the then one-month-old company in January 2020, left the startup only in July of this year, and is now self-employed.

In May 2020, Padma AgRobotics won US$15,000 for its invention of weed-killing robots in the ASU-backed Sarsam Family Venture Challenge, and earlier this year Raghu Nandivada was featured in a fireside chat with Cultivate PHX, an AgriFood Tech Incubator within the J. Orin Edson Entrepreneurship + Innovation Institute at Arizona State University, which is “set to award an impressive $300,000 in seed grant funding to ventures demonstrating technological innovation and delivering benefits to the broader Phoenix area. Funding opportunities are available for ventures addressing key areas within the full lifecycle of food and advanced ventures seeking to pilot new technology.”


https://entrepreneurship.asu.edu/programs/cultivate-phx-agrifood-tech-incubator/ (where Raghu Nandivada gets quoted)

As you can see, Padma AgRobotics has lots of connections to Arizona State University.


E8D3D2A6-2FCE-4CCD-B0C8-C36CF94EC28F.jpeg


“Gimme an R!” 😆 👆🏻


FB361536-9D21-49F3-9C90-E2C5F392A01D.jpeg




0C628402-39F8-4CFD-9438-5F0E88DEBD07.jpeg





A01076AA-22AA-47F7-B813-08FA327287EF.jpeg


Could Padma AgRobotics possibly be engaged with us?

Remember the AkidaNet model published in a November 2022 paper by Vi Nguyen Thanh Le, Kevin Tsiknos, Kristofor Carlson (all BrainChip) and Selam Ahderom (Edith Cowan University), first referred to in posts by @thelittleshort and @Fullmoonfever?


Don’t think anyone has gone down this rabbit hole yet? I looked into the DSTG Women in STEM Award - specifically what her paper was about

I couldn’t actually find the paper that won the award - ‘An energy-efficient AkidaNet for morphologically similar weeds and crops recognition at the Edge’ (co-authors Kevin Tsiknos, Kristofor Carlson, Selam Ahderom) - but another one that led to the outputs below
Just googling for dots and this presso / paper I posted about back in Jan popped up.

Couldn't find it back then but it's HERE if anyone wanted a read.

@Diogenese thoughts whenever if you have time or anything of interest in it?

TIA


While I couldn’t find any specific mention of neuromorphic technology anywhere on their website or in either of their two SBIR grant applications for another precision agriculture project, namely “an autonomous harvester for cilantro with bunching and tying capability” (the SBIR Phase I ran from 24 April 2023 to 29 February 2024 > US$181,500; the ongoing SBIR Phase II runs from 1 September 2024 to 31 August 2026 > US$650,000), the multiple ASU connections, the fact that a 2023 BrainChip summer intern verifiably worked with them for six months earlier this year, and my recollection of a July 2023 interview with Nandan Nayampally all made me wonder whether Padma AgRobotics could be a valid dot-join. 🤔

In said interview (watch from around 18 min), which I believe was first posted here on TSE by @TECH (https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-323887), Nandan referred to the long-term benefits of scaling your business offerings to a broader clientele. His example: rather than trying to sell units of (already existing) premium lawnmowers with very expensive and bulky weed-control systems, preferably employ a consumer-friendly-priced service model for compact lawnmowers with weed-killing capabilities utilising Akida technology (a hypothetical example only?). (“Today, the economy is moving from a device economy to a service economy.”)

Now guess what the current business model of Padma AgRobotics looks like?



EC7E760E-924D-4EF0-B322-ACBD0DB828A8.jpeg




[To be continued in another post due to the upload limit of 10 files…]
 
Reactions: 27 users

Frangipani

Regular
While all of this sounded really promising (although it didn’t exactly evoke massive revenue 🤪), I found out that another company, Atlanta-based R2 Labs, was also involved: it helped Padma AgRobotics develop the AI-powered agricultural robot and built the solution on the Arduino Pro Portenta H7 platform:


88B64E76-E2B5-4E1C-91DF-5D5AFBDE3C9E.jpeg


2CE51EC7-341E-4ADA-8F66-E9871E55B1E3.jpeg

9FE0AA00-D93A-45B5-927F-15789A1DBE03.jpeg

054F42B9-F29E-45E1-8A0C-04852366EBF9.jpeg



Now here comes the question for the more tech-savvy: Would it still hypothetically be possible that we are involved?

I did find two references online indicating that the Portenta X8 now has support for the Akida PCIe module (one by YouTuber Chris Méndez from the Dominican Republic, who works for Arduino and has uploaded several videos featuring the Akida), but couldn’t find anything regarding the Portenta H7…


23841899-BB65-45F6-8971-3098E537D5ED.jpeg



9C588F4C-1138-4BCD-803A-31C47A0A5263.jpeg



And since I’m in the mood to max out my file uploads again, here are Padma AgRobotics’ SBIR Phase I and II applications, in case anyone would like to use these to fuel or end this speculation (note that they refer to a cilantro harvester, though, not to the weed-control AI-powered agricultural robot featured on their website and in the January 2023 video):

8C64FF6D-B330-4342-84FA-03C5A9F9CB33.jpeg

F611A596-30AF-467C-BBFA-C9722BE657E4.jpeg

3323A4BD-6FF0-4A80-A197-71BE52A1E16D.jpeg

1642AA12-1B32-4838-B4B6-88A854EF1A8C.jpeg
 

Reactions: 35 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Texas Instruments integrating an NPU as an edge AI accelerator.



TI Integrates Edge AI and Real-Time Control in New Mission-Critical MCUs​

8 hours ago by Duane Benson

Today, TI unveiled new members of the C2000 processor family—one with hardware-accelerated AI and the other with a 64-bit core for real-time control.​


Today at Electronica 2024, Texas Instruments (TI) announced its new TMS320F28P55x series of C2000 MCUs. TI calls the series the industry’s first real-time microcontrollers with an integrated neural processing unit (NPU). Along with that announcement, TI also revealed the new 64-bit F29H85x series of MCUs built around its C29 digital signal processing (DSP) core. The C29 MCUs target automotive applications that require fault-tolerant, low-latency operations and predictive decision-making. Both sets of new MCUs can serve mission-critical applications that require low-latency, real-time detection, calculation, and response.

A Brief Introduction to the Two New MCU Series​

The 40+ variants of the TMS320F28P55x series (datasheet linked) come with an integrated hardware NPU and up to 1.1 MB of on-chip flash memory. The series also features 24 pulse width modulation (PWM) channels and 39 analog-to-digital converter (ADC) channels.

TMS320F28P55x and F29H85x MCUs

TMS320F28P55x and F29H85x MCUs. Image (modified) used courtesy of Texas Instruments

The second series of MCUs in this announcement, the F29H85x, provides motor and power control with two to three times the signal chain performance of its predecessors. TI also claims these devices deliver five-times-faster fast Fourier transform (FFT) performance in diagnostics, tuning, and arc detection. Real-time interrupts run four times faster, and the MCUs reportedly execute general-purpose code two to three times faster. The series also includes an isolated hardware security module.

The TMS320F28P55x Series: Aided by a Powerful NPU​

For over 25 years, TI's C2000 family has provided real-time control in industrial and automotive applications. The newest additions to the family, the TMS320F28P55x series, integrate an NPU as an edge AI hardware accelerator. The NPU lets the MCU offload AI processing from the primary core, enabling advanced AI-based decision-making without loading down the primary processing core and dramatically increasing real-time performance.

Functional block diagram of the TMS320F28P55x

Functional block diagram of the TMS320F28P55x. Image used courtesy of Texas Instruments

Conventional microcontrollers use simple logic to make real-time decisions. They use combinations of “if-then” or state machines to evaluate conditions and essentially make Boolean logic decisions based on given sets of input conditions. When sensor input is obvious and accurate, this type of system can work quite well. However, with more sensors providing input and with fast-changing conditions, ambiguity or sensor lag can lead to invalid input conditions or improper results. With today’s stringent safety and efficiency requirements, Boolean logic is insufficient for many requirements. That’s where edge AI can deliver significant improvements.
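To make the article's contrast concrete, here's a minimal, hypothetical sketch (not TI code; all thresholds and values are invented) of the hard-threshold "if-then" logic it describes, and how an ambiguous fault can slip under every individual rule:

```python
# Hypothetical illustration of the Boolean "if-then" fault logic the
# article contrasts with NPU-based inference. Thresholds are invented.

def boolean_fault_check(current_a: float, temp_c: float) -> bool:
    """Classic rule-based check: each condition is a hard threshold."""
    OVERCURRENT_A = 15.0
    OVERTEMP_C = 90.0
    return current_a > OVERCURRENT_A or temp_c > OVERTEMP_C

# Clear-cut inputs work fine...
print(boolean_fault_check(20.0, 40.0))  # True  (obvious overcurrent)
print(boolean_fault_check(5.0, 40.0))   # False (obviously healthy)

# ...but an intermittent fault can sit just under every threshold on
# each individual sample, so the rule never fires:
samples = [(14.8, 88.0), (14.9, 89.5), (14.7, 89.0)]
print(any(boolean_fault_check(i, t) for i, t in samples))  # False
```

A learned model, by contrast, can weigh the joint pattern across sensors and time, which is exactly the gap the article says edge AI fills.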

Adding NPU-based AI enables greater one-chip functionality

Adding NPU-based AI enables greater one-chip functionality with improved accuracy. Image used courtesy of Texas Instruments

TI notes that NPU capabilities will benefit applications like arc fault detection in solar and energy storage systems and motor-bearing fault detection for predictive maintenance. In both cases, conventional MCU code can misconstrue such faults, misidentifying them or not identifying them soon enough. The NPU allows the MCU to perform more advanced AI-style interpretation of sensor inputs in real time.
The TMS320F28P55x's NPU can also be trained to adapt to different environments with different sensor inputs, greatly increasing detection accuracy. It can run convolutional neural network models to learn complex patterns from sensor data. The NPU offloads these calculations from the main CPU core and uses AI to detect complex fault conditions, which can result in a 5x to 10x decrease in latency for detection operations.
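As a rough illustration of what "learning complex patterns from sensor data" means at the lowest level, here is a toy NumPy sketch of the 1-D convolution a CNN layer applies to a sensor waveform. The kernel is hand-picked (not trained) and purely illustrative; a real model learns its kernels from labelled data and executes on the NPU rather than in Python:

```python
import numpy as np

# Toy sketch of the 1-D convolution at the heart of the CNN models the
# article mentions. The kernel below is hand-picked to respond to sharp
# transients (a crude fault-like signature) and ignore steady levels.

def conv1d_feature(signal: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 1-D convolution, as one CNN layer would compute it."""
    return np.convolve(signal, kernel, mode="valid")

kernel = np.array([-1.0, 2.0, -1.0])  # responds to spikes, cancels DC

smooth = np.ones(8)                                      # steady current
spiky = np.array([1, 1, 1, 6, 1, 1, 1, 1], dtype=float)  # transient

print(np.abs(conv1d_feature(smooth, kernel)).max())  # 0.0
print(np.abs(conv1d_feature(spiky, kernel)).max())   # 10.0
```

The steady signal produces zero activation while the transient lights up strongly, which is the kind of feature a stack of such layers combines into a fault classification.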


The F29H85x Series: Leveraging 64-bit DSP and Real-Time Control​

The F29H85x MCU uses TI’s new C29 DSP core to deliver more than double the real-time performance of its predecessor, the C28. The C29 core is built on TI’s very long instruction word (VLIW) architecture, which supports the execution of up to eight instructions per cycle. The MCUs offer cybersecurity features as well, including a fully isolated hardware security module to protect the system. Further, the hardware safety and security unit uses context-aware memory protection to extend hardware isolation to CPU tasks without interference. The architecture provides security without adding a performance penalty to the rest of the MCU.

Improved C29 DSP core

Improved C29 DSP core response receives, processes, and responds more than twice as fast. Image used courtesy of Texas Instruments

The 64-bit DSP with complex math ability can speed the signal chain performance for motor and power control by two to three times over the C28. It has five times the fast Fourier transform (FFT) performance. (FFT is used for systems diagnostics, tuning, and arc detection.) Interrupt response is four times faster than the C28 and general-purpose processing code can be executed two to three times faster.
TI engineered the chips to comply with the International Organization for Standardization (ISO) 26262 and International Electrotechnical Commission (IEC) 61508 automotive and industrial safety standards.
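For a sense of why FFT throughput matters in arc detection, here's a hedged, illustrative Python sketch (invented sample rate and cutoff, not TI's algorithm): arcing superimposes broadband high-frequency noise on the mains current, so the ratio of high-band to total spectral energy makes a simple detection feature that must be recomputed continuously in real time.

```python
import numpy as np

# Illustrative FFT-based arc detection feature. FS, the 1 kHz cutoff
# and the noise model are assumptions for the sketch, not TI values.

FS = 10_000  # sample rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / FS)

mains = np.sin(2 * np.pi * 50 * t)                  # clean 50 Hz current
rng = np.random.default_rng(0)
arcing = mains + 0.5 * rng.standard_normal(t.size)  # broadband arc noise

def high_band_ratio(signal: np.ndarray) -> float:
    """Fraction of spectral magnitude above 1 kHz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1 / FS)
    return spectrum[freqs > 1000].sum() / spectrum.sum()

print(high_band_ratio(mains) < 0.01)   # True: energy sits at 50 Hz
print(high_band_ratio(arcing) > 0.1)   # True: broadband noise present
```

Running this spectral check over every incoming window is exactly the repetitive transform work that the faster FFT hardware accelerates.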

C2000 real-time MCU F28P55x development kit

C2000 real-time MCU F28P55x development kit in TI LaunchPad form factor. Image used courtesy of Texas Instruments

The F29 processors are automotive safety integrity level (ASIL) D and safety integrity level (SIL) 3 certified. ASIL D is the highest of four automotive safety risk management levels. SIL is an industrial risk-mitigation measure used in a number of industry standards; IEC 61508 defines four SILs, with SIL 4 being the highest. Visit TI at its Electronica booth C4-158.



Here's a video on the above Texas Instruments edge-based AI solution with realtime fault prediction. All inference work is performed off-line. All of the neural network computation is off-loaded to the hardware accelerator.

This is a very interesting discussion!

Screenshot 2024-11-17 at 10.20.31 am.png


 
Reactions: 22 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Reactions: 32 users

rgupta

Regular
Here's a video on the above Texas Instruments edge-based AI solution with realtime fault prediction. All inference work is performed off-line. All of the neural network computation is off-loaded to the hardware accelerator.

This is a very interesting discussion!

View attachment 72971


Does this one have any hint of Akida in it?
And if there's no Akida here either, what would that mean?
DYOR
 
Reactions: 1 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I'm keeping an eye on Microchip. I believe it's only a matter of time before our technology is integrated in some way with PolarFire, as I've mentioned in previous posts.

Remember that Akida has already been integrated with Microchip's 32-bit processor.

Screenshot 2024-11-17 at 11.24.13 am.png


Microchip to Accelerate Real-time Edge AI with NVIDIA Holoscan​

PolarFire® FPGA Ethernet Sensor Bridge provides low-power multi-sensor bridging to NVIDIA edge AI platforms​

November 14, 2024 08:03 ET | Source: Microchip Technology Inc.

CHANDLER, Ariz., Nov. 14, 2024 (GLOBE NEWSWIRE) -- To enable developers building artificial intelligence (AI)-driven sensor processing systems, Microchip Technology (Nasdaq: MCHP) has released its PolarFire® FPGA Ethernet Sensor Bridge that works with the NVIDIA Holoscan sensor processing platform.
PolarFire FPGAs enable multi-protocol support, and this first solution to be released as part of Microchip’s platform is compatible with MIPI® CSI-2®-based sensors and the MIPI D-PHY℠ physical layer. Future solutions will support a wide range of sensors with different interfaces, including SLVS-EC™ 2.0, 12G SDI, CoaXPress® 2.0 and JESD204B. The platform allows designers to leverage the power of the NVIDIA Holoscan ecosystem while taking advantage of the PolarFire FPGA’s power-efficient technology with low-latency communication and multi-protocol sensor support.
NVIDIA Holoscan helps streamline the development and deployment of AI and high-performance computing (HPC) applications at the edge for real-time insights. It brings into a single platform the necessary hardware and software systems for low-latency sensor streaming and network connectivity. The platform includes optimized libraries for data processing, sample AI models for jump-starting AI inference pipeline development, template applications to facilitate rapid prototyping and core microservices to run streaming, imaging and other applications.
With its ability to bridge real-time sensor data to NVIDIA Holoscan and the NVIDIA IGX and NVIDIA Jetson platforms for edge AI and robotics, the PolarFire FPGA Ethernet Sensor Bridge unlocks new edge-to-cloud applications, enables AI/ML inferencing and facilitates the adoption of AI in the medical, industrial and automotive markets.
“The Ethernet sensor bridge is based on Microchip's highly power-efficient, secure and reliable PolarFire FPGA platform,” said Bruce Weyer, vice president of Microchip’s FPGA business unit. “By combining our flexible FPGA fabric with NVIDIA's advanced AI platform and multi-protocol support, we're empowering developers to create innovative, real-time solutions that will revolutionize sensor interfaces across a wide range of powerful AI-driven edge applications.”
By utilizing the low power consumption of Microchip’s PolarFire FPGA technology, the NVIDIA Holoscan Sensor Bridge efficiently manages high-bandwidth data from diverse sensors over Ethernet, enabling real-time, high-performance edge AI processing on NVIDIA AI platforms. The power-efficient design is also conducive for small-footprint and energy- or cost-sensitive applications.
PolarFire FPGAs address security concerns in sensor applications by providing embedded security and safety features to help protect against potential cyber threats and provide physical, device, design and data integrity. They are additionally designed with single event upset (SEU) immunity, making them highly reliable in environments subject to radiation, such as space or high-altitude applications and medical environments. The SEU immunity also helps reduce the risk of data corruption and system failures.
To learn more about how Microchip’s development tool supports NVIDIA Holoscan and other applications, visit the PolarFire FPGA Ethernet Sensor Bridge web page.

About Microchip Technology:
Microchip Technology Inc. is a leading provider of smart, connected and secure embedded control and processing solutions. Its easy-to-use development tools and comprehensive product portfolio enable customers to create optimal designs which reduce risk while lowering total system cost and time to market. The company’s solutions serve approximately 123,000 customers across the industrial, automotive, consumer, aerospace and defense, communications and computing markets. Headquartered in Chandler, Arizona, Microchip offers outstanding technical support along with dependable delivery and quality. For more information, visit the Microchip website at www.microchip.com.
 
Reactions: 25 users