BRN Discussion Ongoing

CHIPS

Regular
How can you be that sure

Don't expect too much, he is just joking. :ROFLMAO:
 
  • Like
  • Haha
Reactions: 2 users
How can you be that sure
Sometimes, you just have that feeling, you know..

 
  • Haha
  • Thinking
Reactions: 7 users

CHIPS

Regular
  • Haha
Reactions: 4 users

TheDrooben

Pretty Pretty Pretty Pretty Good
  • Haha
Reactions: 9 users

FiveBucks

Regular
IP announcement, next week for sure.

Well, it's a 1-in-5 chance. We should have a deal this year, right Seany boy?
 
  • Like
  • Fire
Reactions: 3 users
Oh no, we are dying. :oops: I found only 5 new posts to read this morning. 😭
Even @Bravo cannot run anymore. Sean, please help with an announcement.


That’s why I left the country, it was getting so boring here

 
  • Haha
  • Like
  • Love
Reactions: 7 users

Frangipani

Top 20
Can’t be long now before we find out more about Nikunj Kotecha’s Edge AI Stealth Startup, with below article about him published on TechBullion earlier today. Consider it blatant, over-the-top self-promotion or not, the piece also happens to be excellent publicity for his former employer… 😀
Intentionally or unintentionally.

(Although at the same time it begs the question - once again - why he is no longer with our company…)

ARTIFICIAL INTELLIGENCE

Nikunj Kotecha: “ML and AI are little-explored technologies with great potential, which we reveal every day”

By Miller Victor
Posted on November 15, 2024


By 2026, artificial intelligence is forecasted to generate up to 90% of all internet content!

This impressive statistic sparks conversations about the quality and diversity of generated content. Among the leading experts shaping this transformation is Nikunj Kotecha, a seasoned machine learning leader with ten years of experience in advanced AI solutions for global clients. Experts like him are currently engaged in “training” AI models and programming them using complex mathematical algorithms to optimize various business processes. These models help improve customer service, optimize internal processes, and achieve technological leadership in the market.

Nikunj holds certifications from Amazon Web Services (AWS) as an AI Practitioner and from DeepLearning.AI in Generative AI with Large Language Models (LLMs). His work focuses on developing efficient, secure, and privacy-oriented AI solutions for semiconductor accelerators at the edge. As a technical lead, he has successfully guided cross-functional teams, pushing the limits of Edge AI and neuromorphic computing.

During his time as a researcher at the Rochester Institute of Technology (RIT) from 2018 to 2020, Nikunj investigated innovative methods to enhance American Sign Language (ASL) video translation. By integrating multimodal features and developing Transformer networks, a first at the time, Nikunj improved translation accuracy by 10% as measured by BLEU score. His other work, in Bayesian inference for skin lesions, further advanced AI’s role in healthcare, developing models that confidently defer classification in cases of uncertainty, leading to a 5% accuracy boost.

From 2021 to 2023, Nikunj served as a Senior Solutions Architect at BrainChip Inc., an Australian company specializing in brain-inspired AI hardware. He led BrainChip’s technical team in securing a multi-year license agreement for its Akida AI accelerator Intellectual Property (IP) with MegaChips, a Japan-based global fabless semiconductor company. The multi-year license was valued in the millions, with a further $2 million forecast in royalties.

Nikunj’s technical expertise facilitated the development of the next-generation neuromorphic processor and an updated MetaTF Software Development Kit (SDK), publicly available for developers to build custom neuromorphic models. Together, these support the newer Transformer networks and features such as residual connections, 8-bit integer quantization, and post-training quantization. Another notable advancement under his expertise was the implementation of Temporal Event-Based Networks (TENNs), an innovative state space model used for denoising audio in hearing aids and earphone devices. TENNs demonstrated superior performance, achieving state-of-the-art results in audio clarity and noise suppression, measured by improvements in PESQ and STOI of 16% and 4% respectively on the Microsoft denoising challenge. Nikunj also developed industry models such as AkidaNet FOMO, optimizing object detection speed and reducing detection delay by 20%.
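The 8-bit post-training quantization mentioned above can be illustrated with a minimal, generic sketch in NumPy. This is a hypothetical affine-quantization example for illustration only, not MetaTF's actual API:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Affine post-training quantization of float weights to signed 8-bit."""
    scale = np.abs(weights).max() / 127.0  # map the largest magnitude to +/-127
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# reconstruction error is bounded by half a quantization step
assert np.abs(w - w_hat).max() <= s / 2 + 1e-6
```

The key trade-off post-training quantization makes is that no retraining is needed: weights are rescaled after the fact, at the cost of a bounded rounding error per weight.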

Nikunj Kotecha has made a groundbreaking contribution to the AI industry by creating BrainChip technology and launching the BrainChip University AI Accelerator Program. His work has revolutionized AI hardware at the Edge.
Its architecture centers on Neural Processing Units (NPUs) paired with dedicated Static Random Access Memory (SRAM), coupled together as a node. This unique design, with its neuromorphic processing, delivers low power, high efficiency, and a dedicated AI accelerator in an SoC compared to traditional deep learning accelerators. Recognizing the need to demonstrate these unique capabilities, Nikunj developed a benchmark framework showcasing BrainChip’s core capabilities and its efficiency in real-world AI applications. He further simplified this tool into a no-code version that allows developers to assess performance without needing deep technical expertise.
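At its core, a benchmark framework like the one described reduces to timing repeated inference calls and reporting latency statistics. A generic, hypothetical harness (not BrainChip's actual tool) might look like:

```python
import time
import statistics

def benchmark(infer, inputs, warmup=5, runs=50):
    """Time repeated calls to an inference function; report latency in ms."""
    for x in inputs[:warmup]:
        infer(x)  # warm caches before measuring
    times = []
    for _ in range(runs):
        for x in inputs:
            t0 = time.perf_counter()
            infer(x)
            times.append((time.perf_counter() - t0) * 1000)
    return {
        "mean_ms": statistics.mean(times),
        # statistics.quantiles with n=20 yields 19 cut points; index 18 is p95
        "p95_ms": statistics.quantiles(times, n=20)[18],
    }

# toy "model": a dot product standing in for real inference
stats = benchmark(lambda x: sum(v * v for v in x), [[0.1] * 64] * 10)
```

Reporting a high percentile alongside the mean matters for edge workloads, where worst-case latency often drives the design budget more than the average.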

With a deep understanding of Edge AI and application-specific integrated circuits (ASICs), Nikunj has actively spread awareness and learning of this technology. He led workshops such as “Bringing Development of BrainChip Akida Neuromorphic Models” at the Edge Impulse Imaging event, where he also participated as a guest speaker for the webinar “Neuromorphic Deep Dive into Next-Gen Edge AI Solutions using Edge Impulse”.

In addition to his technical achievements, Nikunj led the BrainChip University AI Accelerator Program, a global initiative that helps students learn about neuromorphic AI through hands-on projects and access to BrainChip technology. His lectures at top universities like Carnegie Mellon, Arizona State University and Cornell Tech have inspired a new generation of AI engineers, building a strong talent pool and expanding the reach of BrainChip’s technology.


Nikunj’s contributions have significantly advanced AI hardware and education, creating lasting impacts on the industry and fostering the next generation of AI professionals.

As an active member of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM), Nikunj frequently contributes to AI research. Currently serving as a peer reviewer for the 19th IEEE International Conference on Automatic Face and Gesture Recognition, he has also published in leading venues such as the 16th IEEE International Conference on Automatic Face and Gesture Recognition and a 2021 ACM journal. The AI expert also participates in independent research projects, such as the creation of a benchmark dataset called “Indic MMLU-Pro” for Indian languages, which supports the development of LLMs for those regions. His involvement in technical hackathons and competitions extends to judging and jury roles. He has evaluated projects in hackathons such as the Patient Journey Challenge, Galaxy One: 2024 Hackathon, MediHacks 2024, and AI for Change by Launchology. At competitions such as the Globee Awards for Business and the Globee Awards for Women in Business, he evaluated the achievements and innovations of participants and organizations.

Nikunj Kotecha is one of the best AI and ML specialists, with internationally certified qualifications! His extensive work across the research and commercial sectors has uniquely positioned him as a leader in cutting-edge AI technology. His contributions not only advance the AI field but also inspire future developments that will benefit industries worldwide. His ability to evaluate and drive innovations within the industry has a profound influence on the growth and responsible development of AI.

 
  • Like
  • Fire
  • Wow
Reactions: 35 users
Rather interesting read...esp Megachips...wonder if that's PM, PQ or PA...we wait :unsure:


 
  • Like
  • Fire
  • Love
Reactions: 23 users
Snap :LOL:

Was just reading and posting it too.
 
  • Like
  • Haha
Reactions: 13 users


Company logo

BrainChip Inc

CES 2025



Suite 29-312, Venetian Tower
CES 2025 is an opportunity to connect with existing partners or meet new ones. Thank you for the opportunity to connect. Once again, BrainChip will be Live @ CES 2025 with our podcast, where we will talk “All Things AI” with the industry's sharpest minds. Please reserve a little extra time to chat with us on the podcast.
 
  • Like
Reactions: 19 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 16 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
A paper showing AKIDA 1000 being integrated into an on-board processor to help detect fugitive methane emissions from ageing oil and gas infrastructure, enabling operators to locate and mitigate these leaks in order to help address the global climate crisis. The processor works in conjunction with NASA’s core Flight System (cFS).



EXTRACT FROM PAGE 6

.1 BrainSat neuromorphic processor for CubeSats

One of the objectives of our work was to propose an algorithm that could be executed on the novel neuromorphic on-board processor (OBP) developed by BrainSat [12], demonstrating its capability to serve small satellite mission needs and showing the potential of AI-based edge computing. The OBP was specifically designed according to mission requirements established for the monitoring of point-source methane emissions with a 6U CubeSat, as detailed in [13].

The designed OBP includes two PC104 modules, connected through a mezzanine connector. It integrates both CPU and FPGA capabilities and can cater for on-board computer functions, payload data processing and downlink management. The OBP is equipped with the Akida 1000 neuromorphic processor, selected for its design maturity, real-time processing capabilities and flight heritage. The chip is tailored for event-based processing, featuring 80 Neuromorphic Processing Units (NPUs) with 100 KB of SRAM each, supporting up to 1.2 million virtual neurons and 10 billion virtual synapses with up to 8-bit precision, ideal for inference-heavy tasks. An FPGA provides glue logic to implement necessary data protocols and serve as a soft CPU for OBC functions. Additionally, a 12 GB flight-proven Micron Solid State Device (SSD) is included to provide non-volatile on-board memory. This sub-system is constrained to a 0.5U volume and is estimated to consume a low power of less than 4 W.

The processor’s software architecture was designed to minimize resource consumption. It is equipped with Real-Time Executive for Multiprocessor Systems (RTEMS) as its Real Time Operating System (RTOS), on top of which NASA’s core Flight System (cFS) is used. cFS provides a flight-proven product on which OBP functions are built using open-source applications. For more information about the BrainSat architecture, refer to our co-published proceeding [12].
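The headline figures in the extract are easy to cross-check with a few lines of arithmetic; the per-neuron synapse average below is my own back-of-envelope calculation from the quoted numbers, not a figure from the paper:

```python
# Akida 1000 figures as quoted in the paper extract
npus = 80
sram_per_npu_kb = 100
total_sram_kb = npus * sram_per_npu_kb  # 8,000 KB, i.e. ~8 MB of on-chip SRAM

virtual_neurons = 1_200_000
virtual_synapses = 10_000_000_000
# "virtual" reflects time-multiplexing: events are processed sequentially,
# so far more neurons/synapses are supported than there are physical NPUs
synapses_per_neuron = virtual_synapses / virtual_neurons
print(total_sram_kb, round(synapses_per_neuron))  # 8000 8333
```

So the quoted capacity works out to roughly 8,300 synapses per virtual neuron on average, all held within about 8 MB of distributed SRAM, which is consistent with the sub-4 W power budget the paper claims.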



 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 85 users

Guzzi62

Regular
Can’t be long now before we find out more about Nikunj Kotecha’s Edge AI Stealth Startup, with below article about him published on TechBullion earlier today. Consider it blatant, over-the-top self-promotion or not, the piece also happens to be excellent publicity for his former employer… 😀
Intentionally or unintentionally.

(Although at the same time it begs the question - once again - why he is no longer with our company…)

View attachment 72910

View attachment 72908
View attachment 72909



ARTIFICIAL INTELLIGENCE

Nikunj Kotecha: “ML and AI are little–explored technologies with great potential, which we reveal every day”​

34e5404ac3214bbbd73cc54f2d4ce695-80x80.jpg

ByMiller Victor
Posted on November 15, 2024
Nikunj-Kotecha.jpg


By 2026, artificial intelligence is forecasted to generate up to 90% of all internet content!

These impressive statistics spark conversations about the quality and diversity of the generated content. Among the leading experts shaping this transformation is Nikunj Kotecha, a seasoned Machine Learning Leader with ten years of experience in advanced AI solutions for global clients. Experts like him are currently engaged in “training” AI models and programming them using complex mathematical algorithms to optimize various business processes. These models help improve customer service, optimize internal processes, and achieve technological leadership in the market.

Nikunj holds certifications from Amazon Web Services (AWS) as an AI Practitioner expert and from DeepLearning.ai in Generative AI with large language models (LLMs). His work focuses on developing efficient, secure, and privacy oriented AI solutions for semiconductor accelerators at the Edge. As a Technical lead, he has successfully guided cross-functional teams, pushing the limits of Edge AI and Neuromorphic computing.

During his time as a researcher at the Rochester Institute of Technology (RIT) from 2018 to 2020, Nikunj investigated innovative methods to enhance American Sign Language (ASL) video translations. By integrating multimodal features and developing Transformer networks, first at the time, Nikunj improved translation accuracy by 10% measured by BLEU score. His other work in Bayesian inference for skin lesions further advanced AI’s role in healthcare, developing models that confidently defer classification in cases of uncertainty, leading to 5% accuracy boost.

From 2021 to 2023, Nikunj led as a Senior Solutions Architect at BrainChip Inc., an Australian company specializing in brain-inspired AI Hardware.
He led BrainChip technical team in securing a multi-year license agreement for its Intellectual Property (IP) of Akida AI accelerator with MegaChips, a japanese based global fabless semiconductor company. The multi-year licensing valued in millions and a $2 million forecast expected in royalties
.

Nikunj’s technical expertise facilitated the development of the next-generation Neuromorphic processor and an updated MetaTF Software Development Kit (SDK) publicly available for developers to build custom Neuromorphic models. Combined together, it supports the newer Transformer networks and features such as Residual connections, 8-bit Integer Quantization, and Post-Training Quantization. Another notable advancement under his expertise was the implementation of Temporal Event-Based Network (TENNs), an innovative state space model used for denoising audio in hearing aids and earphones devices. TENNs demonstrated superior performance, achieving state of the art results in audio clarity and noise suppression measured by improvements in PESQ and STOI of 16% and 4% respectively on the Microsoft denoising challenge. Nikunj also developed industry models such as Akidanet FOMO optimizing object detection speed and reducing detection delay by 20%.

Nikunj Kotecha has made a groundbreaking contribution to the AI industry by creating BrainChip technology and launching the BrainChip University AI Accelerator Program. His work has revolutionized AI hardware at the Edge.
Its architecture centers on Neural Processing Units (NPUs) paired with dedicated Static Random Access Memory (SRAM) coupled together as a Node. This unique design with its neuromorphic processing delivers low power, high efficiency, and a dedicated AI accelerator in an SoC compared to any traditional deep learning accelerators. Recognizing the need to demonstrate these unique capabilities, Nikunj developed a benchmark framework to demonstrate BrainChip’s core capabilities, showcasing its efficiency in real-world AI applications. He further simplified this tool into a no-code version that allows develops to assess performance without need deep technical expertise.

With a deep understanding of Edge AI and application-specific integrated circuits (ASICs), Nikunj actively spread awareness and learning of this technology. He led workshops such as “Bringing Development of BrainChip Akida Neuromorphic models at Edge Impulse Imaging event. There he also participated as a guest speaker for a webinar “Neuromorphic Deep Dive into Next-Gen Edge AI solutions using Edge Impulse”.

In addition to his technical achievements, Nikunj led the BrainChip University AI Accelerator Program, a global initiative that helps students learn about neuromorphic AI through hands-on projects and access to BrainChip technology. His lectures at top universities like Carnegie Mellon, Arizona State University and Cornell Tech have inspired a new generation of AI engineers, building a strong talent pool and expanding the reach of BrainChip’s technology.


Nikunj’s contributions have significantly advanced AI hardware and education, creating lasting impacts on the industry and fostering the next generation of AI professionals.

As an active member of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM), Nikunj frequently contributes to AI research. Currently serving as a peer reviewer for the 19th IEEE International Conference on Automatic Face and Gesture Recognition, he has also published articles in leading venues such as the 16th IEEE International Conference on Automatic Face and Gesture Recognition and a 2021 ACM journal. The AI expert also participates in independent research projects, such as the creation of a benchmark dataset called "Indic MMLU-Pro" for Indian languages, which supports the development of LLMs for those regions. His involvement in technical hackathons and competitions extends to judging and jury roles. He has evaluated projects by professionals in hackathons such as the Patient Journey Challenge, Galaxy One: 2024 Hackathon, Medihacks 2024, and AI for Change by Launchology. At competitions such as the Globee Awards for Business and the Globee Awards for Women in Business, he evaluated the achievements and innovations of participants and organizations.

Nikunj Kotecha is one of the best AI and ML specialists with internationally certified qualifications! His extensive work across research and commercial sectors has uniquely positioned him as a leader in cutting-edge AI technology. His contributions not only advance the AI field but also inspire future developments that will benefit industries worldwide. His ability to evaluate and drive innovations within the industry has a profound influence on the growth and responsible development of AI.

He seems to have been "the man" at BrainChip when he was there!!

Another notable advancement under his expertise was the implementation of TENNs, securing the MegaChips IP deal, and starting the university program.

Wow, wow, we want him back: Now!

LOL
 
  • Like
  • Haha
  • Fire
Reactions: 8 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Managing The Huge Power Demands Of AI Everywhere​


More efficient hardware, better planning, and better utilization of available power can help significantly.

November 14th, 2024 - By: Ann Mutschler

Before generative AI burst onto the scene, no one predicted how much energy would be needed to power AI systems. Those numbers are just starting to come into focus, and so is the urgency about how to sustain it all.
AI power demand is expected to surge 550% by 2026, from 8 TWh in 2024 to 52 TWh, before rising another 1,150% to 652 TWh by 2030. Commensurately, U.S. power grid planners have nearly doubled the estimated U.S. load growth forecast, from 2.6% to 4.7%, an increase of nearly 38 gigawatts through 2028, the equivalent of adding two more states the size of New York to the U.S. power grid in five years.
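The growth figures above can be sanity-checked with a few lines of plain arithmetic (this check is ours, not from the article):

```python
# Percentage growth between the article's annual consumption figures (TWh).
def pct_growth(start_twh: float, end_twh: float) -> float:
    return (end_twh - start_twh) / start_twh * 100

print(pct_growth(8, 52))    # 2024 -> 2026: 550.0 (%)
print(pct_growth(52, 652))  # 2026 -> 2030: ~1153.8 (%), reported as ~1,150%
```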
Microsoft and Google, meanwhile, report electricity consumption has surpassed the power usage of more than 100 countries, and Google’s latest report shows a 50% rise in greenhouse gas emissions from 2019 to 2023, partly due to data centers.
This has put the entire tech sector on a worrisome trajectory. The chip industry had been doing well in terms of the amount of power being consumed for computation, which was matched somewhat with efficiency gains. Until AI, there wasn’t the big push for so much more compute power as is seen today, and many report they were caught by surprise. This may be why there is so much research into alternatives to traditional power sources, even including nuclear power plants, which are now being planned, built, or recommissioned.
“AI models will continue to become larger and smarter, fueling the need for more compute, which increases demand for power as part of a virtuous cycle,” said Dermot O’Driscoll, vice president of product solutions in Arm’s Infrastructure Line of Business. “Finding ways to reduce the power requirements for these large data centers is paramount to achieving the societal breakthroughs and realizing the AI promise. Today’s data centers already consume lots of power. Globally, 460 terawatt-hours (TWh) of electricity are needed annually, which is the equivalent to the entire country of Germany.”
To fully harness the potential of AI, the industry must rethink compute architectures and designs, O’Driscoll says. But while many of the largest AI hyperscalers are using Arm cores to reduce power, that’s only part of the solution. AI searches need to deliver more reliable and targeted information for each query, and AI models themselves need to become more efficient.
“AI applications are driving unprecedented power demand,” said William Ruby, senior director of product management for power analysis products at Synopsys. “The International Energy Agency in its 2024 report indicated that a ChatGPT request consumes 10X of the amount of power consumed by a traditional Google search. We are seeing this play out for semiconductor ICs. Power consumption of SoCs for high-performance computing applications is now in the hundreds of watts, and in some cases exceeding a kilowatt.”
The rollout and rapid adoption of AI was as much of a surprise to the tech world as it was to the power utilities. Until a couple years ago, most people assumed AI was plodding along at the same pace it had been for decades.
“You could argue the internet back in the mid-to-late ’90s was a big life changing thing — one of those once-in-a-generation type technologies,” said Steven Woo, distinguished inventor and fellow at Rambus. “Smart phones are another one. But with AI the ramp is faster, and the potential is like the internet — and in some ways maybe even greater. With so many people experimenting, and with the user base being able to do more sophisticated things that need more power, the semiconductor industry is being asked to try and become more power-efficient. In a lot of ways these architectures are becoming more power efficient. It’s just that you’re still getting dwarfed by the increase in the amount of compute you want to do for more advanced AI. It’s one of those things where you just can’t keep up with the demand. You are making things more power-efficient, but it’s just not enough, so now we must find ways to get more power. The models are getting bigger. The calculations are more complex. The hardware is getting more sophisticated. So the key things that happen are that we’re getting more sophisticated as the model is getting bigger, more accurate, and all that. But a lot of it now is coming down to how we power all this stuff, and then how we cool it. Those are the big questions.”
AI and sustainability
Where will all the power come from? Do the engineering teams that write the training algorithms need to start being more power-aware?
“Sustainability is something that we have been addressing in the semiconductor industry for 20 years,” said Rich Goldman, director at Ansys. “There’s been awareness that we need low-power designs, and software to enable low-power designs. Today, it comes down to an issue of engineering ethics and morality. Do our customers care about it when they buy a chip or when they buy a training model? I don’t think they make their decisions based on that.”
What also comes into play is how engineers are rewarded, evaluated, and assessed. “Commitment to sustainability is typically not included on what they must put into the product, so they aren’t motivated, except by their own internal ethics and the company’s ethics towards that. It’s the age-old ethics versus dollars in business, and in general we know who wins that. It’s a huge issue. Maybe we should be teaching ethics in engineering in school, because they’re not going to stop making big, powerful LLMs and training on these huge data centers,” Goldman noted.
Still, it’s going to take huge numbers of processors to run AI models. “So you want to take your data centers and rip those CPUs out and put in GPUs that run millions of times more efficiently to get more compute power out of it,” he said. “And while you’re doing that, you’re increasing your power efficiency. It might seem counterintuitive, because GPUs take so much power, but per compute cycle it’s much, much less. Given that you have limited space in your data center — because you’re not going to add more space — you’re going to take out the inefficient processors and put in GPUs. This is a bit self-serving for NVIDIA, because they sell more GPUs that way, but it’s true. So even today, when we’re at Hopper H100s, H200s — and even though Blackwell is coming, which is 10 or 100 times better — people are buying the Hopper because it’s so much more efficient than what they have. In the meantime, they’re going to save more on power expense than they are in buying and replacing. Then, when Blackwell becomes available, they’ll replace the Hopper with Blackwell, and that’s sufficient for them in a dollar sense, which helps with the power issue. That’s the way we have to tackle it. We have to look at the dollars involved and make it attractive for people to expend less power based on the dollars that go to the bottom line for the company.”
Meeting the AI energy/power challenges
Meeting the current and upcoming energy and power demands from large-scale deployments of AI, creates three challenges. “One is how to deliver power,” said Woo. “There’s a lot of talk in the news about nuclear power, or newer ways of supplying nuclear power-class amounts of power. Two is how to deal with the thermals. All these systems are not just trying to become more powerful. They’re doing it in small spaces. You’re anticipating all this power, and you’ve got to figure out how to cool all of that. Three involves opportunities for co-design, making the hardware and the software work together to gain other efficiencies. You try to find ways to make better use of what the hardware is giving you through software. Then, on the semiconductor side of things, supplying power is really challenging, and one of the biggest things that’s going on right now in data centers is the move to a higher voltage supply of power.”
At the very least, product development teams must consider energy efficiency at initial stages of the development process.
“You cannot really address energy efficiency at the tail end of the process, because by then the architecture has been defined and many design decisions have already been made,” said Synopsys’ Ruby. “Energy efficiency in some sense is an equal opportunity challenge, where every stage in the development process can contribute to energy efficiency, with the understanding that earlier stages can have a bigger impact than later stages. Collectively, every seemingly small decision can have a profound impact on a chip’s overall power consumption.”
A ‘shift-left’ methodology, designing hardware and writing software simultaneously and early enough in the development process can have a profound effect on energy efficiency. “This includes decisions such as overall hardware architecture, hardware versus software partitioning, software and compiler optimizations, memory subsystem architecture, application of SoC level power management techniques such as dynamic voltage and frequency scaling (DVFS) – to name just a few,” he said. It also requires running realistic application workloads to understand the impact.
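One of the techniques listed above, DVFS, is easy to quantify with the standard first-order model for dynamic CMOS power, P = C * V^2 * f. This is textbook physics rather than a figure from the article, and the numbers below are illustrative only:

```python
# First-order dynamic power model for CMOS logic: P = C * V^2 * f.
# Component values here are illustrative, not measurements from the article.
def dynamic_power(c_farads: float, v_volts: float, f_hz: float) -> float:
    return c_farads * v_volts ** 2 * f_hz

nominal = dynamic_power(1e-9, 1.0, 2e9)  # 1 nF switched at 1.0 V, 2 GHz
scaled = dynamic_power(1e-9, 0.8, 1e9)   # scale down to 0.8 V and 1 GHz
print(round(scaled / nominal, 2))        # → 0.32, i.e. a ~68% dynamic-power cut
```

The quadratic dependence on voltage is why DVFS pays off so well: halving frequency alone halves power, but pairing it with even a 20% voltage drop roughly triples the saving.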
That’s only part of the problem. The mindset around sustainability also needs to change. “We should be thinking about it, but I don’t think the industry as a whole is doing that,” said Sharad Chole, chief scientist at Expedera. “It’s only about cost at the moment. It’s not about sustainability, unfortunately.”
But as generative AI models and algorithms become more stable, the costs can become more predictable. That includes how many data center resources will be required, and ultimately it can include how much power will be needed.
“Unlike previous iterations of model architectures, where architectures were changing and everyone had slightly different tweaks, the industry-recognized models for Gen AI have been stable for quite a long time,” Chole said. “The transformer architecture is the basis of everything. And there is innovation in terms of what support needs to be there for workloads, which is very useful.”
There is a good understanding of what needs to be optimized, as well, which needs to be balanced against the cost of retraining a model. “If it’s something like training a 4 billion- or 5 billion-parameter model, that’s going to take 30,000 GPUs three months,” Chole said. “It’s a huge cost to pay.”
Once those formulas are established, then it becomes possible to determine how much power will be needed to run the generative AI models when they’re implemented.
“OpenAI has said it can predict the performance of its model 3.5 and model 4 while projecting the scaling laws onto growth of the model versus the training dataset,” he explained. “That is very useful, because then the companies can plan that it’s going to take them 10 times more computation, or three times more data sets, to be able to get to the next generation accuracy improvement. These laws are still being used, and even though they were developed for a very small set of models, they can scale well in terms of the model insights into this. The closed-source companies that are developing the models — for example, OpenAI, Anthropic, and others are developing models that are not open — can optimize in a way that we don’t understand. They can optimize for both training as well as the deployment of the model, because they have better understanding of it. And because they’re investing billions of dollars into it, they must have better understanding of how it needs to be scaled. ‘In the next two years, this is how much funding I need to raise.’ It is very predictable. That allows users to say, ‘We are going to set this much compute. We’re going to need to build this many data centers, and this is how much power I’m going to need.’ It is planned quite well.”
Stranded power
A key aspect of managing the increasing power demands of large-scale AI involves data center design and utilization.
“The data center marketplace is extremely inefficient, and the inefficiency is a consequence of the split between the two market spaces of the building infrastructure and the EDA side where the applications run,” said Hassan Moezzi, founder of Future Facilities, which was acquired by Cadence in July 2022. “People talk about the power consumption and the disruption that it’s bringing to the marketplace. The AI equipment, like NVIDIA has, is far more power-hungry perhaps than the previous CPU-based products, and the equivalency is not there because no matter how much processing capability you throw at the marketplace, the market wants more. No matter how good and how efficiently you make your chips and technology, that’s not really where the power issue comes from. The power issue comes from the divide.”
According to Cato Digital, in 2021, 105 gigawatts of power was created for data centers, but well over 30% of that was never used, Moezzi said. “This is called stranded capacity. The data center is there to give you the power to run your applications. That’s the only reason you build these very expensive buildings and run them at huge costs. And the elephant in the room is the stranded capacity. However, if you speak to anybody in the data center business, especially on the infrastructural side, and you say, ‘stranded capacity,’ they all nod, and say they know about it. They don’t talk about it because they assume this is only about over-provisioning to safeguard risk. The truth is that some of it is over-provisioning deliberately, which is stranded capacity. But they do over-provisioning because they don’t know what’s going on inside the data center from a physics point of view. The 30%-plus statistic doesn’t do the situation justice in the enterprise marketplace, which is anybody who’s not hyperscale, since those companies are more efficient given their engineering orientation, and they take care of things. But the enterprises, the CoLos, the government data centers, they are far more inefficient. This means if you buy a megawatt of capacity — or you think you bought a megawatt — you will be lucky as an enterprise to get 60% of that. In other words, it’s more than 30%.”
This is important because a lot of people are jumping up and down about environmental impacts of data centers and the grids being tapped out. “But we’re saying you can slow this process down,” Moezzi said. “You can’t stop data centers being built, but you can slow it down by a huge margin by utilizing what you’ve already got as stranded capacity.”
Conclusion
Generative AI is unstoppable, and attempts to slow it are unrealistic, given its rapid spread and popularity. But it can be significantly more efficient than it is today, and this is where economics will drive the industry. What’s clear, though, is there is no single solution for making this happen. It will be a combination of factors, from more efficient processing to better AI models that can achieve sufficiently accurate results using less power, and utilizing the power that is available today more effectively.


 
  • Like
  • Fire
  • Love
Reactions: 17 users

Bravo

If ARM was an arm, BRN would be its biceps💪!


EXTRACT
Screenshot 2024-11-16 at 11.22.51 am.png





 
  • Like
  • Love
  • Fire
Reactions: 26 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I don't think this has been posted previously. Mentions the use of the SpiNNaker2 board.


PAL Robotics’ Kangaroo Biped Robot Joins Project PRIMI​

Humanoid Robotics Technology
7 months ago

In our rapidly changing world, there’s a growing demand for robots that are more intuitive and interactive. The EU Project PRIMI strives to meet this demand by creating robots capable of understanding and interacting with humans more effectively. This initiative aims to enhance the cognitive abilities of robots, enabling them to be more responsive and adaptable in social settings.

Introducing the EU Project PRIMI​

The EU Project PRIMI is focusing on the integration of advanced neuromorphic technologies into robotic systems. This project brings together experts from various fields to enhance the theoretical and practical aspects of robotics, making significant steps towards robots that can operate alongside humans in complex environments.
kangaroo-pal-robotics-primi-project-2-1024x651.jpg

Kangaroo robot’s debut in Collaborative Projects​

Kangaroo, the latest biped robot from PAL Robotics, is making its debut in the world of collaborative projects as part of the PRIMI Initiative. Despite being new to the scene, Kangaroo is equipped with advanced neuromorphic computing and sensing capabilities, making it an ideal candidate for this high-profile project.

PAL Robotics role in PRIMI: Work Packages and Goals​

PAL Robotics participates in the PRIMI project by taking on specific roles aimed at advancing robotic capabilities. Their tasks are focused on integrating and developing key technologies that enhance the interaction between humans and robots.
In the realm of cognitive robotics, the focus lies on enhancing the social interaction capabilities of the Kangaroo robot through integration with ROS4HRI (Robot Operating System for Human-Robot Interaction) and alongside the iCub robot. This initiative includes the development of abstract reasoning and a Theory of Mind, pivotal for comprehending and predicting human actions.
The engineering and development process of the Kangaroo robot employs sophisticated design principles with a focus on neuromorphic technologies. This involves integrating Whole-Body Control with a cognitive architecture leveraging neuromorphic models, thereby enabling a new realm of interaction capabilities in humanoid robots.
Additionally, co-design efforts extend to the neuromorphic computing infrastructure, with the development of a Sensorimotor Board based on the innovative SpiNNaker2 Board. This infrastructure enhances the robot’s processing capabilities, crucial for seamless hardware and software integration. Kangaroo will incorporate event-based cameras and bio-inspired vision sensors, offering advantages like high dynamic range and minimal motion blur. However, this necessitates the development of new algorithms tailored to exploit the sensor’s unique properties.

Looking ahead, PRIMI aims to integrate these technologies through iterative prototypes and laboratory demonstrators, with a focus on refining interaction and cooperation abilities in dynamic settings. Clinical pilot studies involving neuromorphic humanoid robots like Kangaroo will validate prototypes in robot-led physical rehabilitation of stroke survivors.
The outcomes of the PRIMI project are poised to set new standards in interactive robotics, fostering enhanced efficiency in human-robot collaborations, improved safety in shared environments, and groundbreaking contributions to cognitive robotics.
PAL Robotics is deeply engaged in collaborative projects spanning healthcare, Ambient Assisted Living, smart cities, and more. For further insights into PAL Robotics and their involvement in collaborative initiatives, visit the PAL Robotics website and feel free to reach out with any inquiries.

 
  • Like
  • Love
  • Fire
Reactions: 11 users

Diogenese

Top 20
I don't think this has been posted previously. Mentions the use of the SpiNNaker2 board.


PAL Robotics’ Kangaroo Biped Robot Joins Project PRIMI​


Hi Bravo,

This arXiv paper discusses SpiNNaker2. It uses ARM Cortex M4F. ARM have in-house AI in Helium and Ethos.

https://arxiv.org/pdf/2103.08392

1731732692221.png


The second generation SpiNNaker2 scales down technology from 130nm CMOS to 22nm FDSOI CMOS [5], while also introducing a number of new features. Adaptive body biasing (ABB) in this 22nm FDSOI process node delivers cutting-edge power consumption [6]. With dynamic voltage and frequency scaling, the energy consumption of the PEs scales with the spiking activity computed on the cores [7], [8]. The Arm Cortex-M4 cores employed for SpiNNaker2 integrate a single-precision floating point unit, thus extending the fixed-point arithmetic of the first generation SpiNNaker. Computation-wise, SpiNNaker2 retains the processor-based flexibility of the first generation system [9], while adding additional numerical accelerators to speed up common operations [10]–[12]. In the current prototype described in this paper, another accelerator has been added, a 16 by 4 array of 8 bit multiply-accumulate units (MAC), enabling faster 2D convolution and matrix-matrix multiplication [13].
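To make the quoted MAC-array point concrete: each 8-bit multiply-accumulate unit multiplies two 8-bit operands and adds the product into a wider accumulator, which is exactly the inner loop of matrix-matrix multiplication. A minimal Python sketch of that arithmetic (illustrative only, not SpiNNaker2's actual firmware or array layout):

```python
import numpy as np

def mac_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Int8 matrix product computed MAC-by-MAC, accumulating in int32.

    Mirrors what an 8-bit MAC array does: every step is one multiply of
    two 8-bit values plus an add into a wide accumulator.
    """
    assert a.dtype == np.int8 and b.dtype == np.int8
    assert a.shape[1] == b.shape[0]
    m, k = a.shape
    n = b.shape[1]
    acc = np.zeros((m, n), dtype=np.int32)
    for i in range(m):
        for j in range(n):
            for p in range(k):  # one MAC operation per iteration
                acc[i, j] += np.int32(a[i, p]) * np.int32(b[p, j])
    return acc

a = np.array([[1, -2], [3, 4]], dtype=np.int8)
b = np.array([[5, 6], [7, -8]], dtype=np.int8)
assert (mac_matmul(a, b) == a.astype(np.int32) @ b.astype(np.int32)).all()
```

The hardware win comes from doing many such MACs in parallel (16 by 4 in the quoted prototype) and from the narrow 8-bit operands, which shrink both memory traffic and multiplier area relative to floating point.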
 
  • Like
  • Sad
  • Fire
Reactions: 6 users

Frangipani

Top 20
A paper showing AKIDA 1000 being integrated into an on-board processor to help detect fugitive methane emissions from ageing oil and gas infrastructure, enabling operators to locate and mitigate these leaks in order to help address the global climate crisis. The processor works in conjunction with NASA’s core Flight System (cFS).
View attachment 72916


EXTRACT FROM PAGE 6

.1 BrainSat neuromorphic processor for CubeSats
One of the objectives of our work was to propose an algorithm that could be executed on the novel neuromorphic on-board processor (OBP) developed by BrainSat [12], demonstrating its capability to serve small satellite mission needs and showing the potential of AI-based edge computing. The OBP was specifically designed according to mission requirements established for the monitoring of point-source methane emissions with a 6U CubeSat, as detailed in [13].

The designed OBP includes two PC104 modules, connected through a mezzanine connector. It integrates both CPU and FPGA capabilities and can cater for on-board computer functions, payload data processing and downlink management. The OBP is equipped with the Akida 1000 neuromorphic processor, selected for its design maturity, real-time processing capabilities and flight heritage. The chip is tailored for event-based processing, featuring 80 Neuromorphic Processing Units (NPUs) with 100 KB of SRAM each, supporting up to 1.2 million virtual neurons and 10 billion virtual synapses with up to 8-bit precision, ideal for inference-heavy tasks.

An FPGA provides glue logic to implement necessary data protocols and serve as a soft CPU for OBC functions. Additionally, a 12 GB flight-proven Micron Solid State Device (SSD) is included to provide non-volatile on-board memory. This sub-system is constrained to a 0.5U volume and is estimated to consume a low power of less than 4 W.

The processor’s software architecture was designed to minimize resource consumption. It is equipped with Real-Time Executive for Multiprocessor Systems (RTEMS) as its Real Time Operating System (RTOS), on top of which NASA’s core Flight System (cFS) is used. cFS provides a flight-proven product on which OBP functions are built using open-source applications. For more information about the BrainSat architecture, refer to our co-published proceeding [12].
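The Akida 1000 figures quoted above imply a simple on-chip memory budget, which a two-line check makes explicit (plain arithmetic from the numbers in the excerpt):

```python
# On-chip SRAM implied by the quoted spec: 80 NPUs x 100 KB each.
npus = 80
sram_per_npu_kb = 100
total_kb = npus * sram_per_npu_kb
print(f"{total_kb} KB total, about {total_kb / 1024:.1f} MiB for model state")
```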




To save those of you some time who would like to find out more about the above mentioned co-publication on the BrainSat hardware architecture by seven other SGAC Small Satellite Group members the authors collaborated with - you can just click on the three posts of mine I referred back to in below post

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-439236

as I had already covered both papers a month ago:

03809C04-5BD3-4E8E-AF05-3FE7BA135D3C.jpeg

57BEEFBE-7B12-4BFB-852F-2B5B592112DB.jpeg

FCF1C438-3D8E-40C0-8D2D-DFE359B01EC8.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 16 users

Mt09

Regular
To save those of you some time who would like to find out more about the above mentioned co-publication on the BrainSat hardware architecture by seven other SGAC Small Satellite Group members the authors collaborated with - you can just click on the three posts of mine I referred back to in below post

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-439236

as I had already covered both papers a month ago:

View attachment 72943
View attachment 72944
View attachment 72945
Congrats.
 
  • Haha
  • Like
Reactions: 11 users

Earlyrelease

Regular
Perth crew.
Yip, it is that time of the year again for the obligatory xmas drinks.
Wednesday 11 December is booked for the 4pm-4.30pm start. I need numbers, as they now take my credit card and charge $500 if I don't get the numbers right (rather than winging it like normal), so just PM me if you are interested. Let us pray that, with the big break between drinks, we can do as we did for the $1 party, when on the day (19 Jan 2022) it became a $2 party.

BRN 1st.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 29 users
Top Bottom