BRN Discussion Ongoing

Esq.111

Fascinatingly Intuitive.
Chippers,

The fleeting interior view of this chariot is new, 🧠 🍟 inside????



Regards,
Esq.
 
  • Like
  • Fire
  • Haha
Reactions: 16 users
How does BRN fit in with the upcoming chiplets revolution? Anyone care to enlighten me?
Much appreciated.
 
  • Like
Reactions: 1 users
Apparently posted a day ago, according to Google.

Quite liked a little bit under Sales (highlighted).


Senior Machine Learning Engineer​

BrainChip, Inc.
Laguna Hills, CA 92653


SUMMARY:
A principal goal for the Sr. AI/ML Engineer is to bridge core engineering activities into commercialization efforts. This includes technically analyzing and building models, starting with the pipeline, through training and deployment to Akida Edge-based hardware. This is a hands-on position, working as a team member in resolving higher-level technical and customer relations issues. This position needs to be able to understand and maintain working technical knowledge of our client’s core technologies in order to provide full top-level technical support for the BrainChip model zoo. The Akida model zoo is targeted to support and solve problems for computer vision (classification, object detection), audio processing (keyword spotting), sensor fusion, and anomaly detection. You must be able to communicate effectively at the engineering level with internal engineers and management teams as well as external customers, and understand their objectives and needs in order to develop and articulate solutions that address customers’ requirements.

ESSENTIAL JOB DUTIES AND RESPONSIBILITIES:

ENGINEERING


  • Work with Sales and Engineering members to support customers and help develop technical solutions for BrainChip Akida Neuromorphic IP, the NPU System-on-Chip (NSoC) and software that enables full front-to-back solutions, from training to edge-based deployment.
  • Help define deliverables and work in tandem with the customer engagement and engineering teams to produce and deliver workflows, products and services for BrainChip’s current and future customers as well as ecosystem partners.
  • Work closely with Product, Sales, Engineering and Advanced Research to optimize efficiency and effectiveness in technology handoff for commercialization and delivery.
  • As a team member, gain fundamental knowledge related to machine learning functions and field deployments of NPU IP and use cases. Be a critical part of the customer engineering team, developing workflows and procedures for implementing models so that customers have an easy, well-documented process.
  • Working with the other AI/ML and HW/SW team members, clearly define the hardware and software tool roadmap with critical milestones and the dependencies needed to meet the milestones that are agreed upon.
MARKETING
  • Participate in strategy and execute continuous benchmarking activities to ensure market competitiveness; identify strengths/weaknesses; feedback to organization for improvement.
  • Along with Sales, lead the definition, design and launch of customer-centric evaluation boards and/or modules to highlight company value-add and differentiation.
  • Part of the team that performs basic application-layer software coding, debugging and optimization, from building models to edge deployment, specifically targeted towards edge-based designs with serious constraints on power and latency.
SALES
  • Train and deploy models based on public and custom datasets to implement a real-life demonstration system to correlate assumptions related to ease of use, performance, and applications.
  • Be the first line of backup for the sales and field technical team of system architects and AI/ML engineers.
  • Handle basic level of application advice.
  • In conjunction with the Product Marketing Team, deliver technical content for customer collateral (and consumption).
  • Serve as a catalyst for growth in the marketplace by representing BrainChip at technical conferences and events to stay up-to-date with the latest technologies and trends impacting the AI industry.
  • Working knowledge of design processes and methodology (i.e. ISO, automotive qualification).
  • Ensure all technical documentation is customer- friendly and consumable.
GENERAL
  • Develop service level standards focused on response times, issue resolution, service quality, and customer satisfaction.
  • Establish policies and procedures that produce high quality customer support and that reflect industry best practices.
  • Ensure systems are in place and are utilized to capture and report on service metrics, including any customer feedback or trends in product or service issues.
  • Manage resource decision-making, planning, and best practices.
  • Align technical customer support activities and initiatives to support and enhance the objectives of the organization.
  • Take ownership of unresolved technical issues, and bridge with Sales and Engineering teams to solve and/or develop solutions.
  • Manage customer technical relationships as required and work with internal resources to ensure a high level of customer satisfaction.
  • Communicate orally with customers, management, and other co-workers, both individually and in front of a group.

QUALIFICATIONS:

To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

Education/Experience:

  • B.S. in Computer Science, or EE; will consider experience in lieu of education.
  • Requires very high technical acumen and an understanding of machine learning and deep learning, with experience using Python and TensorFlow. Should have hands-on experience working as part of a small team across various matrixed functions spanning engineering, corporate applications and marketing.
  • Firsthand experience with object classification, object detection, face recognition and keyword spotting a plus, along with a deep understanding of other ML networks.
  • Ideally should come from an AI-focused company, either from an engineering development group or an application engineering group for an AI product.
  • First-hand knowledge of SDKs like CMSIS, Edge Impulse or other AI/ML deployment platforms for the edge.
  • Experience with the Git version control system.
  • Experience in CPUs, tools and methodologies, machine learning, and C++ programming.
  • PCB design, layout, and bring-up experience a plus.
  • Has been part of a small tiger team with strict deliverables and milestones.
  • Has been in a front-line customer support role.
  • Comfortable as a player-coach; this is a very small team and the member needs to spend 80% of their time doing and 20% contributing to high-level objectives.
 
  • Like
  • Fire
  • Love
Reactions: 48 users

Sirod69

bavarian girl ;-)
ANT61 on LinkedIn:

Intelligent robots are the key for a booming economy in space and back home.
Neuromorphic computers will redefine what is possible; that’s why we’ve partnered with BrainChip to bring this technology to space, with our Beacon that restores satellites after an anomaly in orbit and our Brain computer that is at the core of our in-orbit docking and refueling technology.

 
  • Like
  • Love
  • Fire
Reactions: 52 users

mrgds

Regular
Chippers,

The fleeting interior view of this chariot is new, 🧠 🍟 inside????



Regards,
Esq.

Geez.............. I hope they paid that dude plenty to sit inside that container for I don't know how long ............ :eek:
 
  • Haha
  • Like
Reactions: 8 users

RobjHunt

Regular
Heading off to Pomgolia in a couple of months and would like to have a little more spending money to visit our first little nipper, well, not so little anymore at 30. @Bravo, may you don those running shoes, you know, the ones with the new laces 😉🤭, and go for a wander?
Thank you 🇬🇧👋
 
  • Fire
  • Like
Reactions: 2 users
How does BRN fit in with the upcoming chiplets revolution? Anyone care to enlighten me?
Much appreciated.
A good question for Rob and Nandan over at Wavious 😬
 
  • Haha
  • Like
  • Thinking
Reactions: 6 users

GStocks123

Regular
  • Like
  • Fire
  • Love
Reactions: 27 users
Hi @GStocks123
My thoughts are that this position will be a massive asset to the company. With his/her skill set we’ll be paying top dollar, but you get what you pay for, so I'm hoping there's no whinging about allocating shares to pump up the salary.

Much needed and IMO could have got someone in for this earlier. This is very much a race for market share and if you snooze you lose.

On another note; hoping we’re involved in this with Teksun:






Cheers
 
  • Like
  • Fire
  • Love
Reactions: 37 users

Frangipani

Top 20
My Monday musings (feel free to skip if you are not into speculative dot joining):

Dr. Gregor Lenz (https://lenzgregor.com/) recently left SynSense (on good terms AFAICT from his below LinkedIn post) and co-founded a Toulouse-based start-up called Neurobus (https://neurobus.space, registered in mid-April 2023) together with serial entrepreneur Florian Corgnou, who is the new company’s CEO. Lenz is its CTO, and this is what he said about Neurobus in a LinkedIn post (see below) two weeks ago: “We’re going to build a strong European company that brings neuromorphic solutions to satellites and space!”

Dr. Simeon Bamford (https://sim.me.uk/#contact) is the start-up’s Director of Engineering & Board Member. Incidentally he was CTO of iniLabs, when Brainchip signed a joint development and marketing agreement with them way back in 2016. (On January 1, 2018, iniVation AG formally incorporated the Dynamic Vision Sensor (DVS) business and engineering teams of iniLabs GmbH.) He is currently a researcher in event-based perception for robotics at the Italian Institute of Technology in Genoa as well as a consultant for neuromorphic engineering start-ups.

Blast from the past:


Now, the question regarding Neurobus is of course: Friend or foe?

Obviously, with Gregor Lenz coming from SynSense (the other commercial application-focused iniLabs spin-off company, besides iniVation), one could readily assume that integrating some of SynSense’s neuromorphic tech (most likely Speck, the event-driven neuromorphic vision SoC) would be his first choice in developing neuromorphic solutions for space, but the fact that SynSense effectively became a Chinese company three years ago IMO would at present prove to be a rather unsurmountable obstacle to what Neurobus is aiming to achieve and to stay true to their self-proclaimed values: “Sustainable development, exchange between European countries, security for our citizens and democratic rights”. Especially in the sensitive field of defense, which is one of the application domains Neurobus lists alongside satellite communications and earth observation.

Also, how are they going to offer neuromorphic solutions for space without a commercially available neuromorphic chip/IP? 🤔 Will they just experiment with dev kits or research chips for the time being? The likes of NASA, ESA, AFRL or RAAF have been dealing directly with the relevant companies and academic institutions for their neuromorphic space experiments, so I’d suspect Neurobus is going to offer added value, a package solution so to say, which will save those customers, who do not have in-house development capabilities, time and money.

On the other hand: Does Neurobus’s value of being “European at heart” possibly signify that their neuromorphic tech is being entirely developed within Europe? Which would effectively rule out Akida? As well as Loihi? But even a potential future competitor of Brainchip such as GrAI Matter Labs, while headquartered in Europe, has an office in Silicon Valley, so would they qualify?

As for the types of event cameras that are going to be used in space, those by iniVation or by Prophesee come to mind. A European academic research group that recently modified one by iniVation for use in space is the team from DTU Space (Technical University of Denmark), whose THOR-DAVIS neuromorphic camera is currently being experimented with onboard the ISS. But wouldn’t they in turn found their own spin-off to commercialise their tech rather than licence it to a third party? 🧐

Another caveat: Gregor Lenz is also one of the co-founders of the Open Neuromorphic platform. I recall reading somewhere a while ago that some of the other members around Jason Eshraghian from UC Santa Cruz (also an Open Neuromorphic co-founder, who recently showed up in a post by @Fullmoonfever as one of the authors of a paper mistakenly listing Akida as being analog in a table surtitled “A benchmark of neuromorphic chips” - https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-338409) were dismissing Akida in their Discord chat - does anyone here happen to follow their discussions and could give us an update on whether their judgement on Akida has since changed? Or was it possibly just a case of sour grapes?

Also, could somebody with a LinkedIn account please do me a favour and scan the comments under Gregor Lenz’s and Florian Corgnou’s respective Neurobus birth announcements and check out whether or not any familiar names come up congratulating them on their baby? I am merely able to read the first couple of comments on GL’s LinkedIn page, but can neither access his co-founder’s LinkedIn page nor that of Simeon Bamford at all. Neither do I have a Twitter, pardon X account, so I can sadly no longer read any of those posts either, which could give some further hints of whether or not we could possibly be involved.

Any thoughts on Neurobus, especially from those of you tech-savvier than me, are of course very welcome…


What an intriguing group photo: our VP of Sales EMEA, Alf Kuchenbuch, arm in arm with Florian Corgnou and Gregor Lenz, the CEO and CTO of Neurobus respectively! 😉
I’ve noticed a couple of LinkedIn likes being exchanged in recent weeks, too.

I’d say: stay tuned… 😊




https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-415663

“Or was it Gregor Lenz, who left Synsense in mid-2023 to co-found Neurobus (“At Neurobus we’re harnessing the power of neuromorphic computing to transform space technology”) and is also one of the co-founders of the Open Neuromorphic community? He was one of the few live viewers of Cristian Axenie’s January 15 online presentation on the TinyML Vision Zero San Jose Competition (where his TH Nürnberg team, utilising Akida for their event-based visual motion detection and tracking of pedestrians, had come runner-up), and asked a number of intriguing questions about Akida during the live broadcast.”




By the way, Neurobus have had a couple of interesting job openings lately, which have been online for quite a while now (https://jobs.stationf.co/companies/neurobus):



to develop “a mobile EEG headband that can decode brain signals using low-power neuromorphic AI, to monitor mental states such as fatigue or stress”


(payload for satellites)


(Gregor Lenz is an expert in that field!)


(payload for satellites)
 
  • Like
  • Fire
  • Love
Reactions: 43 users

IloveLamp

Top 20
  • Like
  • Love
  • Fire
Reactions: 42 users

Frangipani

Top 20
Always interesting thoughts and dot joining by all on this forum and I’m going to chip in given the researching by many on TSE linked to NDA’s, partnerships, employee movements etc

Was it TSE forum members who hastened the departure of the two salesmen by highlighting Wavious and their names? Did BRN management then investigate to sort fact from fiction?

You would think, with both of them leaving, there would have been a non-compete clause in their contracts, but maybe BRN management didn’t think this was necessary when setting up their conditions of employment etc.

Also, did the BRN BoD know that key sales, revenue etc. are coming anyway by end of 2024 / early 2025, so they called Rob’s and Nandan’s bluff on leaving, and the rest is history? You can always leave on good terms regardless of how it ended up.

The above can and does happen at many companies, as does the way they find out employees are looking elsewhere etc. Sometimes it works out and sometimes it doesn’t when management have to make a call (if they have a choice).

Pure speculation on what happened, but as Tech has said, he has moved on to another challenge, whether that is at Wavious or elsewhere. As always, DYOR.

Have a great week !!

It does make you wonder. TSE members discover two key sales staff are (possibly) moonlighting on BRN or its partners. A company spokesperson denies it, and weeks later they both leave under a cloud of suspicion. Was management alerted to the alleged moonlighting by TSExers, and did that pull the trigger on their employment? One wonders. If anyone knows, his holiness would know, and I don’t mean Tech 😂. He’s just a naughty boy 😁
A good question for Rob and Nandan over at Wavious 😬

Hi The Pope and TheUnfairAdvantage,

it appears Wavious has been out of business since 2022 and the company’s former employees have since moved on. Intriguingly, quite a few of them ended up at Apple…





Rob Telson does have a connection to Wavious, though, but he wasn’t moonlighting - the screenshot of his LinkedIn bio taken on March 26, the day the rumour first surfaced, is evidence of his role as Strategic Advisor to Wavious from 2017-2022, concurrent with his respective full-time jobs. So his link to Wavious had never been a secret at BrainChip, it seems…


https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-417136



https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-417147



https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-417154


I highly suspect something is fishy with that zoominfo Wavious Employee Directory, which started the rumour, even though I am afraid I can’t explain how the names of several other (ex-) BrainChip staff and that of Mauro Diamant ended up in that directory as well. In Rob Telson’s case, there is at least a proven link to Wavious, while the company was still in business.



https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-417060




https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-417069



Regards

Frangipani
 

  • Like
  • Fire
  • Thinking
Reactions: 17 users
Nice to see Porsche across the possibilities of neuromorphic.

That'd be a nice hook up too

HERE


Federico Magno has been a Member of the Executive Board of Porsche Consulting since 2017. He heads the company’s Operations, Brand & Sales and Development & Technology units. He also leads the Automotive, Aerospace and Transportation units. Federico Magno studied economics at Bocconi University in Milan.

Excerpt from recent Porsche Engineering Magazine.

—MAGNO: Over the long term, additional big leaps in innovation are already in sight. Neuromorphic systems that mimic the structures and functions of the human brain could play a key role in the ongoing development of AI technologies in the future. These chip-based systems could be more efficient and faster than conventional processors, especially when processing sensory data. They could make decisions in real time—thereby rivaling even the decisiveness of humans. Systems of this nature are designed to better process complex and unstructured data of the kind found in the real world. That will shape the future of mobility, and not just in the form of fully autonomous, level 5 vehicles, but also in the way mobility will be organized and the new business models that will emerge as a result. At Porsche Consulting, we already use AI in a wide range of areas such as sales, development, production, and even marketing. And we know that it has the power to revolutionize business processes—at least if it’s used right. And that, too, will take a certain amount of courage.
 
  • Like
  • Fire
  • Love
Reactions: 22 users
Hi The Pope and TheUnfairAdvantage,

it appears Wavious has been out of business since 2022 and the company’s former employees have since moved on. Intriguingly, quite a few of them ended up at Apple…




Rob Telson does have a connection to Wavious, though, but he wasn’t moonlighting - the screenshot of his LinkedIn bio taken on March 26, the day the rumour first surfaced, is evidence of his role as Strategic Advisor to Wavious from 2017-2022, concurrent with his respective full-time jobs. So his link to Wavious had never been a secret at BrainChip, it seems…


https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-417136


https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-417147


https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-417154

I highly suspect something is fishy with that zoominfo Wavious Employee Directory, which started the rumour, even though I am afraid I can’t explain how the names of several other (ex-) BrainChip staff and that of Mauro Diamant ended up in that directory as well. In Rob Telson’s case, there is at least a proven link to Wavious, while the company was still in business.



https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-417060



https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-417069


Regards

Frangipani
You don’t miss much. Nice work 🥸
 
  • Like
Reactions: 9 users

Slade

Top 20
 
  • Like
  • Love
  • Fire
Reactions: 50 users

CHIPS

Regular


BrainChip Adds Penn State to Roster of University AI Accelerators​



Laguna Hills, Calif. – May 8, 2024 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, today announced that Pennsylvania State University has joined the BrainChip University AI Accelerator Program, ensuring students have the tools and resources necessary to help develop leading-edge intelligent AI solutions.

Penn State is a top-ranked research university founded with a mission of high-quality teaching, expert research, and global service. The Neuromorphic Computing Lab located in Penn State’s School of Electrical Engineering and Computer Science aims to create a new type of computer that can learn and operate with brain-scale efficiency. In joining the BrainChip University AI Accelerator Program, EECS students will now have access to cutting-edge neuromorphic technology that will directly affect their communities and solve big problems that may positively impact humanity.

BrainChip’s University AI Accelerator Program provides platforms and guidance to students at higher education institutions with AI engineering programs. Students participating in the program have access to real-world, event-based technologies offering unparalleled performance and efficiency to advance their learning through graduation and beyond.

“As part of Penn State’s Neuromorphic Computing Lab, we are dedicated to bridging the gap between nanoelectronics, neuroscience and machine learning,” said Abhronil Sengupta, EECS Assistant Professor. “By joining BrainChip’s University AI Accelerator Program, we are better positioned to provide our students with resources needed to enable further research and study into neuromorphic computing. By leveraging BrainChip’s technology with our inter-disciplinary approach to data science and AI, we ensure students are ready to develop solutions for the world’s most pressing issues.”

BrainChip’s neural processor, Akida™ IP is an event-based technology that is inherently lower power when compared to conventional neural network accelerators. Lower power affords greater scalability and lower operational costs. BrainChip’s Akida supports incremental learning and high-speed inference in a wide variety of use cases. Among the markets that BrainChip’s Essential AI technology will impact are the next generation of smart cars, smart homes of today and tomorrow, and industrial IoT.

“For universities like Penn State that pride themselves on delivering a world-class education, forming partnerships with leaders like BrainChip is an ideal way to give students access to frontier technology,” said Tony Lewis, CTO of BrainChip. “We hope by making Neuromorphic Event Based technology readily available, we can give students hands on experience with a new paradigm in computation and open fundamentally new research directions in engineering. Neuromorphic event-based computing may be a solution to the inefficiencies inherent in conventional AI computation that is of growing concern to the public. It’s important that students engage early and with the right tools. BrainChip is happy to help.”

Penn State University joins Arizona State University, Carnegie Mellon University, Rochester Institute of Technology, University of Oklahoma, University of Virginia, and University of Western Australia in the accelerator program. Those at institutions of higher education interested in how they can become members of BrainChip’s University AI Accelerator Program can find more details at https://brainchip.com/brainchip-university-ai-accelerator/.
 
  • Like
  • Fire
  • Love
Reactions: 68 users

Frangipani

Top 20
Give it a break please. It's not a competition. Heaven forbid you should find out you are not perfect either!!!

Was it your tired-out running gag (pun intended) that gave you the bizarre idea I might see myself as participating in a competition?

I know very well I am not perfect - fun fact: nobody is - but so what? Does that mean I am not allowed to comment on the umpteenth silly error our company has made over the past couple of months in their communication with shareholders, potential future investors and the general public? Errors that could likely have been prevented by implementing a thorough double-checking procedure (or, in case there is already one in place, by assigning someone with better proofreading skills to the job)? This issue has been raised time and again. Sadly, it has basically become a running joke itself.

I also voiced my opinion on what I perceive to be an unsatisfactory podcast format. Are you really happy with a series of prepared statements being read out verbatim from a transcript by our CEO instead of a real conversation taking place?

The way the podcast was presented (and I haven’t even touched upon the way we found out about two key employees having left the company, or mentioned those weird clicking/tapping (microphone?) sounds that are audible throughout the podcast, particularly while Tony Dawe is speaking) and all those careless little mistakes in social media posts etc. (such as typos, getting years wrong, mixing up the Japanese and Korean flags, uploading a wrong photo, namely that of a podcast guest’s much younger namesake, confusing conference host cities etc.) do not align with the professional image that befits our company.

Luckily, the fantastic graphic design and the catchy slogans do.
 
  • Like
  • Fire
  • Love
Reactions: 17 users

Frangipani

Top 20

Another group picture taken at the Morpheus 2024 ESA Workshop on Edge AI and Neuromorphic Hardware Accelerators, this time featuring the two EDGX Co-Founders, CEO Nick Destrycker (in the middle) and CTO Wouter Benoot (second from right):

 
  • Like
  • Love
  • Fire
Reactions: 27 users

Frangipani

Top 20

TinyML: Why the Future of Machine Learning is Tiny and Bright​

by Shvetank Prakash, Emil Njor, Colby Banbury, Matthew Stewart, Vijay Janapa Reddi on May 6, 2024 | Tags: deep learning, Machine Learning, TinyML

At the dawn of the 21st century, Mark Weiser envisioned a world where computers would weave themselves into the fabric of everyday life, becoming indistinguishable from it. This prophecy of ubiquitous computing has not only materialized but has evolved beyond Weiser’s initial predictions, particularly with the advent of Tiny Machine Learning (TinyML).
TinyML sits at the intersection of machine learning and embedded systems. It is the application of ML algorithms on small, low-power devices, offering powerful AI capabilities like low-latency decision-making, energy efficiency, data privacy, and reduced bandwidth costs at the extreme edge of the network.

TinyML models already impact how we interact with technology. Keyword-spotting systems activate assistants like Amazon’s Alexa or Apple’s Siri. TinyML also offers a variety of bespoke, innovative solutions that contribute to people’s welfare. In hearing aids, TinyML enables filtering of noisy environments, helping patients with hearing impairments live more engaging lives. TinyML has also been used to detect deadly malaria-carrying mosquitoes, paving the way for millions of lives to be saved. These applications merely scratch the surface of TinyML.

This blog post provides an overview of TinyML. By exploring the evolution of TinyML algorithms, software, and hardware, we highlight the field’s progress, current challenges, and future opportunities.

Evolution of TinyML Algorithms​

Early TinyML solutions consisted of simple ML models like decision trees and SVMs. But since then, the field has progressed to deep-learning-based TinyML models, typically small, efficient convolutional neural networks. However, there are still challenges to overcome. Small devices’ limited memory and processing power constrain the complexity of models that can be used. Tiny Transformers (e.g., Jelčicová and Verhelst, Yang et al.) and recurrent neural networks are used less frequently due to memory constraints. However, emerging applications are pushing this area of research.
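
For readers who want a concrete picture of what a "small, efficient convolutional neural network" at this scale looks like, here is a minimal Keras sketch. The input shape (49 frames × 10 MFCC-style features, roughly the window used for keyword spotting) and the layer widths are illustrative assumptions, not a published reference architecture.

```python
# Minimal sketch of a TinyML-scale CNN in Keras (illustrative only).
# The input shape and layer widths are assumptions chosen to keep the
# parameter count very small, the regime typical of keyword-spotting
# models deployed on Cortex-M class MCUs.
import tensorflow as tf

def build_tiny_kws_model(num_classes: int = 12) -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(49, 10, 1))          # spectrogram-like features
    x = tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    # Depthwise-separable convolutions keep multiply-accumulates and weights low.
    x = tf.keras.layers.SeparableConv2D(16, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_tiny_kws_model()
model.summary()  # on the order of a few hundred parameters, i.e. well under a kilobyte once quantized
```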

To enable efficient model design without large engineering costs, many models are automatically tailored via Neural Architecture Search for various hardware platforms (e.g., MicroNets, MCUNetV1&2, µNAS, Data Aware NAS). Additionally, TinyML models are frequently quantized to lower-precision integer formats (e.g., int8) to reduce their size and leverage fast vectorized operations. In some cases, sub-byte quantization, such as ternary (2-bit) or binary (1-bit), is used for maximum efficiency. While TinyML primarily targets inference, on-device training offers continual learning and enhanced privacy, so research along these lines has started to emerge (e.g., PockEngine, MCUNetV3).
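
As an illustration of the int8 post-training quantization step mentioned above, the following sketch uses the TensorFlow Lite converter. `model` is the Keras model from the previous sketch, and `calibration_samples` is a hypothetical stand-in for real representative inputs from your dataset.

```python
# Post-training full-integer (int8) quantization with the TensorFlow Lite converter.
# `calibration_samples` is placeholder data; in practice you would use a slice of
# your real training or validation features.
import numpy as np
import tensorflow as tf

calibration_samples = np.random.rand(100, 49, 10, 1).astype(np.float32)  # placeholder

def representative_dataset():
    # The converter runs these samples through the float model to pick
    # quantization ranges (scales and zero points) for every tensor.
    for sample in calibration_samples:
        yield [sample[np.newaxis, ...]]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8     # fully integer I/O for MCU deployment
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("kws_int8.tflite", "wb") as f:
    f.write(tflite_model)  # this flatbuffer is what an on-device runtime executes
```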

Evolution of TinyML Software​

The ML software stack of embedded systems is often highly tuned and specialized for the target platform, making it challenging to deploy ML on these systems at scale. Various chip architectures, instruction sets, and compilers are targeted at different power scales and application markets. Therefore, TinyML frameworks must be portable, seamlessly compiling and running across various architectures while providing an easy path to optimize code for platform-specific features to meet the stringent compute latency requirements.

Frameworks like TensorFlow Lite for Microcontrollers (TFLM) attempt to address these issues. TFLM is an inference interpreter that works for various hardware targets. It has little overhead, which can be avoided via compiled runtimes in severely constrained applications (e.g., microTVM, tinyEngine, STM32CubeAI, and CMSIS-NN). But this comes at the cost of portability. TinyML systems also use lightweight real-time operating systems (OS) (e.g., FreeRTOS), but given the tight constraints, applications are often deployed bare metal without an OS.
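
Before a `.tflite` flatbuffer is dropped onto a microcontroller and executed by TFLM's C++ interpreter, it is common to sanity-check it on the host with the Python TensorFlow Lite interpreter. The sketch below assumes the `kws_int8.tflite` file produced in the previous example and uses placeholder input features; it only illustrates the int8 scale/zero-point handling that a fully quantized model requires.

```python
# Host-side check of the quantized model with the TensorFlow Lite interpreter.
# On a microcontroller the same flatbuffer would be executed by TFLM; this sketch
# just demonstrates the input quantization and output readout.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="kws_int8.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Quantize a float feature window into the int8 domain expected by the model.
scale, zero_point = input_details["quantization"]
features = np.random.rand(1, 49, 10, 1).astype(np.float32)   # placeholder features
quantized = np.clip(np.round(features / scale + zero_point), -128, 127).astype(np.int8)

interpreter.set_tensor(input_details["index"], quantized)
interpreter.invoke()
scores = interpreter.get_tensor(output_details["index"])
print("predicted class:", int(np.argmax(scores)))
```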

Evolution of TinyML Hardware​

Initial efforts in TinyML focused on running algorithms natively on existing, stock MCU hardware. The limited on-chip memory (~kBs) and often non-existent off-chip memory made this challenging and drove research on efficient model design. ARM Cortex-M class processors (e.g., Zhang et al., Lai et al.) are a popular choice for many TinyML applications, offering attractive design points for low-power IoT systems from the slim Cortex-M0/M0+ to the more advanced Cortex-M4/M7 with SIMD and FPU ISA extensions. Running TinyML on DSPs has been another early approach due to the low-power consumption and optimized ISAs for signal processing (e.g., Gautschi et al.).

As ML algorithm complexity has increased, accelerator hardware for embedded devices is emerging. Arm’s Ethos-U microNPU family of processors, starting with the Ethos-U55, can deliver up to 90% energy reduction in about 0.1 mm². Other examples include Google’s Edge TPU and Syntiant’s Neural Decision Processors. The latter can perform inference for less than 10 mW in some cases and offer 100x efficiency and 10-30x throughput compared to existing low-power MCUs. TinyML solutions are now integrated into larger systems, such as SoCs like the always-on, low-power AI sensing hubs in mobile devices.

Advancing TinyML: Opportunities for Future Innovations​

The growth of TinyML is evidenced by the increasing number of research publications, including hardware and software (extracted from Google Scholar), as shown below. The graph exhibits a noticeable “hockey stick effect,” indicating a significant acceleration in TinyML research and development in recent years. This rapid growth is further supported by the increasing number of yearly MLPerf Tiny performance and energy benchmark results, indicating the growing availability of commercialized specialized systems and solutions for consumers.

[Figure: tinyml_trends.png – TinyML hardware and software research publications per year, illustrating the rapid recent growth described above]

TinyML is growing and there are numerous challenges. Addressing them is crucial for the widespread adoption and practical application of TinyML in various domains, from IoT and wearables to industrial sensors and beyond.

Emerging Applications

Augmented and virtual reality are budding application spaces with intense performance demands requiring even smaller devices to run more complex ML applications with new abilities (e.g., EdgePC, Olfactory Computing). Orbital computing is also an exciting area of research for TinyML because local data processing is needed to avoid massive communication overheads from space (e.g., Orbital Edge Computing, Space Microdatacenters).

Energy Efficiency

Energy harvesting, intermittent computing, and smart power management (e.g., Sabovic et al., Gobieski et al., Hou et al.) are necessary to reduce power and make “batteryless” TinyML the standard. Moreover, brain-inspired, neuromorphic computing is being developed to be extremely energy-efficient, particularly for tasks like pattern recognition and sensory event data processing. This approach contrasts with traditional Von Neumann architectures (e.g., NEBULA, TSpinalFlow), and companies like BrainChip have already brought neuromorphic computing to commercial applications.

Cost-Efficiency

Cost is another critical element. Non-recurring engineering costs can be non-starters for TinyML applications. There has been initial work to see how Generative AI can be used for TinyML accelerator design to help minimize these expensive overheads. Other works exploring more economical alternatives to silicon, such as printed and flexible electronics (e.g., Bleier et al., Mubarik et al., FlexiCores), have demonstrated the ability to run simple ML algorithms using emerging technologies. Not only do flexible electronics provide ultra-low-cost solutions, but their form factor and extremely lightweight nature are suitable for many medical applications. However, brain-computer interface research (e.g., HALO, SCALO) has highlighted the current challenges of running complex processing algorithms (including ML-based signal processing) to decode or stimulate neural activity due to extreme constraints.

Privacy and Ethical Considerations

Privacy is critical as TinyML devices become widespread, especially in applications dealing with personal information. To address this, the paradigm of Machine Learning Sensors segregates sensor input data and ML processing from the wider system at the hardware level, offering increased privacy and accuracy through a modular approach. This then introduces the need for datasheets for ML Sensors that can provide transparency about functionality, performance, and limitations to enable integration and deployment. But even then, it is important to consider the materiality and risks associated with AI sensors, such as data privacy, security vulnerabilities, and potential misuse, to ensure the development of trustworthy and ethically sound TinyML systems.

Environmental Sustainability Challenges

Is TinyML Sustainable? It would be remiss not to mention that TinyML could lead to an “Internet of Trash” if we do not consider the system’s environmental footprint when developing billions of TinyML devices. The growth of IoT devices and their disposability can lead to a massive increase in electronic waste (e-waste) if not properly addressed. Moreover, the carbon footprint associated with the life cycle of these devices can be significant when accounting for their large-scale deployments. Therefore, assessing the environmental impacts of TinyML will become more important. This situation is analogous to the issue of plastic bags: while a single bag or TinyML device might have a negligible effect on the environment, the cumulative impact of billions of them could be substantial.

Getting Started with TinyML​

TinyML-specific Datasets: Conventional ML datasets are designed for tasks that exceed TinyML’s computational and resource constraints, so the TinyML community has curated specialized datasets to better fit the tasks currently performed by TinyML. Notable examples include the Google Speech Commands dataset for keyword spotting tasks, the Multilingual Spoken Words Corpus containing spoken words in 50 languages, the Visual Wake Words (VWW) dataset for tiny vision models in always-on applications, and the Wake Vision dataset, which is 100 times larger than VWW and specifically curated for person detection in TinyML.
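
As a starting point with one of these datasets, the Google Speech Commands set can be pulled through TensorFlow Datasets. The sketch below assumes the TFDS registration name `speech_commands` and its `audio`/`label` features; exact feature names and versions can vary between TFDS releases.

```python
# Loading the Google Speech Commands dataset for a keyword-spotting baseline.
# Dataset name and feature keys are assumed from TensorFlow Datasets; verify
# against your installed tfds version before relying on them.
import tensorflow_datasets as tfds

ds, info = tfds.load("speech_commands", split="train", with_info=True)
print(info.features)             # raw audio waveform plus an integer keyword label

for example in ds.take(1):
    audio = example["audio"]     # 1-D waveform sampled at 16 kHz
    label = example["label"]
    print(audio.shape, info.features["label"].int2str(int(label)))
```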

TinyML System Benchmarks: MLCommons developed MLPerf Tiny to benchmark TinyML systems. FPGA frameworks such as hls4ml and CFU Playground take this further by integrating MLPerf Tiny into a full-stack open-source framework, providing the tools necessary for hardware-software co-design. These benchmarks enable researchers and developers to collaborate, share ideas, and drive TinyML forward through friendly competition.

Educational Resources: Several universities, including Harvard, MIT, and the University of Pennsylvania, have developed courses on TinyML. Even massively open online courses (MOOCs) on Coursera and HarvardX exist. The open-source Machine Learning Systems book introduces ML systems through the lens of TinyML; it is open for community contributions via its GitHub repository. Collectively, these resources cover the entire MLOps pipeline in a classroom setting, widening access to applied machine learning, especially through TinyML4D, which supports TinyML in developing countries.

The TinyML Community: At the heart of all this activity is the TinyML Foundation, a vibrant and growing non-profit organization with over 20,000 participants from academia and industry, spread across 41 countries. The foundation sponsors and supports the annual TinyML Research Symposium to nurture research in this burgeoning area. They also support an active TinyML YouTube channel featuring weekly talks and host annual competitions to bootstrap the community.

Conclusion​

As we look towards the future of TinyML, it’s intriguing to consider how these technologies might reshape our daily lives. Qualcomm’s video, released on April Fools Day in 2008, offers a lighthearted yet thought-provoking glimpse into a world where TinyML, coupled with deeply embedded systems, could open up some wildly new possibilities.





Throughout history, we have seen powerful examples of small things making a significant impact. The transistor revolutionized electronics. The Network of Workstations in the 1990s demonstrated how hundreds of networked workstations could surpass the capabilities of centralized mainframes. Similarly, billions of TinyML models may one day replace our current insatiable trend toward increasingly large ML models. Our cover image serves as a visual metaphor for this potential, depicting many small fish overpowering a larger fish, illustrating the power of TinyML.

TinyML presents a unique opportunity. While tech giants are currently focused on large-scale, resource-intensive ML projects, TinyML offers a vast and accessible playground where individuals and small teams can innovate, experiment, and contribute to advancing machine learning in ways that directly impact people’s lives.

As we continue to develop new TinyML solutions, one thing is clear—the future of ML is tiny and bright!


About the Authors: Shvetank is a third-year Ph.D. student focused on open-source and flexible TinyML hardware architectures. Emil and Colby are third- and fifth-year Ph.D. students, respectively, focused on TinyML benchmarks, datasets, and model design. Matthew is a postdoc focused on the ethical and environmental implications of TinyML deployments in the real world. Vijay is John L. Loeb Associate Professor of Engineering and Applied Sciences at Harvard University who is passionate about TinyML and believes that the future of ML is (indeed) tiny and bright.

Disclaimer: These posts are written by individual contributors to share their thoughts on the Computer Architecture Today blog for the benefit of the community. Any views or opinions represented in this blog are personal, belong solely to the blog author and do not represent those of ACM SIGARCH or its parent organization, ACM.
 
  • Like
  • Fire
  • Love
Reactions: 26 users