For embedded IoT on Cortex-M we are much closer to the top. Regardless of which regulations are set, the kind of security Akida can provide should be a huge advantage.
Exceptionally well written. Lots of great content on the Edge Impulse YouTube channel, often featuring BrainChip smarts.
Daniel Situnayake
Head of ML, Edge Impulse
nn to cpp: What you need to know about porting deep learning models to the edge
Mar 21, 2023
It’s been incredibly exciting to see the big surge of interest in AI at the edge—the idea that we can run sophisticated AI algorithms on embedded devices—over the past couple of weeks. Folks like @ggerganov, who ported Whisper and LLaMA to C++, @nouamanetazi, who ported BLOOM, and @antimatter15, who ported Stanford Alpaca have pulled back the curtain and shown the community that deep neural networks are just code, and they will run on any device that can fit them.
I’ve been building on-device AI for the past four years—first at Google, where we launched TensorFlow Lite for Microcontrollers, and now at Edge Impulse, where I’m Head of Machine Learning—and I’ve written a couple of books along the way. I am thrilled to see so much interest and enthusiasm from a ton of people who are new to the field and have wonderful ideas about what they could build.
Embedded ML is a big field to play in. It makes use of the entire history of computer science and electrical engineering, from semiconductor physics to distributed computing, laid out end to end. If you’re just starting out, it can be a bit overwhelming, and some things may come as a surprise. Here are my top tips for newcomers:
All targets are different. Play to their strengths.
Edge devices span a huge range of capabilities, from GPU-powered workhorses to ultra-efficient microcontrollers. Every unique system (which we call a target) represents a different set of trade-offs: RAM and ROM availability, clock speed, and processor features designed to speed up deep learning inference, along with peripherals and connectivity features for getting data in and out. Mobile phones and Raspberry Pi-style computers are at the high end of this range; microcontrollers are the mid- to low end. There are even purpose-built deep learning accelerators, including neuromorphic chips—inspired by human neurons’ spiking behavior—designed for low latency and energy efficiency.
There are billions of microcontrollers (aka MCUs) manufactured every year; if you can run a model on an MCU, you can run it anywhere. In theory, the same C++ code should run on any device—but in practice, every line of processor has custom instructions that you’ll need to make use of in order to perform computation fast. There are orders-of-magnitude performance penalties for running naive, unoptimized code. To make matters worse, optimization for different deep learning operations varies from target to target, and not all operations are equally supported. A simple change in convolutional stride or filter size may result in a huge performance difference.
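To make that penalty concrete, here is a rough desktop-scale analogy (my illustration, not from the post): the same matrix multiplication written as a naive loop versus handed to an optimized kernel. On an MCU the fast path would be a vendor kernel built on SIMD/DSP intrinsics rather than numpy's BLAS, but the shape of the problem is the same.

```python
# Illustrative only: the "naive code penalty", measured on a desktop.
# On an embedded target the optimized path would be a vendor-provided
# kernel using custom instructions, not numpy's BLAS backend.
import time
import numpy as np

a = np.random.rand(128, 128).astype(np.float32)
b = np.random.rand(128, 128).astype(np.float32)

def naive_matmul(x, y):
    """Textbook triple loop: portable, correct, and very slow."""
    out = np.zeros((x.shape[0], y.shape[1]), dtype=np.float32)
    for i in range(x.shape[0]):
        for j in range(y.shape[1]):
            acc = 0.0
            for k in range(x.shape[1]):
                acc += x[i, k] * y[k, j]
            out[i, j] = acc
    return out

t0 = time.perf_counter()
naive_matmul(a, b)
t1 = time.perf_counter()
a @ b  # same arithmetic, optimized kernel
t2 = time.perf_counter()
print(f"naive: {t1 - t0:.3f}s  optimized: {t2 - t1:.6f}s")
```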
The matrix of targets versus models is extraordinarily vast, and traditional tools for optimization are fragmented and difficult to use. Every vendor has their own toolchain, so moving from one target to another is a challenge. Fortunately, you don’t need to hand-craft C++ for every model and target. There are high-level tools available (like Edge Impulse) that will take an arbitrary model and generate optimized C++ or bytecode designed for a specific target. They’ll let you focus on your application and not worry so much about the implementation details. And you’re not always stuck with fixed architectures: you can design a model specifically to run well on a given target.
Compression is lossy. Quantify that loss.
It’s common to compress models so that they fit in the constrained memory of smaller devices and run faster (using integer math) on their limited processors. Quantization is the most important form of compression for edge AI. Other approaches, like pruning, are still waiting for adequate hardware support.
Quantization involves reducing the precision of the model’s weights, for example from 32 to 8 bits. It can routinely get you anything from a 2x to an 8x reduction in model size. Since there’s no such thing as a free lunch, this shrinkage will result in reduced model performance. Quantization results in forgetting, as explained in this fantastic paper by my friend @sarahookr (https://arxiv.org/abs/1911.05248). As the model loses precision, it loses performance on the “long tail” of samples in its dataset, especially those that are infrequent or underrepresented.
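As a minimal sketch of what the reduction actually does (my illustration, not from the paper): map float32 weights onto int8 with a scale and zero point, then measure what the round trip loses.

```python
# Minimal 8-bit affine quantization sketch: q = round(w / scale) + zero_point.
import numpy as np

w = np.random.randn(1000).astype(np.float32)  # stand-in for a weight tensor

scale = (w.max() - w.min()) / 255.0            # one float step per int8 step
zero_point = np.round(-w.min() / scale) - 128  # aligns w.min() with -128

q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)

# Dequantize and quantify the loss: worst-case error is about scale / 2.
w_restored = (q.astype(np.float32) - zero_point) * scale
print("max round-trip error:", np.abs(w - w_restored).max())
```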
Forgetting can lead to serious problems, amplifying any bias in the dataset, so it’s absolutely critical to evaluate a quantized model against the same criteria as its full-sized progenitor in order to understand what was lost. For example, a quantized translation model may lose its abilities unevenly: it might “forget” the languages or words that occur least frequently in its training data.
Typically, you can get a roughly 4x reduction in size (from 32 to 8 bits) with potentially minimal performance impact (always evaluate), and without doing any additional training. If you quantize deeper than 8 bits, it’s generally necessary to do so during training, so you’ll need access to the model’s original dataset.
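For reference, post-training quantization to full int8 looks roughly like this with the TensorFlow Lite converter. This is a hedged sketch: the tiny model and random calibration data are placeholders I invented so the snippet runs end to end; substitute your trained model and a slice of real training data.

```python
import numpy as np
import tensorflow as tf

# Stand-in model and calibration data (assumptions for this sketch).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])
calibration_data = np.random.rand(200, 32, 32, 1).astype(np.float32)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # A few hundred real samples let the converter calibrate activation ranges.
    for sample in calibration_data:
        yield [sample[np.newaxis, ...]]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

# Evaluate the quantized model against the same criteria as the original
# (see above) before trusting the roughly 4x size reduction.
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```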
A fun part of edge AI product design is figuring out how to make clever use of models that have been partially compromised by their design constraints. Model output is just one contextual clue that you can use to understand a given situation. Even an unreliable model can contribute to an overall system that feels like magic.
Devices have sensors. Learn how to use them.
While it’s super exciting to see today’s large language models ported to embedded devices, they’re only scratching the surface of what’s possible. Edge devices can be equipped with sensors—everything from cameras to radar—that can give them contextual information about the world around them. Combined with deep learning, sensor data gives devices incredible insight into everything from industrial processes to the inner state of a human body.
Today’s large, multimodal models are built using web-scraped data, so they’re biased towards text and vision. The sensors available to an embedded device go far beyond that—you can capture motion, audio, any part of the EM spectrum, gases and other chemicals, and human biosignals, including EEG data representing brain activity! I’m most excited to see the community make use of this additional data to train models that have far more insight than anything possible on the web.
Raw sensor data is highly dimensional and noisy. Digital signal processing algorithms help us sift the signal from the noise. DSP is an incredibly important part of embedded engineering, and many edge processors have on-board acceleration for DSP. As an ML engineer, learning basic DSP gives you superpowers for handling high frequency time series data in your models.
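As a small, hedged example of that kind of preprocessing (the signal and its parameters are invented for illustration): turning a noisy high-frequency time series into a log spectrogram, a typical model input for audio and vibration tasks.

```python
# DSP sketch: sift signal from noise with a short-time Fourier transform.
import numpy as np
from scipy import signal

fs = 16000  # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic "sensor" signal: a 400 Hz tone buried in noise.
x = np.sin(2 * np.pi * 400 * t) + 0.5 * np.random.randn(t.size)

# Spectrogram: a time-frequency energy map the model can digest.
freqs, frames, spec = signal.spectrogram(x, fs=fs, nperseg=256)

# Log scaling compresses dynamic range, a common step before inference.
features = np.log(spec + 1e-10)
print(features.shape)  # frequency bins x time frames
```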
We can’t rely on pre-trained models.
A lot of ML/AI discourse revolves around pre-trained models, like LLaMA or ResNet, which are passed around as artifacts and treated like black boxes. This approach works fine with data with universal structural patterns, like language or photographs, since any user can provide compatible inputs. It falls apart when the structure of data starts to vary from device to device.
For example, imagine you’ve built an edge AI device with on-board sensors. The model, calibration, and location of these sensors, along with any signal processing, will affect the data they produce. If you capture data with one device, train a model, and then share it with another developer who has a device with different sensors, the data will be different and the model may not work.
Devices are infinitely variable, and as physical objects, they can even change over time. This makes pre-trained models less useful for AI at the edge. Instead, we train custom models for every application. We often use pre-trained models as feature extractors: for example, we might use a pre-trained MobileNet to obtain high-level features from an image sensor, then input those into a custom model—alongside other sensor data—in order to make predictions.
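Here is a sketch of that feature-extractor pattern, under assumed shapes and layer sizes (none of these specifics come from the post): a frozen MobileNetV2 backbone produces high-level image features, which are concatenated with other sensor features and fed to a small custom head.

```python
import tensorflow as tf

image_in = tf.keras.Input(shape=(96, 96, 3))
sensor_in = tf.keras.Input(shape=(16,))  # e.g. processed accelerometer stats

# Frozen pre-trained backbone used purely as a feature extractor.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, pooling="avg",
    weights="imagenet", alpha=0.35)
backbone.trainable = False

# Fuse image features with other sensor data in a small custom head.
features = tf.keras.layers.Concatenate()([backbone(image_in), sensor_in])
hidden = tf.keras.layers.Dense(32, activation="relu")(features)
output = tf.keras.layers.Dense(3, activation="softmax")(hidden)  # 3 classes

model = tf.keras.Model([image_in, sensor_in], output)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```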
Making on-device the norm.
I’m confident that edge AI will enable a world of ambient computing, where our built environment is imbued with subtle, embodied intelligence that improves our lives in myriad ways—while remaining grounded in physical reality. It’s a refreshing new vision, diametrically opposed to the narrative of inevitable centralization that has characterized the era of cloud compute.
The challenges, constraints, and opportunities of embedded machine learning make it the most fascinating branch of computer science. I’m incredibly excited to see the field open up to new people with diverse perspectives and bold ideas.
Our goal at Edge Impulse is to make this field accessible to everyone. If you’re an ML engineer, we have tools to help you transform your existing models into optimized C++ for pretty much any target—and use digital signal processing to add sensor data to the mix. If you’re an embedded engineer or domain expert otherwise new to AI, we take the mystery out of the ML parts: you can upload data, train a model, and deploy it as a C++ library without needing a PhD. It’s easy to get started; just take a look at our docs.
It’s been amazing to see the gathering momentum behind porting deep learning models to C++, and it comes at an exciting time for us. Inside the company, I’ve been personally leading an effort to make this kind of work quick, easy, and accessible for every engineer. Watch this space: we’ll soon be making some big announcements that will open up our field even more.
Warmly,
Dan
Daniel Situnayake's blog: My thoughts on embedded machine learning.
BrainChip’s Neuromorphic Technology Enables Intellisense Systems to Address Needs for Next-Generation Cognitive Radio Solutions
March 21, 2023 05:30 PM Eastern Daylight Time
LAGUNA HILLS, Calif.--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, today announced that Intellisense Systems Inc. has selected its neuromorphic technology to improve the cognitive communication capabilities on size, weight and power (SWaP) constrained platforms (such as spacecraft and robotics) for commercial and government markets.
Intellisense's intelligent radio frequency (RF) system solutions enable wireless devices and platforms to sense and learn the characteristics of the communications environment in real time, providing enhanced communication quality, reliability and security. By integrating BrainChip’s Akida™ neuromorphic processor, Intellisense can deliver even more advanced, yet energy efficient, cognitive capabilities to its RF system solutions.
One such project is the development of a new Neuromorphic Enhanced Cognitive Radio (NECR) device to enable autonomous space operations on SWaP-constrained platforms. Intellisense’s NECR technology provides NASA with numerous applications and can be used to enhance the robustness and reliability of space communication and networking, especially cognitive radio devices. Smart sensing algorithms will be implemented on neuromorphic computing hardware, including Akida, and then integrated with radio frequency modules as part of a Phase II prototype.
"We are excited to partner with BrainChip and leverage their state-of-the-art neuromorphic technology," said Frank T. Willis, President and CEO of Intellisense. "By integrating BrainChip's Akida processor into our cognitive radio solutions, we will be able to provide our customers with an unparalleled level of performance, adaptability and reliability."
BrainChip's Akida processor is a revolutionary computing architecture that is designed to process neural networks and machine learning algorithms at ultra-low power consumption, making it ideal for edge computing applications. By utilizing this cutting-edge technology, Intellisense will be able to deliver cognitive radio solutions that are faster, more efficient and more reliable than ever before.
"Intellisense provides advanced sensing and display solutions and we are thrilled to be partnering with them to deliver the next generation of cognitive radio capabilities," said Sean Hehir, CEO of BrainChip. "Our Akida processor is uniquely suited to address the demanding requirements of cognitive radio applications, and we look forward to continue partnering with Intellisense to deliver cutting-edge embedded processing with AI on-chip to their customers."
The 55-minute Microsoft founder interview is fantastic. He states he shifted his workload back to AI full time a year or two ago, due to all the AI breakthroughs.
Bill Gates: AI is most important tech advance in decades
The former Microsoft boss says AI is the second revolutionary technology he's seen in his lifetime.
www.bbc.com
Ok, so what you are saying is that you don’t know, but you have assumptions. I wouldn’t say that just because you can’t prove one thing, it has to be someone else. Sometimes it’s ok to simply accept that we don’t know. Going by Prophesee’s words, Akida is the best solution to process their sensory input. But if Qualcomm have a preference in what they want to use, then I don’t think Prophesee would argue with them over it. But I am inclined to believe that Prophesee have given Qualcomm data points to showcase what Akida can do. Just an opinion.
We were watching the Qualcomm demonstration of Snapdragon with blur-free camera advancements at CES via Prophesee. Assumptions were made that Akida could easily deliver this major advancement, but this was never proven, and therefore alternatives may have been in play; SynSense were the other Prophesee client offering blur-free action shots. Others have speculated that Qualcomm may have other in-house abilities to produce blur-free content. But the belief still remains that Prophesee has Akida, which was described as the missing piece of the puzzle for Prophesee to fulfill its goal of bringing market-leading sports pictures with no blur of ball or racquet.
If I were the CTO of Brainchip, I would publish a video with a test within two weeks, with a title something like “E-fuel detection vs. diesel or gasoline”. Some politicians here argue that this is technically difficult to realize. So far there have been impressive fun videos about wine or beer. I understand the background with food. But I would do it now and release it globally in two weeks at the latest.
A compromise proposal by the EU Commission regarding e-fuels states that the vehicle must recognize whether e-fuels or conventional fuels are being refueled. I know of a sensor system that can taste and smell. A new field for Akida when it comes? Brand new, submitted today.
Sounds like a big, big deal, correct? More fuel in the Brainchip pipeline?
Renesas to Acquire Panthronics to Extend Connectivity Portfolio with Near-Field Communication Technology
Enhanced Connectivity Portfolio to Capture Growing Market Opportunities for Fintech, IoT, Asset Tracking, and Wireless Charging
March 22, 2023 03:00 AM Eastern Daylight Time
TOKYO--(BUSINESS WIRE)--Renesas Electronics Corporation (TSE:6723), a premier supplier of advanced semiconductor solutions, today announced its wholly owned subsidiary has entered into a definitive agreement with the shareholders of Panthronics AG (“Panthronics”), a fabless semiconductor company specializing in high-performance wireless products, under which Renesas will acquire Panthronics in an all-cash transaction. The acquisition will enrich Renesas’ portfolio of connectivity technology, extending its reach into high-demand Near-Field Communication (NFC) applications in fintech, IoT, asset tracking, wireless charging, and automotive applications.
NFC has emerged as a de facto standard in the digital economy and touches many aspects of daily life. Fintech, such as mobile point-of-sale (mPoS) terminals and contactless payment, IoT, asset tracking, and wireless charging are highlights of NFC’s increasing presence. Headquartered in Graz, Austria, Panthronics has been offering advanced NFC chipsets and software that are easy to apply, innovative, small-in-size, and highly efficient for payment, IoT, and NFC wireless charging. Renesas and Panthronics have been addressing the rising demand of NFC as partners since 2018. Acquiring Panthronics’ competitive NFC technology will provide Renesas with in-house capability to instantly capture growing and emerging market opportunities for NFC.
Combining Panthronics’ NFC technology with Renesas’ broad product portfolio and security functions in microcontrollers (MCU) / microprocessors (MPU) will provide Renesas’ wide customer base with a multitude of options to create innovative, ready-to-market NFC system solutions. Renesas and Panthronics have already launched four joint designs of NFC system solutions to date. These include solutions catering for mPoS terminals, wireless charging, and wall box smart metering platforms. The companies have also developed an NFC connectivity board that is fully integrated into the Renesas Quick-Connect Studio ecosystem, which allows customers to add features quickly and easily to MCU development boards. This enables a “plug and play” addition of full-featured, high-end NFC connectivity. Several more systems for PoS, IoT, wireless charging, and mobile are in development. Furthermore, the merits of Panthronics’ technology are also expected to be leveraged for Renesas’ automotive solutions, such as digital key management.
“Connectivity has been a priority area of ours, expanding and differentiating the realm of solutions we offer,” said Hidetoshi Shibata, President and CEO of Renesas. “We see tremendous opportunities for Panthronics’ NFC connectivity technology to benefit our customers in growing areas that span across fintech, IoT, and automotive spheres.”
The acquisition has been unanimously approved by the board of directors of Renesas and is expected to close by the end of the calendar year 2023, subject to required regulatory approval and customary closing conditions.
About Renesas Electronics Corporation
Renesas Electronics Corporation (TSE: 6723) empowers a safer, smarter and more sustainable future where technology helps make our lives easier. A leading global provider of microcontrollers, Renesas combines our expertise in embedded processing, analog, power and connectivity to deliver complete semiconductor solutions. These Winning Combinations accelerate time to market for automotive, industrial, infrastructure and IoT applications, enabling billions of connected, intelligent devices that enhance the way people work and live. Learn more at renesas.com. Follow us on LinkedIn, Facebook, Twitter, YouTube, and Instagram.
Cautionary note regarding forward-looking statements
This announcement may contain certain statements that are, or may be deemed to be, forward-looking statements with respect to the financial condition, results of operations and business of Renesas and/or Panthronics and/or the combined group following completion of the Acquisition and certain plans and objectives of Renesas with respect thereto. These forward-looking statements can be identified by the fact that they do not relate to historical or current facts. Forward-looking statements also often use words such as ‘anticipate’, ‘target’, ‘continue’, ‘estimate’, ‘expect’, ‘forecast’, ‘intend’, ‘may’, ‘plan’, ‘goal’, ‘believe’, ‘hope’, ‘aims’, ‘could’, ‘project’, ‘should’, ‘will’ or other words of similar meaning. These statements are based on assumptions and assessments made by Renesas and/or Panthronics (as applicable) in light of their experience and perception of historical trends, current conditions, future developments and other factors they believe appropriate. By their nature, forward-looking statements involve risk and uncertainty, because they relate to events and depend on circumstances that will occur in the future and the factors described in the context of such forward-looking statements in this announcement could cause actual results and developments to differ materially from those expressed in or implied by such forward-looking statements. Although it is believed that the expectations reflected in such forward-looking statements are reasonable, no assurance can be given that such expectations will prove to be correct and you are therefore cautioned not to place undue reliance on these forward-looking statements which speak only as at the date of this announcement.
Forward-looking statements are not guarantees of future performance. Such forward-looking statements involve known and unknown risks and uncertainties that could significantly affect expected results and are based on certain key assumptions. Many factors could cause actual results to differ materially from those projected or implied in any forward-looking statements. Due to such uncertainties and risks, readers are cautioned not to place undue reliance on such forward-looking statements, which speak only as of the date of this announcement. Neither Renesas nor Panthronics undertake any obligation to update or revise any forward-looking statement as a result of new information, future events or otherwise, except as required by applicable law.
There are several factors which could cause actual results to differ materially from those expressed or implied in forward-looking statements. Among the factors that could cause actual results to differ materially from those described in the forward-looking statements are changes in the global, political, economic, business and competitive environments, market and regulatory forces, future exchange and interest rates, changes in tax rates and future business combinations or dispositions. If any one or more of these risks or uncertainties materializes or if any one or more of the assumptions prove incorrect, actual results may differ materially from those expected, estimated or projected. Such forward looking statements should therefore be construed in the light of such factors.
No member of the Renesas group or the Panthronics group nor any of their respective associates, directors, officers, employees or advisers, provides any representation, assurance or guarantee that the occurrence of the events expressed or implied in any forward-looking statements in this announcement will actually occur.
Except as expressly provided in this announcement, no forward-looking or other statements have been reviewed by the auditors of the Renesas group or the Panthronics group. All subsequent oral or written forward-looking statements attributable to any member of the Renesas group or the Panthronics group, or any of their respective associates, directors, officers, employees or advisers, are expressly qualified in their entirety by the cautionary statement above.
Hi Dhm.
We were watching the Qualcomm demonstration of Snapdragon with blur-free camera advancements at CES via Prophesee. Assumptions were made that Akida could easily deliver this major advancement, but this was never proven, and therefore alternatives may have been in play; SynSense were the other Prophesee client offering blur-free action shots. Others have speculated that Qualcomm may have other in-house abilities to produce blur-free content. But the belief still remains that Prophesee has Akida, which was described as the missing piece of the puzzle for Prophesee to fulfill its goal of bringing market-leading sports pictures with no blur of ball or racquet.
Lol, Bard was asked what months come after January and February and it said Marchuary, Apriluary, Mayuary, Juneuary… and so on, you get the picture haha.
Bard: Google's rival to ChatGPT launches for over-18s
The tech giant is rolling out its new AI chatbot, called Bard, to users in the US and UK first.
www.bbc.com
Google has started rolling out its AI chatbot Bard, but it is only available to certain users and they have to be over the age of 18.
Unlike its viral rival ChatGPT, it can access up-to-date information from the internet and has a "Google it" button which accesses search.
I noticed a few people have been searching ChatGPT for news on Brainchip and have sourced some unknown information, but I think it was only relevant up to 2021; at least if you use Bard, its searches are up to date.
Very interesting, since Brainchip is friends with some/many!
Agree. A new series of updated demo videos with the same theme would be professional. Get rid of the old cringeworthy videos. How are computer scientists, engineers and product developers supposed to convince managers that this is the future? I was watching the videos from Texas Instruments and they were impressive. Time to show what this technology can do.
If I were the CTO of Brainchip, I would publish a video with a test within two weeks, with a title something like “E-fuel detection vs. diesel or gasoline”. Some politicians here argue that this is technically difficult to realize. So far there have been impressive fun videos about wine or beer. I understand the background with food. But I would do it now and release it globally in two weeks at the latest.
Or do you think that Akida can't recognize e-fuels?
____
Decisions will be made soon whether we like them or not. We can show politicians that it is possible, and with that they can make decisions. I just can't imagine that the Brainchip nerds can't prove that. Investment very low and attention maximum.
___
Some of you know them personally. Send them this and they should also send it directly to the EU Commission.