BRN Discussion Ongoing

Damo4

Regular
Although I hate to say this, unless we see multiple IP contracts or significant revenue on our financial statements, the share price will continue to get manipulated hard... (it gets manipulated even with strong financials lol)

I also get frustrated from time to time and have complained a lot about the lack of updates to shareholders, but I'm glad to see that management's main focus was to build partnerships and join ecosystems so as to be the key player and industry standard for the Edge AI sector.

As they stated in the Annual Report and 2nd Gen Platform announcement, they will be focusing on executing the IP agreements and generating revenue growth. The rerate should be coming soon with new IP agreements IMO :)

Mr Hehir added, “The development of the second generation of Akida was strongly influenced by our customers’ feedback and driven by our extensive market engagement. We have recently expanded our sales organisation to become truly global and we are focused on executing more IP licence agreements and generating revenue growth over coming years.”

The way I see it is customer acquisition, sales and then revenue and growth take time.
It's not like we have a Theranos box, this is proven tech.
Unfortunately if anyone wants a sharp increase in SP in the next year they are likely to be disappointed.
With an investment horizon of over a year, which really isn't that long, I personally don't care about the SP attacks, and as I posted in the other thread, this is a great buying opportunity. Given the progress this company has achieved in the last 12-24 months while the SP has stayed just as low, it's easy money IMO.
 
Reactions: 15 users

Mccabe84

Regular
The way I see it is customer acquisition, sales and then revenue and growth take time.
It's not like we have a Theranos box, this is proven tech.
Unfortunately if anyone wants a sharp increase in SP in the next year they are likely to be disappointed.
With an investment horizon of over a year, which really isn't that long, I personally don't care about the SP attacks, and as I posted in the other thread, this is a great buying opportunity. The progress this company has achieved in the last 12-24 months, whilst having an SP just as low is easy money IMO.
I would be happy with another 2 IP deals signed by the end of the year. Surely that’s not asking too much.
 
Reactions: 15 users

IloveLamp

Top 20
Screenshot_20230308_084944_LinkedIn.jpg
Screenshot_20230308_084934_LinkedIn.jpg
 
Reactions: 32 users

TechGirl

Founding Member
Morning All,

So much exciting news going on.

Our new Integration Partner Teksun seems like a great fit for us, with endless use cases...

Teksun recent blog post below






How AI and IoT are Transforming Smart Homes?


Feb 23 2023



AI and the Internet of Things are driving the expansion of the smart home market. As home automation solutions have become more affordable, smart living with automation and integrated AI-IoT is no longer considered a luxury. Smart home control can be provided by local hardware or cloud-based intelligence.


According to a recent study, the smart home market is expected to grow at a 27.01% annual rate and reach a value of $537 billion by 2030. AI is one of the driving forces behind this expansion.

As AI continues to expand automation’s capabilities, such as replicating human decision-making and anticipating human behavior, it offers huge benefits in terms of convenience and smart support.

AI in Smart Homes

The application of AI to managing smart home infrastructure helps gather data from home automation devices, anticipate user behavior, provide maintenance data, and improve data security and privacy. Because it can perform certain activities automatically for the user, its presence in home automation enables us to control our home appliances, safeguard our houses, and so on, while reducing the need for human intervention.
This automation relies heavily on the data acquired by the devices, which is used to train a range of machine learning and deep learning models. Smart home-linked devices provide the data, and the AI learns from it to perform particular activities without human interaction.
For example, Teksun's thermostats automatically learn from their customers' behavior how to operate, and then use that information to adjust the temperature when someone is home or run energy-efficiently when no one is.
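As a toy sketch of that kind of behavioral learning (a hypothetical illustration, not Teksun's actual algorithm), a thermostat could keep a per-hour average of the user's manual setpoints and fall back to an energy-saving temperature when nobody is home:

```python
from collections import defaultdict

class LearningThermostat:
    """Toy thermostat that averages the user's manual setpoints per hour of day."""
    def __init__(self, eco_setpoint=16.0):
        self.eco_setpoint = eco_setpoint      # used when nobody is home
        self.history = defaultdict(list)      # hour -> observed setpoints

    def record(self, hour, setpoint):
        """Remember a manual adjustment the user made at a given hour."""
        self.history[hour].append(setpoint)

    def target(self, hour, occupied):
        """Predict the target temperature for this hour."""
        if not occupied:
            return self.eco_setpoint          # go energy-efficient
        samples = self.history.get(hour)
        if not samples:                       # no data for this hour yet
            all_points = [s for v in self.history.values() for s in v]
            return sum(all_points) / len(all_points) if all_points else 20.0
        return sum(samples) / len(samples)

t = LearningThermostat()
t.record(hour=7, setpoint=21.0)
t.record(hour=7, setpoint=23.0)
print(t.target(7, occupied=True))    # learned 7am average -> 22.0
print(t.target(7, occupied=False))   # nobody home -> eco setpoint 16.0
```

Real products would use far richer models, but the loop is the same: observe behavior, aggregate it, and act on the prediction.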

The Internet of Things in Smart Homes

IoT allows connected devices, vehicles, buildings, and other items embedded with software, sensors, and internet connectivity to communicate with one another; they can either be operated remotely or relay data to a distant user via AI. With the help of AI, these linked devices can monitor the status of every device connected to the same network and offer real-time data.

Important Considerations for Any Smart Home System


1. Data security and privacy are the two most important issues that any AI and IoT-enabled smart home should solve. Every connected device leaves digital traces of personal data that must be kept safe and secure.

2. Proper AI and IoT integration enables devices to perform more automatically and with expanded features. Security cameras, for example, often warn of threats automatically, but with correct AI integration, they will proactively alert humans to take charge of the situation when something goes wrong.

3. Interoperability is a critical issue that must be addressed by any home automation tool. Smart home devices should be made interoperable so that new use cases such as energy saving, appliance diagnostics, disaster damage prevention, and so on can be applied to the same smart devices.

4. Better customer service is an essential component of any organization. People living in smart homes may face issues inside their IoT environment, ranging from minor troubleshooting to major data protection concerns. Companies that deliver superior customer service will always be ahead of the competition.

5. Incorporating voice commands allows the user to save time and money and alleviates certain laborious activities. Voice control of devices and home appliances should be prioritized, because providing user-friendly services always benefits the business.

How will the convergence of AI and IoT affect smart homes?

AI in smart homes can translate raw sensor data from connected smart devices into beneficial behavior patterns in our daily lives. AI-enabled gadgets learn the occupants' patterns and anticipate the best experience: they will not turn on the heating, fan, or lights if there is no one in the house, and they will automatically lock the doors when the house is empty.
A perfect scenario would be for a user to prepare meals in a smart oven or stove while AI checks the meal’s internal temperature. If the meal reaches the ideal temperature, the AI can lower the cooking temperature to prevent it from burning. The AI would notify the user when the meal was ready to be taken from the oven or burner.
Artificial intelligence (AI) may be able to learn and anticipate a user's desires. For example, a smart kitchen may be set up to begin cooking before the user reaches home.
The promise of IoT and AI isn’t restricted to new homes; there are a variety of options that allow current devices, such as switches, to be converted to Smart Switches and old air conditioners to be updated to provide remote access via Smart Apps or AI-based on cloud servers, among other things.
Wireless solutions facilitate deployment, requiring no major electrical or common labor to get the user to Smart Living. Almost any present switch, air conditioner, or light can be converted to IoT-enabled via various brand-agnostic retrofit methods.
The combination of AI and IoT in the smart home is a winning combo for tech-savvy households. AI-enabled personalization, rather than historical usage, can assist your home in keeping track of how you go about your everyday routine. AI and smart home automation have come to a crossroads. Significant gains will be realized as technology progresses and more device integration becomes available.

In a Nutshell

Technology is changing and consolidating Smart Home requirements on a large scale. As connected devices come to outnumber humans, the concept of a smarter, more convenient home is gaining traction. Home automation has virtually endless applications.
Smart Homes, which blend AI and IoT, appeal to the technologically savvy while cutting energy expenditures and enhancing security. As a result, smart homes enable and safeguard the next level of technological existence.
The Internet of Things (IoT) and artificial intelligence (AI) are here to stay and will dramatically improve Smart Home automation.
 
Reactions: 39 users

Cardpro

Regular
The way I see it is customer acquisition, sales and then revenue and growth take time.
It's not like we have a Theranos box, this is proven tech.
Unfortunately if anyone wants a sharp increase in SP in the next year they are likely to be disappointed.
With an investment horizon of over a year, which really isn't that long, I personally don't care about the SP attacks, and as I posted in the other thread, this is a great buying opportunity. The progress this company has achieved in the last 12-24 months, whilst having an SP just as low is easy money IMO.
IMO, the biggest risk for BrainChip is that the technology might not be adopted by the industry. (Significantly de-risked now IMO)

Having proven technology & being the best solution doesn't necessarily mean it will be adopted by the industry which is the reason why our management was focused on establishing multiple partnerships with industry leaders & joining key ecosystems.

I disagree that sharp increase won't happen in next year, if we land multiple IP agreements this will further validate that our technology will be adopted by the industry and it will be reflected on the SP both in the short term and long term.
 
Last edited:
Reactions: 18 users

TechGirl

Founding Member
Teksun's Machine Learning section on their website, again right up our alley, and they do Natural Language Processing amongst others.


MACHINE LEARNING


We assist you in developing and deploying personalized, data-intensive solutions based on Machine Learning Services, to help you counter business challenges.

Instilling Intelligence


Teksun delivers new-age apps empowered with pattern recognition, artificial intelligence, and mathematical predictability, which collectively provide higher scalability. Our technical developers are experts in optimally applying machine learning to anomaly detection, algorithm design, future forecasting, data modeling, spam filtering, predictive analytics, product recommendations, etc.

Get Your First Consultation for FREE

Our Offerings

The offerings presented here are just a gist of the options and alternatives we have for you. Take a look to see the scope of our services:

  • Deep Learning
  • Predictive Analytics
  • Image Analytics
  • Video Analytics
  • Natural Language Processing



We also provide Neural Network Development and Machine Learning Solutions. Looking for a better start for your project? Partner with our expert consultants to draft out the best ways of undertaking it.

Get Started

It’s an apt time to take off with us!



What makes us unique

What makes us unique is our ability to serve you ceaselessly, with real-time updates at every project phase.

1. We provide Machine Learning Consulting, assisting you all the way from project initiation to deployment.
2. We furnish you with Supervised/Unsupervised ML services on both structured and unstructured data.
3. Our experts employ different algorithms and models to deliver the required service, such as NLP, Decision Trees, etc.
4. The tools and technologies we use are among the best in the market, including MongoDB, Cassandra, and others.
5. Our constantly updated and wide range of AI models gives your business high performance and scalability.
6. Our experts take a personalized approach while delivering the finest Machine Learning Services.




Take a Look at

QA & Project Execution


Hire Developer

Develop with the industry masters!
It's the selection of technologies that carves out a solution's full potential. Our top developers ensure your Machine Learning solutions use the finest tools for your project and budget needs.


Industry we serve

We bring a broad gamut of services along with a versatile approach, which lets us serve a wide range of industries, whether Forensic, Financial, Healthcare, Defence, or any other.
  • Consumer Electronics
  • Wearable Devices
  • Industry 4.0
  • Biotech
  • Home Automation
  • Agritech
  • Security & Surveillance
  • Health Care
  • Drones & Autonomy
  • Automotive



Every project needs a different kind of attention and service. Our highly experienced consultants and technicians arrange tailor-made plans and strategies to manage your varied projects.

Kick-Off Project

Surge on your success journey!
 
Reactions: 35 users

hotty4040

Regular
Great reminder of why.
Do your own research.
Ignore the manipulators of markets.

Logic says that if you want to buy a great company at a low price, so will institutions, but they have the ability and resources to influence the market and profit from lending to short traders.

BrainChip Introduces Second-Generation Akida Platform

Introduces Vision Transformers and Spatial-Temporal Convolution for radically fast, hyper-efficient and secure Edge AIoT products, untethered from the cloud
Laguna Hills, Calif. – March 6, 2023
BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, neuromorphic AI IP, today announced the second generation of its Akida™ platform that drives extremely efficient and intelligent edge devices for the Artificial Intelligence of Things (AIoT) solutions and services market that is expected to be $1T+ by 2030. This hyper-efficient yet powerful neural processing system, architected for embedded Edge AI applications, now adds efficient 8-bit processing to go with advanced capabilities such as time domain convolutions and vision transformer acceleration, for an unprecedented level of performance in sub-watt devices, taking them from perception towards cognition.
The second generation of Akida now includes Temporal Event Based Neural Nets (TENN) spatial-temporal convolutions that supercharge the processing of raw time-continuous streaming data, such as video analytics, target tracking, audio classification, analysis of MRI and CT scans for vital signs prediction, and time series analytics used in forecasting and predictive maintenance. These capabilities are critically needed in industrial, automotive, digital health, smart home and smart city applications. TENNs allow for radically simpler implementations by consuming raw data directly from sensors, which drastically reduces model size and the number of operations performed while maintaining very high accuracy. This can shrink design cycles and dramatically lower the cost of development.
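To picture what "consuming raw data directly from sensors" with a temporal convolution means, here is a minimal, generic sketch (illustrative only, not BrainChip's TENN implementation): each incoming sample is pushed into a short buffer and convolved with a kernel, so an output is produced as the data streams in, with no separate pre-processing stage. With a first-difference kernel, the filter only responds when the signal changes, which is the flavor of event-based processing.

```python
from collections import deque

class StreamingTemporalConv:
    """Causal 1-D convolution applied sample-by-sample to a data stream."""
    def __init__(self, kernel):
        self.kernel = list(kernel)
        # buffer holds the last len(kernel) samples, zero-padded at start
        self.buffer = deque([0.0] * len(kernel), maxlen=len(kernel))

    def step(self, sample):
        """Consume one raw sensor sample, emit one filtered output."""
        self.buffer.append(sample)
        # newest sample pairs with kernel[0], oldest with kernel[-1]
        return sum(k * s for k, s in zip(self.kernel, reversed(self.buffer)))

# first-difference kernel: output is nonzero only when the signal changes
conv = StreamingTemporalConv([1.0, -1.0])
outputs = [conv.step(x) for x in [2.0, 2.0, 5.0, 5.0]]
print(outputs)  # [2.0, 0.0, 3.0, 0.0]
```

Stacking learned kernels like this over time and space is the general idea behind spatial-temporal convolutions; the constant part of the signal costs nothing downstream, which is where the efficiency claim comes from.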
Another addition to the second generation of Akida is Vision Transformer (ViT) acceleration, a leading-edge neural network architecture that has been shown to perform extremely well on various computer vision tasks, such as image classification, object detection, and semantic segmentation. This powerful acceleration, combined with Akida’s ability to process multiple layers simultaneously and its hardware support for skip connections, allows it to self-manage the execution of complex networks like ResNet-50 entirely in the neural processor, without CPU intervention, minimizing system load.
The Akida IP platform has a unique ability to learn on the device for continuous improvement and data-less customization, which improves security and privacy. This, combined with the available efficiency and performance, enables very differentiated solutions that until now have not been possible. These include secure, small-form-factor devices such as hearables and wearables that take raw audio input, and medical devices for monitoring heart and respiratory rates and other vitals that consume only microwatts of power. This scales up to HD-resolution vision solutions delivered through high-value, battery-operated or fanless devices, enabling a wide variety of applications, from surveillance systems to factory management and augmented reality.
“We see an increasing demand for real-time, on-device, intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge. With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few.”
“Advancements in AI require parallel advancements in on-device learning capabilities while simultaneously overcoming the challenges of efficiency, scalability, and latency,” said Richard Wawrzyniak, principal analyst at Semico Research. “BrainChip has demonstrated the ability to create a truly intelligent edge with Akida and moves the needle even more in terms of how Edge AI solutions are developed and deployed. The benefits of on-chip AI from a performance and cost perspective are hard to deny.”
“Our customers wanted us to enable expanded predictive intelligence, target tracking, object detection, scene segmentation, and advanced vision capabilities. This new generation of Akida allows designers and developers to do things that were not possible before in a low-power edge device,” said Sean Hehir, BrainChip CEO. “By inferring and learning from raw sensor data, removing the need for digital signal pre-processing, we take a substantial step toward providing a cloudless Edge AI experience.”
Akida’s software and tooling further simplifies the development and deployment of solutions and services with these features:
  • An efficient runtime engine that autonomously manages model accelerations completely transparent to the developer
  • MetaTF™ software that developers can use with their preferred framework, like TensorFlow/Keras, or development platform, like Edge Impulse, to easily develop, tune, and deploy AI solutions.
  • Supports all types of Convolutional Neural Networks (CNN), Deep Learning Networks (DNN), Vision Transformer Networks (ViT) as well as Spiking Neural Networks (SNNs), future-proofing designs as the models get more advanced.
Akida comes with a Models Zoo and a burgeoning ecosystem of software, tools, and model vendors, as well as IP, SoC, foundry and system integrator partners. BrainChip is engaged with early adopters on the second-generation IP platform. General availability will follow in Q3 2023.
See what they’re saying:
“At Prophesee, we are driven by the pursuit of groundbreaking innovation addressing event-based vision solutions. Combining our highly efficient neuromorphic-enabled Metavision sensing approach with Brainchip’s Akida neuromorphic processor holds great potential for developers of high-performance, low-power Edge AI applications. We value our partnership with BrainChip and look forward to getting started with their 2nd generation Akida platform, supporting vision transformers and TENNs,” said Luca Verre, Co-Founder and CEO at Prophesee.
Luca Verre, Co-Founder and CEO, Prophesee
“BrainChip and its unique digital neuromorphic IP have been part of IFS’ Accelerator IP Alliance ecosystem since 2022,” said Suk Lee, Vice President of Design Ecosystem Development at IFS. “We are keen to see how the capabilities in Akida’s latest generation offerings enable more compelling AI use cases at the edge.”
Suk Lee, VP Design Ecosystem Development, Intel Foundry Services
“Edge Impulse is thrilled to collaborate with BrainChip and harness their groundbreaking neuromorphic technology. Akida’s 2nd generation platform adds TENNs and Vision Transformers to a strong neuromorphic foundation. That’s going to accelerate the demand for intelligent solutions. Our growing partnership is a testament to the immense potential of combining Edge Impulse’s advanced machine learning capabilities with BrainChip’s innovative approach to computing. Together, we’re forging a path toward a more intelligent and efficient future,” said Zach Shelby, Co-Founder and CEO at Edge Impulse.
Zach Shelby, Co-Founder and CEO, Edge Impulse
“BrainChip has some exciting upcoming news and developments underway,” said Daniel Mandell, Director at VDC Research. “Their 2nd generation Akida platform provides direct support for the intelligence chip market, which is exploding. IoT market opportunities are driving rapid change in our global technology ecosystem, and BrainChip will help us get there.”
Daniel Mandell, Director, VDC Research
“Integration of AI Accelerators, such as BrainChip’s Akida technology, has application for high-performance RF, including spectrum monitoring, low-latency links, distributed networking, AESA radar, and 5G base stations,” said John Shanton, CEO of Ipsolon Research, a leader in small form factor, low power SDR technology.
John Shanton, CEO, Ipsolon Research
“Through our collaboration with BrainChip, we are enabling the combination of SiFive’s RISC-V processor IP portfolio and BrainChip’s 2nd generation Akida neuromorphic IP to provide a power-efficient, high capability solution for AI processing on the Edge,” said Phil Dworsky, Global Head of Strategic Alliances at SiFive. “Deeply embedded applications can benefit from the combination of compact SiFive Essential™ processors with BrainChip’s Akida-E efficient processors; more complex applications including object detection, robotics, and more can take advantage of SiFive X280 Intelligence™ AI Dataflow Processors tightly integrated with BrainChip’s Akida-S or Akida-P neural processors.”
Phil Dworsky, Global Head of Strategic Alliances, SiFive
“Ai Labs is excited about the introduction of BrainChip’s 2nd generation Akida neuromorphic IP, which will support vision transformers and TENNs. This will enable high-end vision and multi-sensory capability devices to scale rapidly. Together, Ai Labs and BrainChip will support our customers’ needs to address complex problems,” said Bhasker Rao, Founder of Ai Labs. “Improving development and deployment for industries such as manufacturing, oil and gas, power generation, and water treatment, preventing costly failures and reducing machine downtime.”
Bhasker Rao, Founder, Ai Labs
“We see an increasing demand for real-time, on-device, intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge. With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in a wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few.”
Roger Wendelken, Senior Vice President IoT and Infrastructure Business Unit, Renesas
“We see a growing number of predictive industrial (including HVAC, motor control) or automotive (including fleet maintenance), building automation, remote digital health equipment and other AIoT applications use complex models with minimal impact to product BOM and need faster real-time performance at the Edge” said Nalin Balan, Head of Business Development at Reality ai, a Renesas company. “BrainChip’s ability to efficiently handle streaming high frequency signal data, vision, and other advanced models at the edge can radically improve scale and timely delivery of intelligent services.”
Nalin Balan, Head of Business Development, Reality.ai, a Renesas Company
“Advancements in AI require parallel advancements in on-device learning capabilities while simultaneously overcoming the challenges of efficiency, scalability, and latency,” said Richard Wawrzyniak, Principal Analyst at Semico Research. “BrainChip has demonstrated the ability to create a truly intelligent edge with Akida and moves the needle even more, in terms of how Edge AI solutions are developed and deployed. The benefits of on-chip AI from a performance and cost perspective are hard to deny.”
Richard Wawrzyniak, Principal Analyst, Semico Research
“BrainChip’s cutting-edge neuromorphic technology is paving the way for the future of artificial intelligence, and Drexel University recognizes its immense potential to revolutionize numerous industries. We have experienced that neuromorphic compute is easy to use and addresses real-world applications today. We are proud to partner with BrainChip in advancing their groundbreaking technology, including TENNs and how they handle time series data, which is the basis for addressing many complex problems and unlocking the technology’s full potential for the betterment of society,” said Anup Das, Associate Professor, and Nagarajan Kandasamy, Interim Department Head of Electrical and Computer Engineering, Drexel University.
Anup Das, Associate Professor, Drexel University
“Our customers wanted us to enable expanded predictive intelligence, target tracking, object detection, scene segmentation, and advanced vision capabilities. This new generation of Akida allows designers and developers to do things that were not possible before in a low-power edge device,” said Sean Hehir, BrainChip CEO. “By inferring and learning from raw sensor data, removing the need for digital signal pre-processing, we take a substantial step toward providing a cloudless Edge AI experience.”
Sean Hehir, CEO, BrainChip

My opinion only DYOR
FF

AKIDA BALLISTA
Thanks FF, for putting some real life perspective back into the conversation, ( SO MUCH HAPPENING )

AKIDA BALLISTA



hotty...
 
Reactions: 9 users

IloveLamp

Top 20
Reactions: 36 users

TECH

Regular

BrainChip
@BrainChip_inc


In this Digital CxO Leadership Insights series video, Mike Vizard talks with Nandan Nayampally, CMO BrainChip, about how a new class of processors will advance artificial intelligence (AI) at the edge https://digitalcxo.com/video/leadership-insights-ai-at-the-edge/…
@DigCxO

@mvizard

Thanks Sirod69, that was an excellent interview and once again, Nandan spoke very well, I personally think he could have mentioned the word
Sparsity, in regards to low power, running extremely cool, and learning on the fly, as in continually learning to become even more efficient on its own....but very happy having Nandan on our team.

Have a nice evening....Chris (Tech) ;)
 
Reactions: 22 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Rise and shine



Thanks for the wake-up call @Rocket577 but unfortunately I slept like a log through the Cerence Conference and they haven't put a webcast or transcript of it up on their website yet. But don't worry, I'll be keeping my eyes peeled for it.

While I'm at it, I thought I might use this opportunity to remind everyone why I'm completely obsessed with Cerence and why I'm 99.999999999999999999999999999999999999999999999999999% convinced that we'll be incorporated in the "Cerence Immersive Companion" due in FY23/24. Aside from the zillion-odd other posts I've managed to devote to Cerence, of which this one (#43,639) is a pretty good example, here is yet another post to add to the pile.

For some context, Nils Shanz is the Chief Product Officer at Cerence. Prior to joining Cerence he was at Mercedes, and it was Nils who was responsible for user interaction and voice control on the Vision EQXX (the system that incorporated BrainChip’s technology to make wake word detection 5-10 times faster than conventional voice control systems).

Check out this LinkedIn post from Nils when he was at Mercedes. It says "this is a demo to show the performance of our voice assistant in the #EQS: no Wake-up word needed to start a conversation & plenty of use-cases in less than 45 seconds". You can click the link below to watch the demo. But you can also see that there is a comment from Holger Quast (Product Strategy and Innovation at Cerence).

The other is a screen-shot of a testimonial from Daimler on Cerence's website.

As I say, just add this post to the list until we get proof irrefutable, which won't be too far away IMO.

SCerence.png


MER.png






 
Reactions: 45 users

Steve10

Regular
View attachment 31526

I assume the 22 is a typo and meant to be 2023

STMicroelectronics & Lacroix Group were on my radar the other day.

Akida could be in ovens monitoring the fan or in people flow detectors.

Case study 1: AI solution for people counting sensor​

Making buildings smarter is one of the big challenges for today's companies as they seek to improve efficiency. The people-counting sensor developed by Schneider Electric in partnership with STMicroelectronics counts the number of people present. It also detects whether they are crossing a virtual line, in either direction, using a large field of view and a small-resolution thermal sensor.

This prototype can count a restaurant's attendance in real time and with a high level of accuracy, while running on a standard STM32 microcontroller. This is achieved thanks to the artificial intelligence algorithm embedded on the STM32 and the use of thermal infrared technology.
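The virtual-line logic at the heart of such a counter is easy to sketch. The following is a generic illustration (not Schneider/ST's actual algorithm): given per-person centroid tracks extracted from the thermal frames, a crossing is counted whenever a track's position passes the line between two consecutive frames.

```python
def count_crossings(tracks, line_x):
    """Count directional crossings of a vertical virtual line.

    tracks: dict mapping person id -> list of x-positions, one per frame.
    Returns (entries, exits): left-to-right and right-to-left crossings.
    """
    entries = exits = 0
    for positions in tracks.values():
        for prev, cur in zip(positions, positions[1:]):
            if prev < line_x <= cur:      # crossed left -> right
                entries += 1
            elif cur <= line_x < prev:    # crossed right -> left
                exits += 1
    return entries, exits

tracks = {
    1: [0.2, 0.4, 0.6, 0.8],   # walks left to right across the line at 0.5
    2: [0.9, 0.6, 0.3],        # walks right to left
    3: [0.1, 0.2, 0.3],        # never crosses
}
print(count_crossings(tracks, line_x=0.5))  # (1, 1)
```

The hard part on a real device is producing the tracks from low-resolution thermal frames; the counting itself is this simple comparison, which is why it fits on a microcontroller.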

1678234966505.jpeg

Case study 2: Low-power predictive maintenance + AI at the Edge​


Lacroix Group and its ecosystem are building the future of industrial electronics through the design and production of industrial embedded systems and connected objects. At the heart of its smart industry strategy, Lacroix Electronics is now experimenting with predictive maintenance on its own assembly lines with the help of STMicroelectronics and its AI ecosystem.

The first trial of the condition-monitoring technology is being done on the reflow oven of an automated line that solders components onto PCBs.

1678234335570.png



Artificial Intelligence @ ST​



STM32Cube function pack for high performance STM32 with artificial intelligence (AI) application for Computer Vision.
1678234417200.png




Artificial Intelligence (AI) condition monitoring function pack for STM32Cube.
1678234439092.png


STM32Cube function pack for ultra-low power IoT node with artificial intelligence (AI) application based on audio and motion sensing.
1678234461918.png





Give your product an Edge​

Simple, fast, optimized. Our extensive solutions for embedded AI.

A set of tools to enable Edge AI on STM32 MCUs, MPUs and smart sensors

Embedded AI can improve many solutions in a simple, fast, and cost-effective way.
Predictive maintenance, IoT products, smart buildings, asset tracking, people counting and more.
Learn how these applications can become smarter by making data meaningful with machine learning!





 

  • Like
  • Fire
  • Love
Reactions: 22 users

stuart888

Regular
The mighty chip is getting some much-deserved media attention.




BrainChip Unveils Its Second-Generation Akida Platform, Now Boasting Vision Transformer Acceleration​

Brainchip's Akida 2.0 gains some impressive new features, along with a three-tier launch strategy scaling up to 128 nodes and 50 TOPS.​







BrainChip has announced the launch of its second-generation Akida processor family, designed for high-efficiency artificial intelligence at the edge, adding Temporal Event-Based Neural Net (TENN) support and optional vision transformer acceleration on top of the company's existing spiking neural network capabilities.
"Our customers wanted us to enable expanded predictive intelligence, target tracking, object detection, scene segmentation, and advanced vision capabilities. This new generation of Akida allows designers and developers to do things that were not possible before in a low-power edge device," claims BrainChip's chief executive officer Sean Hehir of the next-generation design. "By inferring and learning from raw sensor data, removing the need for digital signal pre-processing, we take a substantial step toward providing a cloudless Edge AI experience."
BrainChip has announced Akida 2.0, its second-generation edge-AI accelerator — now offering TENN and vision transformer support. (📷: BrainChip)


BrainChip began offering development kits for its first-generation Akida AKD1000 neural network processors in October 2021, building two kits around the user's choice of a Shuttle x86 PC or a Raspberry Pi. Ease of use took a leap earlier this year when the company announced the fruit of its partnership with Edge Impulse to bring Akida support to the latter's machine learning platform — offering what Edge Impulse co-founder and chief executive officer Zach Shelby described as a "powerful and easy-to-use solution for building and deploying machine learning models on the edge."
The promise of the Akida platform, which was developed based on the operation of the human brain, is high performance at a far greater efficiency than its rivals — when, at least, the problem to be solved can be defined as a spiking neural network. It's this efficiency which has seen BrainChip primarily position its Akida hardware for use at the edge, accelerating on-device machine learning in power-sensitive applications.
The company has confirmed plans to launch Akida 2.0 in three tiers, topping out at the Akida-P family with up to 50 TOPS of compute. (📷: BrainChip)


The second-generation Akida platform brings with it high-efficiency eight-bit processing and support for Temporal Event-Based Neural Nets (TENNs), giving it the ability to consume raw real-time streaming data from sensors, including video sensors. This, the company claims, provides "radically simpler implementations" for tasks including video analytics, target tracking, audio classification, and even vital sign prediction in medical imaging analysis.
BrainChip's Akida refresh also brings with it support for accelerating vision transformers, as an optional component that can be discarded if not required, primarily used for image classification, object detection, and semantic segmentation. Combined with Akida's ability to process multiple layers at once, the company claims the new parts will allow for complete self-management and execution of even relatively complex networks like ResNet-50 — without the host device's processor having to get involved at all.

The new features come alongside BrainChip's earlier promises of dramatic efficiency gains through the use of spiking neural networks. (📹: BrainChip)
The company has confirmed that it will be licensing the Akida IP in three product classes: Akida-E will focus on high energy efficiency with a view to being embedded alongside, or as close as possible, to sensors and offering up to 200 giga-operations per second (GOPS) across one to four nodes; Akida-S will be for integration into microcontroller units and systems-on-chip (SoCs), hitting up to 1 tera-operations per second (TOPS) across two to eight nodes; and Akida-P will target the mid- to high-end, and will be the only tier to offer the optional vision transformer acceleration, scaling between eight and 128 nodes with a total performance of up to 50 TOPS.
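For a rough sense of scale, the three stated tiers can be lined up side by side. This is just a sketch using the peak figures quoted above (1 TOPS = 1,000 GOPS); sustained performance will differ.

```python
# Akida 2.0 IP tiers as quoted in the announcement (peak figures only).
tiers = {
    "Akida-E": {"nodes": (1, 4),   "peak_gops": 200},     # sensor-adjacent, max efficiency
    "Akida-S": {"nodes": (2, 8),   "peak_gops": 1_000},   # MCU/SoC integration
    "Akida-P": {"nodes": (8, 128), "peak_gops": 50_000},  # only tier with optional ViT acceleration
}

for name, t in tiers.items():
    lo, hi = t["nodes"]
    print(f"{name}: {lo}-{hi} nodes, up to {t['peak_gops'] / 1_000:g} TOPS")
```

On those quoted numbers, a top Akida-P configuration offers 250x the peak throughput of the smallest Akida-E configuration.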
While the part launches to unnamed "early adopters" today, BrainChip isn't quite ready to start selling to the public — promising instead that second-generation Akida processors will be available in the third quarter of 2023, with as-yet-unannounced pricing. More information is available on the BrainChip website.
Yeah-Yeah to Brainchip employees making this happen! 🍹

Akida speaks AXI 4.0, interesting. There could be clues in the interface data transmission. Standardization is part of the complexity/slowness to implement, so this is good news.

The whole Edge IoT is going to explode. I assume AXI 4.0 is the key to the device connecting kingdom, easing adoption. Just trying to learn.





https://www.xilinx.com/products/intellectual-property/axi.html

 
  • Like
  • Fire
  • Love
Reactions: 23 users

stuart888

Regular
Wow.

It's a must-watch video, very informative. Fantastic to have Nandan as CMO.

It's great to be a shareholder 🏖
Only when needed is the key! Fantastic video, thanks a bunch Learning.

Energy efficient SNN spiking smarts!

 
  • Like
  • Fire
Reactions: 21 users

Diogenese

Top 20
Will Renesas do an Oliver Twist?

“We see an increasing demand for real-time, on-device intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge. With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in a wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few.”

... even better than DRP-AI.
 
  • Like
  • Love
  • Fire
Reactions: 66 users

Evermont

Stealth Mode
Will Renesas do an Oliver Twist?

“We see an increasing demand for real-time, on-device intelligence in AI applications powered by our MCUs and the need to make sensors smarter for industrial and IoT devices,” said Roger Wendelken, Senior Vice President in Renesas’ IoT and Infrastructure Business Unit. “We licensed Akida neural processors because of their unique neuromorphic approach to bring hyper-efficient acceleration for today’s mainstream AI models at the edge. With the addition of advanced temporal convolution and vision transformers, we can see how low-power MCUs can revolutionize vision, perception, and predictive applications in a wide variety of markets like industrial and consumer IoT and personalized healthcare, just to name a few.”

... even better than DRP-AI.

Wouldn't that be a nice message to the market.
 
  • Like
  • Fire
  • Love
Reactions: 25 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Interesting write up


I don't think I've ever seen this map before.


Yes @MadMayHam, and I thought it was very interesting that SiFive specify that they want their X280 Intelligence Series to be tightly integrated with either Akida-S or Akida-P neural processors.







 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 33 users

Diogenese

Top 20
Thanks for the wake-up call @Rocket577 but unfortunately I slept like a log through the Cerence Conference and they haven't put a webcast or transcript of it up on their website yet. But don't worry, I'll be keeping my eyes peeled for it.

While I'm at it, I thought I might use this opportunity to remind everyone why I'm completely obsessed with Cerence and why I'm 99.999999999999999999999999999999999999999999999999999% convinced that we'll be incorporated in the "Cerence Immersive Companion" due in FY23/24. Aside from the other zillion odd posts I've managed to devote to Cerence, of which this one is a pretty good example #43,639, here is yet another post to add to the pile.

For some context, Nils Schanz is the Chief Product Officer at Cerence. But prior to joining Cerence he was at Mercedes. And it was Nils who was responsible for user interaction and voice control on the Vision EQXX voice control system (the one that incorporated BrainChip’s technology to make wake word detection 5-10 times faster than conventional voice control systems).

Check out this LinkedIn post from Nils when he was at Mercedes. It says "this is a demo to show the performance of our voice assistant in the #EQS: no Wake-up word needed to start a conversation & plenty of use-cases in less than 45 seconds". You can click the link below to watch the demo. But you can also see that there is a comment from Holger Quast (Product Strategy and Innovation at Cerence).

The other is a screenshot of a testimonial from Daimler on Cerence's website.

As I say, just add this post to the list until we get proof irrefutable, which won't be too far away IMO.







Hi @Bravo ,

Here are a couple of Cerence patent applications:

US2022415318A1 VOICE ASSISTANT ACTIVATION SYSTEM WITH CONTEXT DETERMINATION BASED ON MULTIMODAL DATA




A vehicle system for classifying spoken utterance within a vehicle cabin as one of system-directed and non-system directed may include at least one microphone to detect at least one acoustic utterance from at least one occupant of the vehicle, at least one camera to detect occupant data indicative of occupant behavior within the vehicle corresponding to the acoustic utterance, and a processor programmed to receive the acoustic utterance, receive the occupant data, determine whether the occupant data is indicative of a vehicle feature, classify the acoustic utterance as a system-directed utterance in response to the occupant data being indicative of a vehicle feature, and process the acoustic utterance.



WO2020142717A1 METHODS AND SYSTEMS FOR INCREASING AUTONOMOUS VEHICLE SAFETY AND FLEXIBILITY USING VOICE INTERACTION




The specifications seem oblivious of SNNs.
 
  • Like
  • Sad
  • Fire
Reactions: 10 users
Has somebody already posted the NVISO newsletter? Just noticed an email that arrived about 10 hrs ago.
Edit... Just seen @Tothemoon24's post
 
Last edited:
  • Like
Reactions: 3 users
Some bigger buying in the market has just started again. Buyers now double the sellers! Some line wiping just occurred at $0.55.

The sneaky mass accumulation is continuing as they know BRN is going to fly and these are giveaway share prices.
 
  • Like
  • Fire
  • Love
Reactions: 33 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
SAY WHAT?????


Tenstorrent were talking about SNNs in an article dated 12 May 2020!!!

Remember that we're in like Flynn with the SiFive Intelligence X280, seeing that they have just specified that they want their X280 Intelligence Series to be tightly integrated with either Akida-S or Akida-P neural processors. And Tenstorrent have licensed the SiFive Intelligence X280 as a platform for its Tensix NPU.









Tenstorrent Is Changing the Way We Think About AI Chips​


GPUs and CPUs are reaching their limits as far as AI is concerned. That’s why Tenstorrent is creating something different.
Chris Wiltz | May 12, 2020



GPUs and CPUs are not going to be enough to ensure a stable future for artificial intelligence. “GPUs are essentially at the end of their evolutionary curve,” Ljubisa Bajic, CEO of AI chip startup Tenstorrent told Design News. “[GPUs] have done a great job; they’ve pushed the field to the point where it is now. But in order to make any kind of order-of-magnitude jumps, GPUs are going to have to go.”
Tenstorrent's Grayskull processor is capable of operating at up to 368 TOPS with an architecture much different than any CPU or GPU (Image source: Tenstorrent)

Bajic knows quite a bit about GPU technology. He spent some time at Nvidia, the house that GPUs built, working as senior architect. He’s also spent a few years working as an IC designer and architect at AMD. While he doesn’t think companies like Nvidia are going away any time soon, he thinks it’s only a matter of time before the company releases an AI chip product that is not a GPU.

But an entire ecosystem of AI chip startups is already heading in that direction. Engineers and developers are looking at new, novel chip architectures capable of handling the unique demands of AI and its related technologies – both in data centers and the edge.
Bajic is the founder of one such company – Toronto-based Tenstorrent, which was founded in 2016 and emerged from stealth earlier this year. Tenstorrent’s goal is both simple and hugely ambitious – creating chip hardware for AI capable of delivering the best all-around performance in both the data center and the edge. The company has created its own proprietary processor core called the Tensix, which contains a high-utilization packet processor, a programmable SIMD unit, and a dense math computational block, along with five single-issue RISC cores. By combining Tensix cores into an array using a network on a chip (NoC), Tenstorrent says it can create high-powered chips that can handle both inference and training and scale from small embedded devices all the way up to large data center deployments.

The company’s first product Grayskull (yes, that is a He-Man reference) is a processor targeted at inference tasks. According to company specs, Grayskull is capable of operating at up to 368 tera operations per second (TOPS). To put that into perspective as far as what Grayskull could be capable of, consider Qualcomm’s AI Engine used in its latest SoCs such as the Snapdragon 865. The Qualcomm engine offers up to 15 TOPS of performance for various mobile applications. A single Grayskull processor is capable of handling the volume of calculations of about two dozen of the chips found in the highest-end smartphones on the market today.
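The "two dozen" comparison is just the ratio of the quoted peak figures. Note that peak TOPS is a rough proxy only; it says nothing about sustained or per-watt performance.

```python
grayskull_peak_tops = 368   # Tenstorrent Grayskull, per the article
snapdragon_peak_tops = 15   # Qualcomm Snapdragon 865 AI Engine, per the article

ratio = grayskull_peak_tops / snapdragon_peak_tops
print(f"~{ratio:.1f}x")  # roughly two dozen phone-class AI engines
```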
The Grayskull PCIe card (Image source: Tenstorrent)

Nature Versus Neural
If you want to design a chip that mimics cognition, then taking cues from the human brain is the obvious way to go. Whereas AI draws a clear functional distinction between training (learning a task) and inference (implementing or acting on what’s been learned), the human brain does no such thing.
“We figured if we're going after imitating Mother Nature that we should really do a good job of it and not miss some key features,” Bajic said. “If you look at the natural world, there’s the same architecture between small things and big things. They can all learn; it's not inference or training. And they all achieve extreme efficiency by relying on natural sparsity, so only a small percentage of the neurons in the brain are doing anything at any given time and which ones are working depends on what you're doing.”
Bajic said he and his team wanted to build a computer that would have all these features and also not compromise on any of them. “In the world of artificial neural networks today, there are two camps that have popped up,” he said. “One is CPUs and GPUs and all the startup hardware that's coming up. They tend to be doing dense matrix math on hardware that's built for it, like single instruction, multiple data [SIMD] machines, and if they're scaled out they tend to talk over Ethernet. On the flip side you've got the spiking artificial neural network, which is a lot less popular and has had a lot less success in broad applications.”

Spiking neural networks (SNNs) more closely mimic the functions of biological neurons, which send information via spikes in electrical activity. “Here people try to simulate natural neurons almost directly by writing out the differential equations that describe their operation and then implementing them as closely as we can in hardware,” Bajic explained. “So to an engineer this comes down to basically having many scalar processor cores connected to a scalar network.”
This is very inefficient from a hardware standpoint. But Bajic said that SNNs have an efficiency that biological neurons have in that only a certain percentage of neurons are activated depending on what the neural net is doing – something that’s highly desirable in terms of power consumption in particular.
“Spiking neural nets have this conditional efficiency, but no hardware efficiency. The other end of the spectrum has both. We wanted to build a machine that has both,” Bajic said. “We wanted to pick a place in the spectrum where we could get the best of the both worlds.”
Behind the Power of Grayskull
With that in mind there are four overall goals Tenstorrent is shooting for in its chip development – hardware efficiency, conditional efficiency, storage efficiency, and a high degree of scalability (exceeding 100,000 chips).
“So how did we do this? We implemented a machine that can run fine grain conditional execution by factoring the computation from huge groups of numbers to computations of small groups, so 16 by 4 or 16 by 16 groups to be precise,” Bajic said.

“We enable control flow on these groups with no performance penalty. So essentially we can run small matrices and we can put “if” statements around them and decide whether to run them at all. And if we’re going to run them we can decide whether to run them in reduced precision or full precision or anywhere in between.”
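The idea Bajic describes, factoring a big computation into small tiles and putting an "if" around each one, can be sketched in plain NumPy. This is purely illustrative and not Tenstorrent's actual implementation; the tile size, the skip test, and the precision switch are all assumptions for the sake of the example.

```python
import numpy as np

def blocked_conditional_matmul(A, B, block=16, skip_threshold=1e-3, low_precision=False):
    """Illustrative sketch of fine-grained conditional execution: the matrix
    product is factored into small (block x block) tiles, and an 'if' around
    each tile decides whether to compute it at all, and at what precision.
    NOT Tenstorrent's actual scheme, just the idea described in the article."""
    n = A.shape[0]
    C = np.zeros((n, n), dtype=np.float32)
    for i in range(0, n, block):
        for j in range(0, n, block):
            for k in range(0, n, block):
                a = A[i:i+block, k:k+block]
                b = B[k:k+block, j:j+block]
                # Conditional execution: skip tiles that contribute ~nothing.
                if np.max(np.abs(a)) < skip_threshold or np.max(np.abs(b)) < skip_threshold:
                    continue
                # Optionally run this tile in reduced precision.
                if low_precision:
                    a, b = a.astype(np.float16), b.astype(np.float16)
                C[i:i+block, j:j+block] += (a @ b).astype(np.float32)
    return C
```

Skipping near-zero tiles is where the "conditional efficiency" of sparse activations would come from, and running a tile in float16 instead of float32 mirrors the "reduced precision or full precision or anywhere in between" choice.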
He said this also means rethinking the software stack. “The problem is that the software stacks that a lot of the other companies in the space have brought out assume that there's a fixed set of dimensions and a fixed set of work to run. So in order to enable adaptation at runtime normally hardware needs to be supportive of it and the full software stack as well.
“So many decisions that are currently made at compile time for us are moved into runtime so that we can accept exactly the right sized inputs. That we know exactly how big stuff is after we've chosen to eliminate some things at runtime so there's a fairly large software challenge to keep up with what the hardware enables.”
(Image source: Tenstorrent)

Creating an architecture that can scale to over 100,000 nodes means operating at a scale where you can’t have a shared memory space. “You basically need a bunch of processors with private memory,” Bajic said. “Cache coherency is another thing that's impossible to scale across more than a couple hundred nodes, so that had to go as well.”
Bajic explained that each of Tenstorrent’s Tensix cores is really a grid of five single-issue RISC cores that are networked together. Each Tensix is capable of roughly 3 TOPS of compute.
“All of our processors can pretty much be viewed as packet processors,” Bajic said. “The way that works on a single processor level is that you have a core and every one of them has a megabyte of SRAM. Packets arrive into buffers in this SRAM, which triggers software to fetch them and run a hardware unpacketization engine – this removes all the packet framing, interprets what it means, and decompresses the packet, so the data stays compressed at all times, except when it’s being computed on.

“It essentially recreates that little tensor that made the packet. We run a bunch of computations on those tensors and eventually we're ready to send them onward. What happens then is they get repacketized, recompressed, deposited into SRAM, and then from there our network functionality picks them up and forwards them to all the other cores that they need to go to under the directional compiler.”
While Tenstorrent is rolling out Grayskull it is actively developing its second Tensix core-based processor, dubbed Wormhole. Tenstorrent is targeting a Fall 2020 release for Wormhole and says it will focus even more on scale. “It’s essentially built around the same architecture [as Grayskull], but it has a lot of Ethernet links on it for scaling out,” Bajic said. “It's not going to be a PCI card chip – it’s the same architecture, but for big systems.”
Searching for the iPhone Moment
There are a lot of lofty goals for AI on the horizon. Researchers and major companies alike are hoping new chip hardware will help along the path toward big projects like Level 5 autonomous cars all the way to some idea of general artificial intelligence.
Bajic agrees with these ideas, but he also believes that there’s a simple matter of cost savings that makes chips like the ones being developed by his company an attractive commodity.
“The metric that everybody cares about is this concept of total cost of ownership (TCO),” he said. “If you think of companies like Google, Microsoft, and Amazon, these are big organizations that run an inordinate amount of computing machinery and spend a lot of money doing it. Essentially they calculate the cost of everything to do with running a computer system over some set of years including how much the machine costs to begin with – the upfront cost, how much it costs to pipe through wires and cooling so that you can live with its power consumption, and the cost of how much the power itself costs. They add all of that together and get this TCO metric.
“For them minimizing that metric is important because they spend billions of dollars on this. Machine learning and AI has become a very sizable percentage of all their compute activity and it’s trending towards becoming half of all that activity in the next couple of years. So if your hardware can perform, say, 10 times better then it's a very meaningful financial indicator. If you can convince the market that you've got an order of magnitude in TCO advantage that is going to persist for a few years, it's a super powerful story. It's a completely valid premise to build a business around, but it's kind of an optimization thing as opposed to something super exciting.”
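Bajic's TCO argument is easy to sanity-check with a toy model. All of the numbers below are made up for illustration, not taken from the article, and the 40% cooling overhead is an assumption.

```python
def tco(upfront_cost, annual_kwh, price_per_kwh, years, cooling_overhead=0.4):
    """Toy total cost of ownership: purchase price plus energy and cooling
    over the service life. The 40% cooling overhead is an assumption."""
    energy_cost = annual_kwh * price_per_kwh * years
    return upfront_cost + energy_cost * (1 + cooling_overhead)

# A deployment that costs twice as much up front but draws a tenth of the
# power can still win comfortably once energy dominates the bill.
baseline  = tco(upfront_cost=10_000, annual_kwh=87_600, price_per_kwh=0.10, years=5)
efficient = tco(upfront_cost=20_000, annual_kwh=8_760,  price_per_kwh=0.10, years=5)
print(f"baseline: ${baseline:,.0f}  efficient: ${efficient:,.0f}")
```

This is the sense in which an order-of-magnitude efficiency advantage becomes a financial story rather than just a benchmark number.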
For Bajic those more exciting areas come in the form of large-scale AI projects like using machine learning to track diseases and discover vaccines and medications, as well as in emerging fields such as emotional AI and affective computing. “Imagine if you had a device on your wrist that could interpret all of your mannerisms and gestures. As you’re sitting there watching a movie it could tell if you’re bored or disgusted and change the channel. Or it could automatically order food if you appear to be hungry – something pretty intelligent that can also be situationally aware,” he said.
“The key engine that enables this level of awareness is an AI, but at this point these solutions are too power hungry and too big to put on your wrist or to put anywhere that can follow you. By providing an architecture that will give an order-of-magnitude boost you can start unlocking whole new technologies and creating things that will have an impact on the level of the first iPhone release.”

 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 36 users