BRN Discussion Ongoing

STM32 AI MCU for washing machine applications.



Smart, low cost solution enabling AI for home appliances with an MCU.

The MCU market is huge, with approx. 30B MCUs shipping every year.

Global washing machine and dryer unit shipments are forecast to reach almost 170 million units by 2025. From 2012 to 2021 the market experienced continuous growth.

170M units per year x 30c royalty = $51M pa revenue for 100% market share.

If 10% of 30B MCUs are enabled with AI = 3B pa x 30c royalty = $900M pa revenue for 100% market share.
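For anyone who wants to play with the assumptions, here is a minimal sketch of the back-of-envelope maths above in Python. The 30c royalty, 170M washer/dryer units and 10% AI penetration are simply the figures assumed in this post, not company guidance.

```python
# Back-of-envelope royalty revenue using the figures assumed above.
royalty_per_unit = 0.30        # USD, assumed 30c royalty per AI-enabled MCU

washer_dryer_units = 170e6     # forecast annual washer/dryer shipments by 2025
print(f"Washer/dryer TAM: ${washer_dryer_units * royalty_per_unit / 1e6:.0f}M pa")  # ~$51M

total_mcus = 30e9              # approx. MCUs shipped per year
ai_share = 0.10                # assumption: 10% of MCUs become AI-enabled
print(f"10% of MCU market: ${total_mcus * ai_share * royalty_per_unit / 1e6:.0f}M pa")  # ~$900M
```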


Thanks for all the research you post @Steve10 . It’s very insightful and you often offer perspective with numbers to compare as well. Much appreciated!

I recall, I think it was the start of 2022 PVDM being asked to predict 10 products/technologies for 2022 and washing machines was on the list.

Cheers

:)
 
  • Like
  • Fire
  • Love
Reactions: 30 users

dippY22

Regular
Happy St. Patrick's Day greetings from the USA. Given the crummy market action overall, BRCHF should be considered to have had a green day, even if it was meh in reality.
So, to celebrate I'll have whatever these guys are drinking.....

 
  • Like
  • Haha
Reactions: 11 users

I’m feelin it ……

 
  • Like
  • Fire
  • Love
Reactions: 20 users
Perpetuation in action

[screenshots attached]
 
  • Like
  • Fire
  • Love
Reactions: 77 users

Flenton

Regular
D.I.L.L.I.G.A.F ...................... up to speed on this one @Rocket577 :)

AKIDA BALLISTA
Saw that guy live about a month ago. Laughed the whole night through.
 
  • Like
Reactions: 3 users
Last edited:
  • Haha
  • Like
Reactions: 7 users

Boab

I wish I could paint like Vincent
  • Like
  • Love
  • Fire
Reactions: 22 users

zeeb0t

Administrator
Staff member
Hello everyone, sorry to interrupt the conversation. I was just wondering how things have been going since Dreddb0t was introduced. I've noticed that it has dealt with all reported posts since inception.

Also, please keep in mind that if you come across any content that you think violates the rules, you can click on the report button and Dreddb0t will make a decision within a minute, regardless of the time or day.
 
  • Like
  • Love
  • Fire
Reactions: 57 users

Diogenese

Top 20
Hello everyone, sorry to interrupt the conversation. I was just wondering how things have been going since Dreddb0t was introduced. I've noticed that it has dealt with all reported posts since inception.

Also, please keep in mind that if you come across any content that you think violates the rules, you can click on the report button and Dreddb0t will make a decision within a minute, regardless of the time or day.
Like cricket umpires, they are doing a good job if no one notices them.
 
  • Like
  • Haha
  • Love
Reactions: 22 users
Come on @thelittleshort, tell us what Michael said???

Tell us please, the suspense is killing me. 😂🤣😂

Hi @Fact Finder - his comment was simply tagging in Blake Eastman from Nonverbal Group, which in itself is interesting in terms of the founder of this particular company being tagged into a BrainChip post.

More interesting is the inclusion of Kyongsik Yun on the Nonverbal Group website.

Kyongsik specialises in machine learning at NASA Jet Propulsion Laboratory.

Kyongsik Yun is a technologist at the Jet Propulsion Laboratory, California Institute of Technology, and a senior member of the American Institute of Aeronautics and Astronautics (AIAA).

His research focuses on building brain-inspired technologies and systems, including deep learning computer vision, natural language processing, brain-computer interfaces, and noninvasive remote neuromodulation.


What does it all mean? What is the link? I have no idea. The website is not phone friendly; maybe it is clearer viewed on a PC?


[screenshots of the Nonverbal Group website attached]
 
  • Like
  • Fire
Reactions: 40 users

HopalongPetrovski

I'm Spartacus!
Hello everyone, sorry to interrupt the conversation. I was just wondering how things have been going since Dreddb0t was introduced. I've noticed that it has dealt with all reported posts since inception.

Also, please keep in mind that if you come across any content that you think violates the rules, you can click on the report button and Dreddb0t will make a decision within a minute, regardless of the time or day.
All good Z.
Can you see if he can do anything for the share price?
Maybe smack down some of the shortee's? 🤣
 
  • Haha
  • Like
  • Fire
Reactions: 21 users
Hi @Fact Finder - his comment was simply tagging in Blake Eastman from Nonverbal Group, which in itself is interesting in terms of the founder of this particular company being tagged into a BrainChip post.

More interesting is the inclusion of Kyongsik Yun on the Nonverbal Group website.

Kyongsik specialises in machine learning at NASA Jet Propulsion Laboratory.

Kyongsik Yun is a technologist at the Jet Propulsion Laboratory, California Institute of Technology, and a senior member of the American Institute of Aeronautics and Astronautics (AIAA).

His research focuses on building brain-inspired technologies and systems, including deep learning computer vision, natural language processing, brain-computer interfaces, and noninvasive remote neuromodulation.

What does it all mean? What is the link? I have no idea. The website is not phone friendly; maybe it is clearer viewed on a PC?

[screenshots of the Nonverbal Group website attached]
Thanks @thelittleshort

In desperation I resorted to following the science which has served me well across my investments.

As a result I found this very recently published paper out of China.

While I have included the link, I don't think one needs to read more than the enclosed abstract to understand why second-generation AKIDA with vision transformers has excited so many, particularly when you appreciate that, until AKIDA 2nd gen, even Brainchip said the reason they included CNN in their SNN design was that CNN had a clear advantage in vision processing.

Bringing SNNs together with vision transformers means, according to this research paper, that this is no longer the case:

Deep Spiking Neural Networks with High Representation Similarity Model Visual Pathways of Macaque and Mouse​

Liwei Huang, Zhengyu Ma, Liutao Yu, Huihui Zhou, Yonghong Tian
arXiv preprint arXiv:2303.06060, 2023
Deep artificial neural networks (ANNs) play a major role in modeling the visual pathways of primate and rodent. However, they highly simplify the computational properties of neurons compared to their biological counterparts. Instead, Spiking Neural Networks (SNNs) are more biologically plausible models since spiking neurons encode information with time sequences of spikes, just like biological neurons do. However, there is a lack of studies on visual pathways with deep SNNs models. In this study, we model the visual cortex with deep SNNs for the first time, and also with a wide range of state-of-the-art deep CNNs and ViTs for comparison. Using three similarity metrics, we conduct neural representation similarity experiments on three neural datasets collected from two species under three types of stimuli. Based on extensive similarity analyses, we further investigate the functional hierarchy and mechanisms across species. Almost all similarity scores of SNNs are higher than their counterparts of CNNs with an average of 6.6%. Depths of the layers with the highest similarity scores exhibit little differences across mouse cortical regions, but vary significantly across macaque regions, suggesting that the visual processing structure of mice is more regionally homogeneous than that of macaques. Besides, the multi-branch structures observed in some top mouse brain-like neural networks provide computational evidence of parallel processing streams in mice, and the different performance in fitting macaque neural representations under different stimuli exhibits the functional specialization of information processing in macaques. Taken together, our study demonstrates that SNNs could serve as promising candidates to better model and explain the functional hierarchy and mechanisms of the visual system.
View at arxiv.org
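For anyone wondering what a "similarity score" between a network layer and brain recordings actually computes, below is a minimal sketch of linear CKA, one commonly used representational-similarity metric, on toy data. The paper uses three similarity metrics and I am not claiming CKA is one of them; this is purely to illustrate the kind of comparison being made between model-layer activations and recorded neural responses.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between two response matrices.

    X: (n_stimuli, n_model_units) activations of one network layer
    Y: (n_stimuli, n_neurons) recorded neural responses to the same stimuli
    Returns a score in [0, 1]; higher means more similar representations.
    """
    # Centre each column so the comparison ignores mean offsets.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro"))

# Toy check: responses that are a noisy linear mix of the layer score high,
# unrelated random responses score low.
rng = np.random.default_rng(0)
layer = rng.standard_normal((200, 64))
related = layer @ rng.standard_normal((64, 32)) + 0.1 * rng.standard_normal((200, 32))
unrelated = rng.standard_normal((200, 32))
print(f"related:   {linear_cka(layer, related):.2f}")    # close to 1
print(f"unrelated: {linear_cka(layer, unrelated):.2f}")  # close to 0
```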

This reveal of course does more than just confirm the reason for such excitement; it also gives weight to Peter van der Made's statement that with the release of AKIDA 2nd gen the roughly three-year lead would extend out to about FIVE YEARS.

Once Edge Impulse starts to publicly demonstrate the vision leap made possible by AKIDA 2nd gen things will get very exciting I suspect.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 37 users

Deleted member 118

Guest
Missed day 1 and 2 highlights



 
  • Like
Reactions: 6 users
An article about TSMC's thoughts on China vs US and its impact on the rest of the world, and how Taiwan is caught up in the middle of it all:


"China’s semiconductor industry still lags behind that of Taiwan by about five or six years in terms of technology, according to Chang." Morris Chang is the founder of TSMC.
 
  • Like
  • Fire
Reactions: 20 users

TopCat

Regular
So much great information and links being posted here lately that I'm sure this has already been posted and I just can't keep up, but just in case it hasn't been.


DALLAS, March 15, 2023 /PRNewswire/ -- To build on innovations that advance intelligence at the edge, Texas Instruments (TI) (Nasdaq: TXN) today introduced a new family of six Arm® Cortex®-based vision processors that allow designers to add more vision and artificial intelligence (AI) processing at a lower cost, and with better energy efficiency, in applications such as video doorbells, machine vision and autonomous mobile robots.

This new family, which includes the AM62A, AM68A and AM69A processors, is supported by open-source evaluation and model development tools, and common software that is programmable through industry-standard application programming interfaces (APIs), frameworks and models. This platform of vision processors, software and tools helps designers easily develop and scale edge AI designs across multiple systems while accelerating time to market. For more information, see www.ti.com/edgeai-pr.

"In order to achieve real-time responsiveness in the electronics that keep our world moving, decision-making needs to happen locally and with better power efficiency," said Sameer Wasson, vice president, Processors, Texas Instruments. "This new processor family of affordable, highly integrated SoCs will enable the future of embedded AI by allowing for more cameras and vision processing in edge applications."
 
  • Like
  • Fire
  • Love
Reactions: 20 users

Diogenese

Top 20
Thanks @thelittleshort

In desperation I resorted to following the science which has served me well across my investments.

As a result I found this very recently published paper out of China.

While I have included the link, I don't think one needs to read more than the enclosed abstract to understand why second-generation AKIDA with vision transformers has excited so many, particularly when you appreciate that, until AKIDA 2nd gen, even Brainchip said the reason they included CNN in their SNN design was that CNN had a clear advantage in vision processing.

Bringing SNNs together with vision transformers means, according to this research paper, that this is no longer the case:

Deep Spiking Neural Networks with High Representation Similarity Model Visual Pathways of Macaque and Mouse​

Liwei Huang, Zhengyu Ma, Liutao Yu, Huihui Zhou, Yonghong Tian
arXiv preprint arXiv:2303.06060, 2023
Deep artificial neural networks (ANNs) play a major role in modeling the visual pathways of primate and rodent. However, they highly simplify the computational properties of neurons compared to their biological counterparts. Instead, Spiking Neural Networks (SNNs) are more biologically plausible models since spiking neurons encode information with time sequences of spikes, just like biological neurons do. However, there is a lack of studies on visual pathways with deep SNNs models. In this study, we model the visual cortex with deep SNNs for the first time, and also with a wide range of state-of-the-art deep CNNs and ViTs for comparison. Using three similarity metrics, we conduct neural representation similarity experiments on three neural datasets collected from two species under three types of stimuli. Based on extensive similarity analyses, we further investigate the functional hierarchy and mechanisms across species. Almost all similarity scores of SNNs are higher than their counterparts of CNNs with an average of 6.6%. Depths of the layers with the highest similarity scores exhibit little differences across mouse cortical regions, but vary significantly across macaque regions, suggesting that the visual processing structure of mice is more regionally homogeneous than that of macaques. Besides, the multi-branch structures observed in some top mouse brain-like neural networks provide computational evidence of parallel processing streams in mice, and the different performance in fitting macaque neural representations under different stimuli exhibits the functional specialization of information processing in macaques. Taken together, our study demonstrates that SNNs could serve as promising candidates to better model and explain the functional hierarchy and mechanisms of the visual system.
View at arxiv.org

This reveal of course does more than just confirm the reason for such excitement; it also gives weight to Peter van der Made's statement that with the release of AKIDA 2nd gen the roughly three-year lead would extend out to about FIVE YEARS.

Once Edge Impulse starts to publicly demonstrate the vision leap made possible by AKIDA 2nd gen things will get very exciting I suspect.

My opinion only DYOR
FF

AKIDA BALLISTA
Rule 1 of flat-pack assembly:
När du har provat allt annat, läs instruktionerna. (When you have tried everything else, read the instructions.)
 
Last edited:
  • Haha
  • Like
  • Love
Reactions: 21 users
Hi @Fact Finder - his comment was simply tagging in Blake Eastman from Nonverbal Group, which in itself is interesting in terms of the founder of this particular company being tagged into a BrainChip post.

Confirmation from Michael below that it's simply a case of him being mates with both Adnan Boz from NVIDIA and Blake Eastman from Nonverbal Group.

The silver lining is that Michael has over 16,000 followers, so the BrainChip post will be seen by many who potentially would not have seen it otherwise.


[screenshot of Michael's reply attached]
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 31 users
These researchers in the Netherlands set out to create an autonomous drone using Loihi.

They succeeded and published this paper on 13.3.23, which is great, but it also shows that Loihi is, by implication, no match for AKIDA 1, 1.5 or 2. As a side note, they also rule out Nvidia's Jetson Nano as ever being in the race for a whole lot of reasons, not the least of which is its need for 5 to 10 watts of power. They also pour cold icy water on analogue SNNs coming to their rescue:

DISCUSSION AND CONCLUSION
We presented the first fully neuromorphic vision-to-control pipeline for controlling a freely flying drone. Specifically, we trained a spiking neural network that takes in high-dimensional raw event-based camera data and produces low-level control commands. Real-world experiments demonstrated a successful sim-to-real transfer: the drone can accurately follow various ego-motion setpoints, performing hovering, landing, and lateral maneuvers—even under constant yaw rate.
Our study confirms the potential of a fully neuromorphic vision-to-control pipeline by running on board with an execution frequency of 200 Hz, spending only 27 μJ per network inference. However, there are still important hurdles on the way to reaping the full system benefits of such a pipeline, embedding it on extremely lightweight (e.g., <30 g) drones.
For reaching the full potential, the entire drone sensing, processing, and actuation hardware should be neuromorphic, from its accelerometer sensors to the processor and motors. Such hardware is currently not available, so we have limited ourselves to the vision-to-control pipeline, ending at thrust and attitude commands. Concerning the neuromorphic processor, the biggest advancement could come from improved I/O bandwidth and interfacing options. The current processor could not be connected to the event-based camera directly via AER, and with our advanced use case, we reached the limits of the number of spikes that can be sent to and received from the neuromorphic processor at the desired high execution frequency. This is also the reason that we have limited ourselves to a linear network controller: the increase in input spikes needed to encode the setpoint and attitude inputs would substantially reduce the execution frequency of the pipeline. Ultimately, further gains in terms of efficiency could be obtained when moving from digital neuromorphic processors to analog hardware, but this will pose even larger development and deployment challenges.
Despite the above-mentioned limitations, the current work presents a substantial step towards neuromorphic sensing and processing for drones. The results are encouraging, because they show that neuromorphic sensing and processing may bring deep neural networks within reach of small autonomous robots. In time this may allow them to approach the agility, versatility and robustness of animals such as flying insects”


https://arxiv.org/pdf/2303.08778
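As a quick sanity check on those numbers (my own arithmetic, not the paper's), 27 µJ per inference at 200 Hz works out to just a few milliwatts for the network itself, which is why the 5 to 10 watt Jetson Nano comparison above is so stark:

```python
# Rough sanity check (my arithmetic, not the paper's): average power spent on
# network inference at the reported rate, versus the Jetson Nano power quoted above.
energy_per_inference_j = 27e-6   # 27 microjoules per network inference
inference_rate_hz = 200          # on-board execution frequency

inference_power_w = energy_per_inference_j * inference_rate_hz
print(f"Neuromorphic inference power: {inference_power_w * 1e3:.1f} mW")  # ~5.4 mW

for jetson_w in (5, 10):
    print(f"Jetson Nano at {jetson_w} W draws ~{jetson_w / inference_power_w:,.0f}x more")
```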

Add what ANT61 is publishing and Brainchip has truly crushed the opposition; however, most have not yet received the memo to present themselves to the crematorium.

While AKIDA 1.0, 1.5 & 2.0 all address the shortcomings that Loihi exhibits, including direct compatibility with DVS camera feeds, just imagine what AKIDA technology and Prophesee's vision sensor will be capable of. And this is before AKIDA with vision transformers steps up to the plate.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 48 users

Diogenese

Top 20
So much great information and links being posted here lately that I'm sure this has already been posted and I just can't keep up, but just in case it hasn't been.


DALLAS, March 15, 2023 /PRNewswire/ -- To build on innovations that advance intelligence at the edge, Texas Instruments (TI) (Nasdaq: TXN) today introduced a new family of six Arm® Cortex®-based vision processors that allow designers to add more vision and artificial intelligence (AI) processing at a lower cost, and with better energy efficiency, in applications such as video doorbells, machine vision and autonomous mobile robots.

This new family, which includes the AM62A, AM68A and AM69A processors, is supported by open-source evaluation and model development tools, and common software that is programmable through industry-standard application programming interfaces (APIs), frameworks and models. This platform of vision processors, software and tools helps designers easily develop and scale edge AI designs across multiple systems while accelerating time to market. For more information, see www.ti.com/edgeai-pr.

"In order to achieve real-time responsiveness in the electronics that keep our world moving, decision-making needs to happen locally and with better power efficiency," said Sameer Wasson, vice president, Processors, Texas Instruments. "This new processor family of affordable, highly integrated SoCs will enable the future of embedded AI by allowing for more cameras and vision processing in edge applications."
At least up to mid-2021, with one exception, TI's patents indicate a conviction that ML was a software thing.

The exception, US2022108203A1 MACHINE LEARNING HARDWARE ACCELERATOR 2020-10-01, was little more enlightened, adhering to the von Neumann architecture:




In a memory device, a static random access memory (SRAM) circuit includes an array of SRAM cells arranged in rows and columns and configured to store data. The SRAM array is configured to: store a first set of information for a machine learning (ML) process in a lookup table in the SRAM array; and consecutively access, from the lookup table, information from a selected set of the SRAM cells along a row of the SRAM cells. A memory controller circuit is configured to select the set of the SRAM cells based on a second set of information for the ML process.

[0004] In another aspect, a system includes one or more microprocessors coupled to a memory circuit. The memory circuit includes static random access memory (SRAM) circuit including an array of SRAM cells arranged in rows and columns and configured to store data, the SRAM array configured to: store a first set of information for a machine learning (ML) process in a lookup table in the SRAM array; and consecutively access, from the lookup table, information from a selected set of the SRAM cells along a row of the SRAM cells. A memory controller circuit is configured to select the set of the SRAM cells based on a second set of information for the ML process.
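To make the lookup-table idea in that abstract concrete, here is a toy sketch of a LUT-based multiply-accumulate. The bit widths and structure are my own illustrative assumptions, not TI's actual design: every possible product of a low-precision weight and activation is precomputed into a table (standing in for the SRAM rows), and a dot product then reduces to table reads plus adds.

```python
import numpy as np

# Toy lookup-table MAC (illustrative only, not TI's implementation): precompute
# all products of 4-bit weights and 4-bit activations, then compute a dot
# product purely with table reads and additions.
BITS = 4
product_lut = np.array([[w * a for a in range(2**BITS)] for w in range(2**BITS)])

def lut_dot(weights, activations):
    """Dot product of quantised vectors using only lookups and adds."""
    return sum(int(product_lut[w, a]) for w, a in zip(weights, activations))

weights = [3, 7, 1, 15]       # hypothetical 4-bit weights for the ML process
activations = [2, 5, 9, 4]    # hypothetical 4-bit input activations
assert lut_dot(weights, activations) == sum(w * a for w, a in zip(weights, activations))
print(lut_dot(weights, activations))  # 110
```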
 
  • Like
  • Love
Reactions: 10 users