BRN Discussion Ongoing


Bravo

If ARM was an arm, BRN would be its biceps💪!


The comments below the video are classic!


Proga

Regular
Here we go. NVIDIA Jetson Nano (4GB) doesn't use Akida

NVISO AI App model performance can be accelerated on average by 3.67x using neuromorphic computing over a single-core ARM Cortex-A57, as found in an NVIDIA Jetson Nano (4GB).
 

Deleted member 118

Guest
Another old video

 

equanimous

Norse clairvoyant shapeshifter goddess
Hi @Diogenese

Not sure if this has been posted before. Looks like this combination will be effective for harsh weather conditions with vision applications.


Spiking neural network (SNN) has attracted much attention due to its powerful spatio-temporal information representation ability. Capsule Neural Network (CapsNet) does well in assembling and coupling features of different network layers. Here, we propose Spiking CapsNet by combining spiking neurons and capsule structures. In addition, we propose a more biologically plausible Spike Timing Dependent Plasticity routing mechanism. The coupling ability is further improved by fully considering the spatio-temporal relationship between spiking capsules of the low layer and the high layer. We have verified experiments on the MNIST, FashionMNIST, and CIFAR10 datasets. Our algorithm still shows comparable performance concerning other excellent SNNs with typical structures (convolutional, fully-connected) on these classification tasks. Our Spiking CapsNet combines SNN and CapsNet’s strengths and shows strong robustness to noise and affine transformation. By adding different Salt-Pepper and Gaussian noise to the test dataset, the experimental results demonstrate that our algorithm is more resistant to noise than other approaches. As well, our Spiking CapsNet shows strong generalization to affine transformation on the AffNIST dataset. Our code is available at https://github.com/BrainCog-X/Brain-Cog.
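For anyone curious what the noise-robustness test mentioned in that abstract looks like in practice, here is a minimal sketch of adding salt-and-pepper noise to a batch of test images. This is not the authors' code (their implementation lives in the Brain-Cog repo linked above); the function name, noise level and array layout are just illustrative assumptions.

```python
import numpy as np

def add_salt_pepper(images, amount=0.1, rng=None):
    """Corrupt a fraction `amount` of pixels: half set to 0 (pepper), half to 1 (salt).

    `images` is assumed to be a float array scaled to [0, 1]; the name, noise
    level and layout are illustrative, not the paper's exact protocol.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = images.copy()
    mask = rng.random(images.shape)
    noisy[mask < amount / 2] = 0.0        # pepper pixels
    noisy[mask > 1.0 - amount / 2] = 1.0  # salt pixels
    return noisy

# Example: corrupt a random stand-in for a batch of MNIST-sized test images.
batch = np.random.default_rng(0).random((4, 28, 28))
noisy_batch = add_salt_pepper(batch, amount=0.1)
print(noisy_batch.shape)  # (4, 28, 28)
```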

4. Conclusion​

In this paper, the Spiking CapsNet is proposed by introducing the capsule structure into the modelling of the spiking neural network, which fully combines their spatio-temporal processing capabilities. The cortical minicolumns inspire the capsules to work as the coincidence detectors [14]. The information transmission method based on discrete spike trains is more consistent with the work mechanism of the human brain. Meanwhile, we propose a more biologically plausible STDP routing algorithm inspired by the learning mechanisms of synapses in the brain [2]. The routing algorithm fully considers the part and the whole spatial relationship between the low-level capsule and the high-level capsules, as well as the spike firing time order between the pre-synaptic and post-synaptic neurons. The coupling ability between low-level and high-level spiking capsules is further improved. Compared with other excellent BP-based SNNs [37], our model shows great adaptability to noise and the spatial affine transformation. Although LISNN [4] and HMSNN [44] show excellent robustness to the noise, they cannot handle the spatial affine transformation well. Our model shows strong performance and robustness on the MNIST, FashionMNIST, and CIFAR10 datasets.
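On the STDP routing side, the basic spike-timing-dependent plasticity rule the paper builds on can be sketched in a few lines. This is only the textbook pairwise exponential STDP window, with assumed constants, not the paper's full capsule routing mechanism.

```python
import numpy as np

# Pairwise exponential STDP window with assumed (textbook) constants.
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in milliseconds

def stdp_dw(delta_t):
    """Weight change for one pre/post spike pair, delta_t = t_post - t_pre (ms).

    Pre before post (delta_t >= 0) strengthens the synapse; post before pre
    weakens it. The paper's routing extends this idea to whole spiking capsules.
    """
    if delta_t >= 0:
        return A_PLUS * np.exp(-delta_t / TAU_PLUS)
    return -A_MINUS * np.exp(delta_t / TAU_MINUS)

# A pre-spike 5 ms before the post-spike potentiates; 5 ms after, it depresses.
print(stdp_dw(5.0), stdp_dw(-5.0))
```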


equanimous

Norse clairvoyant shapeshifter goddess
Interesting..


Dilute combustion control using spiking neural networks​

Invention Reference Number​


202104856

Technology Summary​


Technologies directed to dilute combustion control using spiking neural networks are described.

Inventors​

Bryan Maldonado Puente
Buildings & Transportation Science Division

Licensing Contact​


Andrei Zorilescu
zorilescua@ornl.gov


Bravo

If ARM was an arm, BRN would be its biceps💪!
California’s Department of Motor Vehicles (DMV) has accused Tesla of falsely advertising its Autopilot and Full Self-Driving (FSD) features.



equanimous

Norse clairvoyant shapeshifter goddess

Yahoo Finance had 93.9M monthly unique visitors in November 2020​

 

equanimous

Norse clairvoyant shapeshifter goddess
Hi @zeeb0t, is it possible to have a calendar schedule and reminders on this platform for events?


equanimous

Norse clairvoyant shapeshifter goddess
It's actually crazy to think that Akida still has a lot more room for performance and algorithm optimisations, and that, as suggested in the latest podcast, once neuromorphic architecture is built for its intended purpose it will become even more revolutionary.

The fact it's so adaptable to existing products, with instant power reduction and performance improvement, makes it a no-brainer to use.

Can't wait for the next upgraded version to be released, and its results.
 

Slymeat

Move on, nothing to see.

So Samsung are “working on” something that has already existed, in Akida, for over a year now. Thanks for spreading the word on neuromorphic computing I suppose.
 
This been posted? Link to article in title. @Fullmoonfever has previously posted an article by ABI Research


Driver Monitoring Systems Shipments Will Jump 487.5% Between 2022 and 2027, Increasing Safety and Creating a Lucrative Monetization Opportunity for Carmakers



New York, New York - July 28, 2022

According to global technology intelligence firm ABI Research, shipments of vehicles featuring camera-based Driver Monitoring Systems (DMS) will jump from 8 million in 2022 to 47 million in 2027, more than 50% of global new vehicle sales.

These systems offer reliable real-time driver distraction monitoring as a means to prevent accidents. While mainly driven by regulation, they also enable a range of infotainment-related features that will provide carmakers with the opportunity to recoup their investments.

Because DMS will become mandated, carmakers, especially in the mass market, were initially interested in deploying the minimal EU General Safety Regulation (GSR) requirements. However, standard mandated ADAS features drive an additional cost into the vehicle that OEMs cannot quickly or easily recuperate.

“Hence, envisioning additional use cases that use the available sensor technology has become imperative. With the realization that monetization opportunities could be realized with the same DMS hardware and minor incremental software investment, most carmakers' DMS RFQs now request two to three features beyond driver attention monitoring," explains Maite Bezerra, Smart Mobility & Automotive Industry Analyst at ABI Research.

DMS safety-related detection capabilities include drowsiness, distraction, seatbelt use, smoking, and phone use. However, DMS can also support several convenience features. For example, the driver’s head position and gaze direction input can enable Augmented Reality (AR) head-up displays and 3D dashboards to provide information about Points of Interest (e.g., Mercedes' MBUX Travel Knowledge) or to highlight or tone down information in the cockpit, decreasing energy consumption in EVs.

Advanced cognitive load detection capabilities can be used by personal assistants to measure the driver’s stress level, mood, or health and make suggestions or take actions accordingly. Examples include Cerence Co-Pilot, NVIDIA Concierge, and NIO's NOMI. “There is also interest in using the driver's medical status, such as heart and respiration rates, to determine stress level and medical condition after accidents,” Bezerra points out.

Expanding the DMS scope to Occupant Monitoring Systems (OMSs) within the same camera is another clear trend, due to the broad range of monetizable use cases enabled by camera-based OMSs. According to Bezerra, “OMSs' primary use case is detecting children or pets left behind, but the input can be used to enhance passenger safety and convenience.

For example, the camera can detect incorrect use of seatbelts, and the occupant's position in the car can be used to regulate airbag deployment more effectively. Regarding convenience, the camera can be used for selfies, video conferences, remote vehicle monitoring, and multi-user in-cabin and media content customization.”

ABI Research forecasts that nearly 10 million vehicles will be shipped with single-camera DMS and OMS, offered by companies including Seeing Machines, Cipia, Tobii, and Jungo, in 2028. "Moving forward, DMS and OMS will be critical sensors enabling next-generation automotive HMI and UX. Machine Learning (ML), Artificial Intelligence (AI), multimodal input and output channels, and unprecedented integration with vehicle sensors, domains, location data, and other IoT devices will be combined to provide an intuitive, humanized, and seamless in-car user experience," Bezerra concludes.

These findings are from ABI Research's Next-Generation Automotive HMI application analysis report. This report is part of the company's Smart Mobility & Automotive research service, which includes research, data, and ABI Insights. Based on extensive primary interviews, Application Analysis reports present an in-depth analysis of key market trends and factors for a specific technology.

About ABI Research
ABI Research is a global technology intelligence firm delivering actionable research and strategic guidance to technology leaders, innovators, and decision makers around the world. Our research focuses on the transformative technologies that are dramatically reshaping industries, economies, and workforces today.
 

Proga

Regular
"Camera-based Driver Monitoring Systems (DMS) will jump from 8 million in 2022 to 47 million in 2027" - and that's per year. All the while, vehicles will be transitioning to EVs, where minimal power consumption becomes crucial. Therefore, most will run using Akida. Every DMS inside an EV will.
 

equanimous

Norse clairvoyant shapeshifter goddess

TinyML: Harnessing Embedded Machine Learning at the Edge​

Embedded World 2022​

By Carolyn Mathas | Last updated Jun 15, 2022


TinyML delivers machine learning to the Edge, where battery-powered MCU-based embedded devices perform ML tasks in real time. Machine learning is a subset of Artificial Intelligence, and tiny machine learning (TinyML) uses machine learning algorithms that are processed locally on embedded devices.
TinyML makes it possible to run machine learning models on the smallest microcontrollers (MCUs). By embedding ML, each microcontroller gains substantial intelligence without the need to transfer data to the cloud to make decisions.
TinyML is designed to solve the power and space aspects of embedding AI into these devices. By embedding it into small units of hardware, deep learning algorithms can train networks on the devices, shrinking device size and eliminating the latency of sending data to the cloud.
TinyML also eradicates the need to recharge the devices manually or change batteries because of power constraints. Instead, you have a device that runs at less than one milliwatt, operates on a battery for years, or uses energy harvesting. The idea behind TinyML is to make it accessible, foster mass proliferation, and scale it to virtually trillions of inexpensive and independent sensors, using 32-bit microcontrollers that go for $0.50 or less.
Another TinyML advantage is the blending of voice interfaces and visual signals, allowing devices to understand when you are looking at a machine and eliminating background noises such as people or equipment in industrial settings.
Let’s Backtrack
What exactly is TinyML? The tiny machines used in TinyML are task-specific MCUs. They run on ultra-low power, provide almost immediate analysis (very low latency), feature embedded machine-learning algorithms, and are pretty inexpensive.
TinyML delivers artificial intelligence to ubiquitous MCUs and IoT devices, performing on-device analytics on the huge amount of data they collect, exactly where they reside. TinyML optimizes ML models on edge devices. When data is kept on an edge device, it minimizes the risk of being compromised. TinyML smart edge devices make inferences without an internet connection.
What happens when we embed TinyML algorithms?
  • A less resource-intensive inference of a pre-trained model is used rather than full model training.
  • Neural networks behind TinyML models are pruned, removing some synapses and neurons.
  • Quantization reduces the bit size so that the model takes up less memory, requires less power, and runs faster, with minimal impact on accuracy (a minimal sketch of this step follows the list).
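As a rough illustration of that quantization step, here is a minimal post-training int8 quantization sketch using the TensorFlow Lite converter. The tiny Keras model and the random calibration data are placeholders, not any specific product's model; a real TinyML deployment would use the actual trained network and representative sensor data.

```python
import numpy as np
import tensorflow as tf

# Placeholder model standing in for a real TinyML network (e.g. keyword spotting).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 10, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

# Representative data drives calibration of the int8 ranges; random here,
# real sensor samples in practice.
def representative_data_gen():
    for _ in range(100):
        yield [np.random.rand(1, 49, 10, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]          # enable quantization
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8                       # int8 end to end
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

# The resulting flatbuffer is typically about 4x smaller than the float32 model
# and can be compiled into firmware for an MCU.
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model)} bytes")
```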
TinyML is bringing deep learning models to microcontrollers. Deep learning in the cloud is already successful, yet many applications require device-based inference. Internet availability is not necessarily a given for other applications, such as drone rescue missions. In healthcare, HIPAA regulations add to the difficulty of safely sending data to the cloud. Delays (latency) caused by the round trip to the cloud are showstoppers for applications that require real-time ML inference.
Where is TinyML Used?
TinyML aims to bring machine learning to the Edge, where battery-powered MCU-based embedded devices perform ML tasks in real time. Applications include:
  • Keyword spotting
  • Object recognition and classification
  • Audio detection
  • Gesture recognition
  • Machine monitoring
  • Machine predictive maintenance
  • Retail inventory
  • Real-time monitoring of crops or livestock
  • Personalized patient care
  • Hearing aid hardware
TinyML is making its way into billions of microcontrollers, enabling previously impossible applications.
The Future
Today, TinyML represents a fast-growing field of machine learning technologies and applications, including hardware, algorithms, and software capable of performing on-device sensor data analytics and enabling “always-on” battery-operated edge devices. TinyML brings enhanced capabilities to already established edge-computing and IoT systems with low cost, low latency, small power, and minimal connectivity requirements.
While conventional machine learning continues to move toward ever more sophisticated and resource-intensive systems, TinyML addresses the other end of the spectrum. It represents an immediate and accessible opportunity for developers to get involved.
Check out the two opportunities at Embedded World, and learn how you can capitalize on TinyML now.
"Bringing TinyML to RISC-V With Specialized Kernels and a Static Code Generator Approach" on June 21, 11:00 – 12:45, and "An Introduction to TinyML: Bringing Deep Learning to Ultra-Low Power Micro-Controllers", part of Session 8.1 (Autonomous & Intelligent Systems: Embedded Machine Learning Hardware) on June 22, 10:00 – 13:00.
To say TinyML is catching on is an understatement; it has been all over recent headlines.
Given the ease of access to the technology, the power to capitalize on TinyML is here and now. Implementing such technology on MCUs and IoT devices changes people's lives for the better.
 

Dozzaman1977

Regular
Good morning Brainchip supporters,

I can confirm that I have followed up on my effort to get an update on how things have been progressing with the Akida 2000 development, over say, the last 10/12 months.

The matter is highly sensitive. How do I know this?

Having sought some (new) information from the "most knowledgeable staff member", nothing has been forthcoming, which I totally respect, and that is our answer in a nutshell: all progress is highly confidential.

An update may appear, but not at this point in time; that purely indicates that very important things are at play.

We all know that it's a long runway now, so much more patience is required, and until some other company knocks us off our perch, well, we remain the NUMBER 1 PLAYER WORLDWIDE IN THIS SPACE... get used to it, it feels great being a shareholder (stolen phrase).

Love Brainchip...Tech x
It might be a long runway, but hopefully the wheels on the plane are just about ready to lift off. I guess we will know from the results of the 4C at the end of the year and the first one in 2023.

Deleted member 118

Guest
It says confidential and private on the first page. I wonder if someone forgot to put a password on their Google Drive. Saved to my Google Drive now. Good to see they also have many NDAs. Very exciting times if you ask me.





Must be talking about Mercedes with 2 million car sales a year



And does it link us with ZF via NVISO?

 

goodvibes

Regular

Asynchronous Coded Electronic Skin (ACES)​



We are developing a sensory-integrated artificial brain system that mimics biological neural networks, which can run on a power-efficient neuromorphic processor, such as Intel’s Loihi chip and Brainchip’s Akida neural processor. This novel system integrates ACES and vision sensors, equipping robots with the ability to draw accurate conclusions about the objects they are grasping based on the data captured by the sensors in real time, while operating at a power level efficient enough to be deployed directly inside the robot.

Copied from a German forum… hadn't heard of it before.
 