BRN Discussion Ongoing

equanimous

Norse clairvoyant shapeshifter goddess

Yahoo Finance had 93.9M monthly unique visitors in November 2020​

 
  • Like
  • Fire
Reactions: 20 users

equanimous

Norse clairvoyant shapeshifter goddess
hi @zeeb0t Is it possible to have calendar schedule and reminders on this platform for events?

1659844911739.png
 
  • Like
  • Fire
Reactions: 20 users

equanimous

Norse clairvoyant shapeshifter goddess
It's actually crazy to think that Akida still has a lot more room for performance and algorithm optimizations. Coupled with the fact that, as suggested in the latest podcast, neuromorphic architecture will eventually be built for its intended purpose, this will make it even more revolutionary.

The fact it's so adaptable to existing products, with instant power reduction and performance improvement, makes it a no-brainer to use.

Can't wait for the next upgraded version to be released, and its results.
 
  • Like
  • Fire
  • Love
Reactions: 21 users

Deleted member 118

Guest
 
  • Like
  • Love
Reactions: 4 users

Slymeat

Move on, nothing to see.

So Samsung are “working on” something that has already existed, in Akida, for over a year now. Thanks for spreading the word on neuromorphic computing I suppose.
 
  • Like
  • Haha
  • Love
Reactions: 14 users
Has this been posted? Link to the article is in the title. @Fullmoonfever has previously posted an article by ABI Research


Driver Monitoring Systems Shipments Will Jump 487.5% Between 2022 and 2027, Increasing Safety and Creating a Lucrative Monetization Opportunity for Carmakers



New York, New York - July 28, 2022

According to global technology intelligence firm ABI Research, shipments of vehicles featuring camera-based Driver Monitoring Systems (DMS) will jump from 8 million in 2022 to 47 million in 2027, more than 50% of global new vehicle sales.
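The headline 487.5% figure follows directly from the shipment numbers in the press release; a quick arithmetic check (plain Python, figures from the article):

```python
# Verify the headline growth figure from the ABI Research forecast.
shipments_2022 = 8_000_000   # vehicles with camera-based DMS shipped in 2022
shipments_2027 = 47_000_000  # forecast shipments for 2027

growth_pct = (shipments_2027 - shipments_2022) / shipments_2022 * 100
print(f"Growth 2022 -> 2027: {growth_pct:.1f}%")  # Growth 2022 -> 2027: 487.5%
```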

These systems offer reliable real-time driver distraction monitoring as a means to prevent accidents. While mainly driven by regulation, they also enable a range of infotainment-related features that will provide carmakers with the opportunity to recoup their investments.

Because DMS will become mandated, carmakers, especially in the mass market, were initially interested in deploying the minimal EU General Safety Regulation (GSR) requirements. However, standard mandated ADAS features drive an additional cost into the vehicle that OEMs cannot quickly or easily recuperate.

“Hence, envisioning additional use cases that use the available sensor technology has become imperative. With the realization that monetization opportunities could be realized with the same DMS hardware and minor incremental software investment, most carmakers' DMS RFQs now request two to three features beyond driver attention monitoring," explains Maite Bezerra, Smart Mobility & Automotive Industry Analyst at ABI Research.

DMS safety-related detection capabilities include drowsiness, distraction, seatbelt use, smoking, and phone use. However, DMS can also support several convenience features. For example, the driver’s head position and gaze direction input can enable Augmented Reality (AR) head-up displays and 3D dashboards to provide information about Points of Interest (e.g., Mercedes' MBUX Travel Knowledge) or to highlight or tone down information in the cockpit, decreasing energy consumption in EVs.

Advanced cognitive load detection capabilities can be used by personal assistants to measure the driver’s stress level, mood, or health and make suggestions or take actions accordingly. Examples include Cerence Co-Pilot, NVIDIA Concierge, and NIO's NOMI. “There is also interest in using the driver's medical status, such as heart and respiration rates, to determine stress level and medical condition after accidents,” Bezerra points out.

Expanding the DMS scope to Occupant Monitoring Systems (OMSs) within the same camera is another clear trend due to the broad range of monetizable use cases enabled by camera-based OMSs. According to Bezerra, “OMSs' primary use case is detecting children or pets left behind, but input can be used to enhance passenger safety and convenience.

For example, the camera can detect incorrect use of seatbelts, and the occupant's position in the car can be used to regulate airbag deployment more effectively. Regarding convenience, the camera can be used for selfies, video conferences, remote vehicle monitoring, and multi-user in-cabin and media content customization.”

ABI Research forecasts that nearly 10 million vehicles will be shipped in 2028 with single-camera DMS and OMS, offered by companies including Seeing Machines, Cipia, Tobii, and Jungo. "Moving forward, DMS and OMS will be critical sensors enabling next-generation automotive HMI and UX. Machine Learning (ML), Artificial Intelligence (AI), multimodal input and output channels, and unprecedented integration with vehicle sensors, domains, location data, and other IoT devices will be combined to provide an intuitive, humanized, and seamless in-car user experience," Bezerra concludes.

These findings are from ABI Research's Next-Generation Automotive HMI application analysis report. This report is part of the company's Smart Mobility & Automotive research service, which includes research, data, and ABI Insights. Based on extensive primary interviews, Application Analysis reports present an in-depth analysis of key market trends and factors for a specific technology.

About ABI Research
ABI Research is a global technology intelligence firm delivering actionable research and strategic guidance to technology leaders, innovators, and decision makers around the world. Our research focuses on the transformative technologies that are dramatically reshaping industries, economies, and workforces today.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 35 users

Proga

Regular
(quoting the ABI Research DMS article posted above)
Camera-based Driver Monitoring Systems (DMS) will jump from 8 million in 2022 to 47 million in 2027 - that's per year. All the while, vehicles will be transitioning to EVs, where minimal power consumption becomes crucial. Therefore, most will run using Akida. Every DMS inside an EV will.
 
  • Like
  • Fire
  • Love
Reactions: 24 users

equanimous

Norse clairvoyant shapeshifter goddess

TinyML: Harnessing Embedded Machine Learning at the Edge​

Embedded World 2022​

By Carolyn Mathas | Last updated Jun 15, 2022


TinyML delivers machine learning to the Edge, where battery-powered MCU-based embedded devices perform ML tasks in real time. Machine learning is a subset of Artificial Intelligence, and tiny machine learning (TinyML) uses machine learning algorithms that are processed locally on embedded devices.
TinyML makes it possible to run machine learning models on the smallest microcontrollers (MCUs). By embedding ML, each microcontroller gains substantial intelligence without the need to transfer data over the cloud to make decisions.
TinyML is designed to solve the power and space aspects of embedding AI into these devices. By embedding it into small units of hardware, deep learning algorithms can train networks on the devices, shrinking device size and eliminating the latency of sending data to the cloud.
TinyML also eradicates the need to recharge the devices manually or change batteries because of power constraints. Instead, you have a device that runs at less than one milliwatt, operates on a battery for years, or uses energy harvesting. The idea behind TinyML is to make it accessible, foster mass proliferation, and scale it to virtually trillions of inexpensive and independent sensors, using 32-bit microcontrollers that go for $0.50 or less.
Another TinyML advantage is the blending of voice interfaces and visual signals, allowing devices to understand when you are looking at a machine and eliminating background noises such as people or equipment in industrial settings.
Let’s Backtrack
What exactly is TinyML? The tiny machines used in TinyML are task-specific MCUs. They run on ultra-low power, provide almost immediate analysis (very low latency), feature embedded machine-learning algorithms, and are pretty inexpensive.
TinyML delivers artificial intelligence to ubiquitous MCUs and IoT devices, performing on-device analytics on the huge amount of data they collect, exactly where they reside. TinyML optimizes ML models on edge devices. When data is kept on an edge device, it minimizes the risk of being compromised. TinyML smart edge devices make inferences without an internet connection.
What happens when we embed TinyML algorithms?
  • A less resource-intensive inference of a pre-trained model is used rather than full model training.
  • Neural networks behind TinyML models are pruned, removing some synapses and neurons.
  • Quantization reduces the bit size so that the model takes up less memory, requires less power, and runs faster—with minimal impact on accuracy.
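The quantization step above can be illustrated with a minimal NumPy sketch. This is not any specific TinyML toolchain (frameworks like TensorFlow Lite handle this automatically); it just shows how mapping float32 weights to int8 cuts memory 4x at a small, bounded accuracy cost:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric linear quantization of float32 weights to int8."""
    scale = np.abs(weights).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.random.randn(1000).astype(np.float32)
q, scale = quantize_int8(w)
print(q.nbytes, w.nbytes)  # → 1000 4000 (4x smaller)
# Rounding error per weight is at most half a quantization step:
print(np.abs(w - dequantize(q, scale)).max() <= scale / 2 + 1e-6)  # → True
```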
TinyML is bringing deep learning models to microcontrollers. Deep learning in the cloud is already successful, but many applications require device-based inference. Internet availability is not necessarily a given for other apps, such as drone rescue missions. In healthcare, HIPAA regulations add to the difficulty of safely sending data to the cloud. Delays (latency) caused by the round trip to the cloud are showstoppers for applications that require real-time ML inference.
Where is TinyML Used?
TinyML aims to bring machine learning to the Edge, where battery-powered MCU-based embedded devices perform ML tasks in real time. Applications include:
  • Keyword spotting
  • Object recognition and classification
  • Audio detection
  • Gesture recognition
  • Machine monitoring
  • Machine predictive maintenance
  • Retail inventory
  • Real-time monitoring of crops or livestock
  • Personalized patient care
  • Hearing aid hardware
TinyML is making its way into billions of microcontrollers, enabling previously impossible applications.
The Future
Today, TinyML represents a fast-growing field of machine learning technologies and applications, including hardware, algorithms, and software capable of performing on-device sensor data analytics and enabling “always-on” battery-operated edge devices. TinyML brings enhanced capabilities to already established edge-computing and IoT systems with low cost, low latency, small power, and minimal connectivity requirements.
While conventional machine learning continues to grow more sophisticated and resource-intensive, TinyML addresses the other end of the spectrum. It represents a real and immediate opportunity for developers to get involved.
Check out the two opportunities at Embedded World, and learn how you can capitalize on TinyML now.
Bringing TinyML to RISC-V With Specialized Kernels and a Static Code Generator Approach on June 21 at 11:00 – 12:45 and An Introduction to TinyML: Bringing Deep Learning to Ultra-Low Power Micro-Controllers, part of Session 8.1—Autonomous & Intelligent Systems—Embedded Machine Learning Hardware on June 22, 10:00 – 13:00.
Then add to your knowledge FREE by going to:
To say TinyML is catching on is an understatement. It’s blowing up headlines, including these that appeared within two weeks:
Given the ease of access to the technology, the power to capitalize on TinyML is here and now. Implementing such technology on MCUs and IoT devices changes people's lives for the better.
 
  • Like
  • Fire
  • Love
Reactions: 23 users

Dozzaman1977

Regular
Good morning Brainchip supporters,

I can confirm that I have followed up on my effort to get an update on how things have been progressing with the Akida 2000 development, over say, the last 10/12 months.

The matter is highly sensitive. How do I know this?

Having sought some (new) information from the "most knowledgeable staff member", nothing has been forthcoming, which I totally respect, and that is our answer in a nutshell: all progress is highly confidential.

An update may appear, but not at this point in time, it purely indicates that very important things are at play.

We all know that it's a long runway now, so much more patience is required, and until some other company knocks us off our perch, well, we remain the NUMBER 1 PLAYER WORLDWIDE IN THIS SPACE... get used to it, it feels great being a shareholder (stolen phrase).

Love Brainchip...Tech x
It might be a long runway, but hopefully the wheels on the plane are just about ready to lift off. I guess we will know by the results of the 4C at the end of the year and the first in 2023.
 
  • Like
  • Fire
Reactions: 20 users

Deleted member 118

Guest
Says confidential and private on the first page. I wonder if someone forgot to put a password on their Google Drive. Saved to my Google Drive now. Good to see they also have many NDAs. Very exciting times if you ask me.





Must be talking about Mercedes with 2 million car sales a year

580B01BA-D544-4AFB-AE97-FE2C58872A3C.png


And does it link us with ZF via nvisio

 
  • Like
  • Fire
  • Love
Reactions: 13 users

goodvibes

Regular

Asynchronous Coded Electronic Skin (ACES)​



We are developing a sensory-integrated artificial brain system that mimics biological neural networks, which can run on a power-efficient neuromorphic processor, such as Intel's Loihi chip and BrainChip's Akida neural processor. This novel system integrates ACES and vision sensors, equipping robots with the ability to draw accurate conclusions about the objects they are grasping, based on the data captured by the sensors in real time, while operating at a power level efficient enough to be deployed directly inside the robot.

Copied from a German forum… I hadn't heard of it.
 
  • Like
  • Love
Reactions: 13 users

hotty4040

Regular
Says confidential and private on the first page. I wonder if someone forgot to put a password on their Google Drive. Saved to my Google Drive now. Good to see they also have many NDAs. Very exciting times if you ask me.



Makes me feel warm and fuzzy reading all of this, Rocket, especially with Brainchip the main feature IMO. Nice find indeed, I said, nice find indeed. Well done.

Akida Ballista >>>>> I like the " warm and fuzzies " <<<<<

hotty...
 
  • Like
  • Love
  • Fire
Reactions: 14 users

Deleted member 118

Guest

D8C3C4C0-3683-499F-B938-50F8047614CF.jpeg


In order to meet the Navy’s need for a spiking neural network testing platform, ChromoLogic proposes to develop a Spiking Neural Network Modeler (SpiNNMo) capable of simulating a variety of neuromorphic hardware platforms. SpiNNMo is able to extract relevant performance parameters from a neuromorphic chip and then predict the chip’s performance on new networks and data. In this way SpiNNMo can predict accuracy, latency and energy usage for a wide variety of hardware platforms on a given neural network and dataset. This will allow the Navy to test the performance of new spiking neural network architectures and chipsets before the chips are widely available and therefore speed neuromorphic adoption.
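The abstract doesn't say how SpiNNMo actually models a chip, but a common first-order approach for event-driven neuromorphic hardware is to estimate energy from spike counts times a per-event cost measured on the target chip. A hypothetical sketch of that idea (all names and figures below are illustrative assumptions, not from ChromoLogic):

```python
from dataclasses import dataclass

@dataclass
class ChipProfile:
    """Illustrative per-event costs, as might be measured from real hardware."""
    energy_per_synaptic_op_nj: float  # nanojoules per synaptic event
    latency_per_timestep_us: float    # microseconds per simulation timestep

def estimate(profile: ChipProfile, spikes_per_timestep: float,
             fanout: int, timesteps: int) -> dict:
    """First-order energy/latency estimate for one inference."""
    synaptic_ops = spikes_per_timestep * fanout * timesteps
    return {
        "energy_uj": synaptic_ops * profile.energy_per_synaptic_op_nj / 1000,
        "latency_ms": timesteps * profile.latency_per_timestep_us / 1000,
    }

# Example with made-up numbers:
chip = ChipProfile(energy_per_synaptic_op_nj=0.05, latency_per_timestep_us=100)
print(estimate(chip, spikes_per_timestep=200, fanout=50, timesteps=10))
# → {'energy_uj': 5.0, 'latency_ms': 1.0}
```

The appeal of this kind of model is that the same network description can be re-costed against different chip profiles, which matches the abstract's claim of predicting performance across hardware platforms before silicon is available.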


I wonder if they are talking about Akida 2000 or even 3000
 
  • Like
  • Fire
Reactions: 13 users
Just reading the review of the new Volkswagen Golf R for Australia. Could be old news for our German friends?

There are also some unique aspects to the 10.0-inch central display, particularly in the performance pages.

And if you can’t be bothered touching the screen, you can use hand gestures to adjust certain functions, or you can yell at the car: “Hey Volkswagen, it’s cold in here!”.


This all seems to be becoming mainstream, or at the very least will soon be expected.

Go Dockers
“Hey Volkswagen, it’s cold in here!”

The use of hand gestures and "natural language", rather than commands, is the sweet and spicy smell of AKIDA.

There would be many tech companies trying to do it, though, and many would be succeeding.

I think AKIDA would offer more simplicity and a lot less power use, so it really comes down to how good our sales and marketing team is with the OEMs.

Rob Telson and Jerome Nadel are the heads of these divisions, so I'm pretty confident we will do well! 😉

Seeing companies bring out things that "look" like AKIDA doesn't really excite me, though.

There are many paper tigers around..

_20220807_174100.JPG
 
  • Like
  • Fire
Reactions: 11 users

(quoting the ChromoLogic SpiNNMo post above)
Definitely. They will be testing us and other competitors in the field.
 
  • Like
Reactions: 7 users

Deleted member 118

Guest
 
  • Like
  • Fire
Reactions: 3 users

MDhere

Regular
Oh come on Bravo, all I see is confident, talented and happy young females, who just happen to be blessed with good looks.
If it's any consolation, I'm sure your feet are much more attractive than theirs :p:love:
all us females rock in any language @Bravo 🤣
 
  • Like
  • Love
  • Fire
Reactions: 11 users

Deleted member 118

Guest
  • Like
  • Haha
Reactions: 3 users

MDhere

Regular
A good YouTube channel for keeping up to date with robotic advances is Pro Robots.



I don't watch the whole vids, but just flip through and watch the interesting bits..
 
  • Like
  • Fire
Reactions: 5 users