BRN Discussion Ongoing

Tailz6

Emerged
Maybe FF is now working for Brainchip and that's why he's left 🤷🏼‍♂️ 😅
 
  • Like
  • Haha
  • Fire
Reactions: 20 users

Tony Coles

Regular
  • Haha
  • Like
Reactions: 14 users

stuart888

Regular

Machine Learning Sensors: Truly Data-Centric AI

A new approach to embedding machine learning intelligence on edge devices

View attachment 10943
Thanks Sirod69. What a fantastic, understandable-yet-deep article on TinyML far-edge sensors with no cloud (except selected metadata computed on the edge and periodically sent to the cloud).

Included is a one-sentence description of what Brainchip's spiking low-power solution on a micro-board provides:
“An ML sensor is a self-contained system that utilizes on-device machine learning to extract useful information by observing some complex set of phenomena in the physical world and reports it through a simple interface to a wider system.”

Plus, the datasheet part really made a lot of sense and is a great way to list specific Akida technical details. The IoT Security and Privacy Label (lower part of the image below) is a great way to highlight that specific feature: "Privacy Inside"!


 
  • Like
  • Love
  • Fire
Reactions: 28 users

cosors

👀

View attachment 11115
And you are right!
18 is much more pleasant than 15 on handhelds.
 
  • Like
  • Love
Reactions: 4 users
 

Tony Coles

Regular
It is a photo of a very special thing: stainless steel and multibeam interference. There are deposits of abrasive additives because it has lain for years in the machine. The light is specially refracted by the nanometre-thin deposits, like shells or opals. The steel is normally grey. There are multiple pathways to colour, not just pigments.
WOW! That's pretty cool, do you have Akida in your phone 📞😂
 
  • Like
Reactions: 4 users

cosors

👀
  • Fire
  • Love
  • Like
Reactions: 7 users

miaeffect

Oat latte lover

Si🐖 ❤
 
  • Like
  • Fire
  • Love
Reactions: 18 users

stuart888

Regular


Just for you, Dad, as I know you're watching.

Thanks from a cousin wannabe! Immediately, he mentions inference vs. machine learning, just as Brainchip's Jerome Nadel did in his recent interviews.

It is important for newer folks: the Brainchip solution learns! It has on-edge machine learning (the JAST rules, bought from Simon Thorpe's team), not just inference (standard AI decisioning). Akida learns quickly too; even complex patterns can be found with just a few or a handful of training passes. The patterns it can learn from can be video, sound, voice, vibration, etc.

Learning edge example: it could be used by Mercedes when a brand-new driver sits down in the driver's seat. The system would likely ask the new driver to speak their name, and the learning begins just before the ride. If the same driver sits down the next day with glasses on and a scruffy beard, it learns more. Brainchip's solution can learn both the driver's face and their vocal patterns. After a handful of drives, hairstyles, etc., Akida would recognize the driver's various looks and vocal sounds. So cool! A rough sketch of that flow is below.
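
To make that flow concrete, here is a minimal sketch of few-shot enrolment and matching. Every name in it (extract_embedding, enroll, identify) is invented for illustration; this is not BrainChip's actual API, and a real Akida system would learn on-chip rather than in Python:

```python
import numpy as np

def extract_embedding(frame: np.ndarray) -> np.ndarray:
    """Placeholder for the feature vector a real on-device network would produce."""
    v = frame.astype(np.float32).flatten()[:128]
    return v / (np.linalg.norm(v) + 1e-9)

known_drivers: dict[str, list[np.ndarray]] = {}

def enroll(name: str, frame: np.ndarray) -> None:
    """Store one more embedding for this driver; each new look adds a training shot."""
    known_drivers.setdefault(name, []).append(extract_embedding(frame))

def identify(frame: np.ndarray, threshold: float = 0.8) -> str | None:
    """Return the closest enrolled driver, or None if nobody clears the threshold."""
    query = extract_embedding(frame)
    best_name, best_score = None, threshold
    for name, embeddings in known_drivers.items():
        score = max(float(np.dot(query, e)) for e in embeddings)  # cosine, unit vectors
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

Calling enroll() again on a later day, glasses and scruffy beard included, is exactly the "learns more" step described above.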

Very important.
"Akida Machine Learning Onboard"!

https://community.arm.com/arm-commu...cs-training-vs-inference-whats-the-difference
 
  • Like
  • Love
  • Fire
Reactions: 34 users

Dr E Brown

Regular


How many contain Akida?
 
  • Like
  • Love
  • Fire
Reactions: 36 users
Vorago tweet, just in case it does not show correctly here.
 
  • Like
  • Fire
  • Love
Reactions: 12 users
lol. It's happened again. JINX
 
  • Haha
  • Like
  • Sad
Reactions: 7 users

Quiltman

Regular
I join the chorus of those who would like to pay homage and say thanks to FactFinder.
I believe he has made the right call; the amount of time he had to invest to maintain the incredible level of contribution he was making was unhealthy IMHO, almost verging on obsessive. Hopefully, after some well-earned R&R, he will find a way to return with fewer, but highly informed, contributions to the forum.

I, like many of us here I imagine, had become somewhat guilty of relying on FF's research. I, like he, have already convinced myself of the tech, the addressable market, and the ability of the high-calibre team to deliver for BRN, and eagerly consumed the information posted by FactFinder (& others) as we move forward in our commercialisation.

What we actually need is a whole group of "5% FactFinders", delivering the same high level of factual information, but at a rate that can be maintained without our beloveds divorcing us! Some of the contributions I have read since FF's departure tell me we are well on our way.

The end result of this army of 5% FactFinders will be a greater base of high-conviction retail holders, which will benefit us all when things really start to heat up in 6-12 months' time. It makes me think of this quote by Ian Cassel.

(attached image: quote by Ian Cassel)


Thanks again FactFinder.
Good luck with your health, your family's health & finding balance.
 
  • Like
  • Love
  • Fire
Reactions: 56 users

Dougie54

Regular
My honest opinion: we ain't working with them anymore and are collaborating with either or both of Biotone and NASA.
I tried searching through their website and everywhere else overnight, and to me it appears the trial didn't proceed, yet the website shows the breath device up and running. Now I'm more confused.
 
  • Like
  • Thinking
Reactions: 6 users

Makeme 2020

Regular
Renesas Electronics Corporation

Revolution of Endpoint AI in Embedded Vision Applications

Karol Saja
Staff Engineer



Endpoint AI is a new frontier in the space of artificial intelligence that takes the processing power of AI to the edge. It is a revolutionary way of managing information, accumulating relevant data, and making decisions locally on a device. Endpoint AI employs intelligent functionality at the edge of the network; in other words, it transforms the IoT devices that are used to compute data into smarter tools embedded with AI features. This in turn improves real-time decision-making capabilities and functionalities. The goal is to bring machine-learning-based intelligent decision-making physically closer to the source of the data. In this context, embedded vision shifts to the endpoint. Embedded vision incorporates more than breaking down images or videos into pixels: it is the means to understand pixels, make sense of what is inside them, and support making a smart decision based on specific events that transpire. There have been massive endeavours at the research and industry levels to develop and improve AI technologies and algorithms.

What is Embedded Vision?

Embedded computer vision is a technology that imparts machines with the ability to see, i.e., the sense of sight, which enables them to explore the environment with the support of machine-learning and deep-learning algorithms. There are numerous applications across several industries whose functionality relies on computer vision, making it an integral part of technological processes. In precise terms, computer vision is one of the fields of artificial intelligence (AI) that enables machines to extract meaningful information from digital multimedia sources and take actions or make recommendations based on the information obtained. Computer vision is, to some extent, akin to the human sense of sight. However, the two differ on several grounds. Human sight has the exceptional ability to understand many and varied things from what it sees. Computer vision, on the other hand, recognizes only what it has been trained on and designed to do, and even then with an error rate. AI in embedded vision trains machines to perform the intended functions with the least processing time, and it has an edge over human sight in analysing hundreds of thousands of images in a shorter timeframe.
Embedded vision is one of the leading technologies with embedded AI used in smart endpoint applications across a wide range of consumer and industrial settings. There are a number of value-added use cases: counting and analysing the quality of products on a factory line, keeping a tally of people in a crowd, identifying objects, and analysing the contents of a specific area of the environment, to name a few.
When processing embedded vision applications at the endpoint, performance may face some challenges. The data flow from the vision-sensing device to the cloud for analysis and processing can be very large and may exceed the available network bandwidth. For instance, a 1920×1080 camera operating at 30 FPS (frames per second) may generate about 190 MB/s of data. In addition to privacy concerns, this substantial amount of data contributes to latency during the round trip from the edge to the cloud and back to the endpoint. These limitations could negatively impact the use of embedded vision technologies in real-time applications.
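
As a sanity check on that figure, a quick back-of-the-envelope calculation (assuming an uncompressed 24-bit RGB stream, i.e. 3 bytes per pixel):

```python
# Uncompressed bandwidth of a 1080p, 30 FPS RGB camera stream
width, height = 1920, 1080   # pixels per frame
bytes_per_pixel = 3          # 24-bit RGB, no compression
fps = 30                     # frames per second

bytes_per_second = width * height * bytes_per_pixel * fps
print(f"{bytes_per_second / 1e6:.0f} MB/s")  # ~187 MB/s, in line with the ~190 MB/s above
```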
IoT security is also a concern in the adoption and growth of embedded vision applications across any segment. In general, all IoT devices must be secured. A critical issue in the use of smart vision devices is the possible misuse of sensitive images and videos. Unauthorised access to smart cameras, for example, is not only a breach of privacy; it could pave the way for more harmful outcomes.

Vision AI at the Endpoint

  • Endpoint AI can enable image processing to infer complex insights from a huge number of captured images.
  • AI uses machine-learning and deep-learning capabilities within the smart imaging devices to handle a huge number of well-known use cases.
  • For optimum performance, embedded vision requires AI algorithms to run on the endpoint devices rather than transmitting data to the cloud: the data is captured by the image-recognition device, then processed and analysed on that same device.
Power consumption at the endpoint remains a limitation, as the microcontrollers or microprocessors need to be highly efficient to take on the high volumes of multiply-accumulate (MAC) operations required for AI processing. An illustrative MAC estimate follows.
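
To give a feel for those MAC volumes, here is a rough estimate for a single convolution layer. The layer dimensions are illustrative assumptions, not taken from any specific Renesas or Akida model:

```python
# MACs for one Conv2D layer = out_h * out_w * out_ch * kernel_h * kernel_w * in_ch
def conv_macs(out_h: int, out_w: int, out_ch: int,
              kernel_h: int, kernel_w: int, in_ch: int) -> int:
    return out_h * out_w * out_ch * kernel_h * kernel_w * in_ch

# Illustrative layer: 112x112 output, 32 filters, 3x3 kernel, 16 input channels
macs = conv_macs(112, 112, 32, 3, 3, 16)
print(f"{macs / 1e6:.1f} M MACs")  # ~57.8 M MACs for this one layer, every frame
```

Multiply that by a few dozen layers and 30 frames per second, and it is clear why MAC efficiency dominates endpoint AI hardware design.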

Deployment of AI Vision Applications

There are countless use cases for deploying AI in vision applications in the real world. Here are some examples where Renesas can provide comprehensive MCU- and MPU-based solutions, including all the necessary software and tools to enable quick development.

Smart Access Control:

Security access control systems are becoming more valuable with the addition of voice- and facial-recognition features. Real-time recognition requires embedded systems with very high computational capability and on-chip hardware acceleration. To meet this challenge, Renesas provides a choice of MCUs or MPUs that offer very high computational power and integrate many key features critical to high-performance facial- and voice-recognition systems, such as built-in H.265 hardware decoding, 2D/3D graphics acceleration, and ECC on internal and external memory to eliminate soft errors and allow high-speed video processing.

Industrial Control:

Embedded vision has a huge impact here, as it is used in many applications, including product safety, automation, product sorting, and more. AI techniques can perform multiple operations in the production process, such as packaging and distribution, ensuring quality and safety at all stages of production. Safety oversight is needed in areas such as critical infrastructure, warehouses, production plants, and buildings, which would otherwise require a high level of human resources.

Transportation:

Computer vision offers a wide range of ways to improve transportation services. In self-driving cars, for example, computer vision is used to detect and classify objects on the road. It is also used to create 3D maps and estimate surrounding movement. Using computer vision, self-driving cars gather information from the environment with cameras and sensors, then interpret and analyse the data to make the most suitable response using vision techniques such as pattern recognition, feature extraction, and object tracking.
(Image: computer vision is used to detect and classify objects on the road)

In general, embedded vision can serve many purposes, and these functionalities can be used after customization and the required training on different types of datasets from many domains. Functionalities include monitoring a physical area, recognising intrusion, detecting crowd density, and counting humans, objects, or animals. They also include identifying people, finding cars by number plate, detecting motion, and analysing human behaviour in various scenarios. A minimal motion-detection sketch follows.
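
For instance, the motion-detection use case can be prototyped in a few lines with OpenCV background subtraction. This is a minimal sketch assuming a local webcam; a production endpoint would run this logic, or a quantized model, on the MCU itself:

```python
import cv2

cap = cv2.VideoCapture(0)  # default camera
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                     # foreground = changed pixels
    motion_ratio = cv2.countNonZero(mask) / mask.size  # fraction of frame in motion
    if motion_ratio > 0.02:                            # >2% of pixels changed
        print("motion detected")

cap.release()
```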

Case study: Agricultural Plant Disease Detection

Vision AI and deep learning may be employed to detect various anomalies; plant disease detection is one example of this type of system. Deep learning algorithms, one family of AI techniques, are widely used for this purpose. According to research, computer vision gives more accurate, faster, and lower-cost results than the costly, slow, labour-intensive methods used previously.
The process used in this case study can be applied to any other detection task. There are three main steps for using deep learning in computer/machine vision:
(Image: the three main steps for using deep learning in computer/machine vision)

Step one is performed on normal computers in the lab, whereas step two is deployed on a microcontroller at the endpoint, which can be on the farm. The results in step three are displayed on a screen on the user side. The following diagram shows the process in general.
(Image: general process of deep learning in computer/machine vision)
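
One common way to realise these three steps is the Keras-to-TensorFlow-Lite flow sketched below. The toolchain choice and the tiny model are illustrative assumptions on my part; the article does not specify which tools Renesas uses:

```python
import tensorflow as tf

# Step one (lab): train a small image classifier on labelled leaf photos
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(96, 96, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4, activation="softmax"),  # e.g. healthy + 3 diseases
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(train_images, train_labels, epochs=10)   # training data not shown

# Step two (endpoint): convert and quantize the model for a microcontroller
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("plant_disease.tflite", "wb") as f:
    f.write(converter.convert())

# Step three (user side): the MCU runs inference on camera frames and the
# classification result is shown on the farmer's display
```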

Conclusion:

We are experiencing a revolution in high-performance smart vision applications across a number of segments. The trend is well supported by the growing computational power of microcontrollers and microprocessors at the endpoint, opening up great opportunities for exciting new vision applications. Renesas Vision AI solutions can help you enhance overall system capability by delivering embedded AI technology with intelligent data processing at the endpoint. Our advanced image-processing solutions at the edge are provided through a unique combination of low-power, multimodal, multi-feature AI inference capabilities. Take the chance now and start developing your vision AI application with Renesas Electronics.


 
  • Like
  • Fire
  • Love
Reactions: 24 users
(attached image: hub.JPG)

 
  • Like
  • Love
  • Fire
Reactions: 22 users

Diogenese

Top 20
It is a photo of a very special thing: stainless steel and multibeam interference. There are deposits of abrasive additives because it has lain for years in the machine. The light is specially refracted by the nanometre-thin deposits, like shells or opals. The steel is normally grey. There are multiple pathways to colour, not just pigments.
... or Jackson Pollock
 
  • Haha
Reactions: 5 users