BRN Discussion Ongoing

Learning

Learning to the Top 🕵‍♂️
Apologies if this has been posted.


It's great to be a shareholder.
 
  • Like
  • Fire
  • Love
Reactions: 12 users

MDhere

Regular
Fresh tweet




Sensor Fusion with Deep Learning​

Suad Jusuf
Senior Manager


Sensors are increasingly being used in our everyday lives to help collect meaningful data across a wide range of applications, such as building HVAC systems, industrial automation, healthcare, access control, and security systems, to name just a few. A sensor fusion network assists in retrieving data from multiple sensors to provide a more holistic view of the environment around a smart endpoint device. In other words, sensor fusion provides techniques to combine data from multiple physical sensors to generate an accurate ground truth, even though each individual sensor might be unreliable on its own. This process helps to reduce the amount of uncertainty involved in overall task performance.
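As a rough illustration of that idea (not taken from the Renesas article), the sketch below fuses several noisy readings of the same quantity by weighting each sensor by the inverse of its assumed noise variance; the sensor values and variances are made up for the example.

```python
import numpy as np

# Hypothetical readings of the same room temperature (deg C) from three sensors,
# each with a different assumed noise variance.
readings = np.array([21.3, 20.8, 22.1])
variances = np.array([0.10, 0.40, 0.90])

# Inverse-variance weighting: the more reliable a sensor, the more it contributes.
weights = 1.0 / variances
weights /= weights.sum()

fused_estimate = float(np.dot(weights, readings))
fused_variance = 1.0 / float(np.sum(1.0 / variances))

print(f"Fused estimate: {fused_estimate:.2f} degC (variance {fused_variance:.3f})")
```

The fused variance comes out lower than any single sensor's variance, which is the sense in which fusion reduces uncertainty even when each individual sensor is unreliable on its own.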
To increase intelligence and reliability, the application of deep learning to sensor fusion is becoming increasingly important across a wide range of industrial and consumer segments.
From a data science perspective, this paradigm shift allows relevant knowledge to be extracted from monitored assets through the adoption of intelligent monitoring and sensor fusion strategies, as well as through the application of machine learning and optimization methods. One of the main goals of data science in this context is to effectively predict abnormal behaviour in industrial machinery, tools, and processes in order to anticipate critical events and damage, ultimately preventing significant economic losses and safety issues.
Renesas Electronics provides intelligent endpoint sensing devices as well as a wide range of analog-rich microcontrollers that can become the heart of smart sensors, enabling more accurate sensor fusion solutions across different applications. In this context, sensor data in a typical sensor fusion network may be combined as follows:
  • Redundant sensors: All sensors give the same information about the world.
  • Complementary sensors: The sensors provide independent (disjoint) types of information about the world.
  • Coordinated sensors: The sensors collect information about the world sequentially.

Communication in a sensor network is the backbone of the entire solution and can follow any of the schemes below:
  • Decentralized: No communication exists between the sensor nodes.
  • Centralized: All sensors provide measurements to a central node.
  • Distributed: The nodes exchange information at a given communication rate (e.g., every fifth scan, i.e., a one-fifth communication rate).
The centralized scheme can be regarded as a special case of the distributed scheme in which the sensors communicate with each other on every scan. A pictorial representation of the fusion process is given in the figure below.
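As a loose sketch of the distributed scheme just described (a toy example, not Renesas code), each node below keeps a running local estimate and only exchanges it with its peers every k-th scan; setting the exchange interval to 1 recovers the centralized case where everything is shared on every scan.

```python
import numpy as np

class SensorNode:
    """Toy node: keeps a running average and shares it only every `comm_every` scans."""

    def __init__(self, noise_std, comm_every=5):
        self.noise_std = noise_std
        self.comm_every = comm_every   # e.g. 5 -> one-fifth communication rate
        self.estimate = 0.0
        self.scan_count = 0

    def scan(self, true_value):
        measurement = true_value + np.random.normal(0.0, self.noise_std)
        self.scan_count += 1
        # Running average as the node's local estimate.
        self.estimate += (measurement - self.estimate) / self.scan_count

    def maybe_exchange(self, peers):
        # Distributed scheme: exchange information only on every `comm_every`-th scan.
        if self.scan_count % self.comm_every == 0:
            self.estimate = float(np.mean([self.estimate] + [p.estimate for p in peers]))

nodes = [SensorNode(0.5), SensorNode(1.0), SensorNode(2.0)]
for _ in range(20):
    for n in nodes:
        n.scan(true_value=10.0)
    for n in nodes:
        n.maybe_exchange([p for p in nodes if p is not n])

print([round(n.estimate, 2) for n in nodes])   # all nodes converge near 10.0
```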
[Image: A pictorial representation of the fusion process]

From an Industry 4.0 perspective, feedback from a single sensor is typically not enough, particularly for the implementation of control algorithms.

Deep Learning​

Precisely calibrated and synchronized sensors are a precondition for effective sensor fusion. Renesas provides a range of solutions to enable informed decision-making by executing advanced sensor fusion at the endpoint on a centralized processing platform.
Performing late fusion allows for interoperable solutions, while early fusion gives the AI rich data for predictions. Leveraging the complementary strengths of the different strategies is the key advantage. The modern approach involves time and space synchronization of all onboard sensors before feeding the synchronized data to the neural network for predictions. This data is then used for AI training or for Software-in-the-Loop (SIL) testing of a real-time algorithm that receives only a limited subset of the information.
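To make the "time and space synchronization" step a little more concrete, here is a small, hypothetical sketch (column names and rates are invented) that resamples two sensor streams with different sampling rates onto a common timeline before they are handed to a model:

```python
import numpy as np
import pandas as pd

# Hypothetical raw streams: a 100 Hz accelerometer axis and a 1 Hz temperature sensor.
accel = pd.DataFrame(
    {"accel_z": np.random.randn(1000)},
    index=pd.date_range("2022-08-09", periods=1000, freq="10ms"),
)
temp = pd.DataFrame(
    {"temp_c": 25 + np.random.randn(10)},
    index=pd.date_range("2022-08-09", periods=10, freq="1s"),
)

# Resample both onto a common 100 ms grid: average the fast data, forward-fill the slow data.
grid_accel = accel.resample("100ms").mean()
grid_temp = temp.resample("100ms").ffill()

# Early fusion: one time-aligned feature table that a neural network can consume.
synchronized = grid_accel.join(grid_temp).dropna()
print(synchronized.head())
```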
Deep learning uses neural networks for advanced machine learning and leverages high-performance computational platforms such as the Renesas RA MCU and RZ MPU for training and execution. These deep neural networks consist of many processing layers arranged to learn representations of sensor fusion data at varying levels of abstraction. The more layers in the deep neural network, the more abstract the learned representations become.
Deep learning offers a form of representation learning that aims to express complicated data representations in terms of other, simpler representations. Deep learning techniques learn features using a composite of several layers, each with its own mathematical transforms, to generate abstract representations that better distinguish high-level features in the data for enhanced separation and understanding of its true form.
Multi-stream neural networks are useful for generating predictions from multi-modal data, where each data stream contributes to the overall joint inference generated by the network. Multi-stream approaches have proven successful for multi-modal data fusion, and deep neural networks have been applied successfully in multiple applications such as neural machine translation and time-series sensor data fusion.
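A minimal multi-stream sketch in PyTorch (a toy architecture for illustration, not a Renesas or BrainChip model): one small branch per modality, with the branch outputs concatenated before a joint classification head.

```python
import torch
import torch.nn as nn

class MultiStreamNet(nn.Module):
    """Toy two-stream network: one branch per sensor modality, fused by concatenation."""

    def __init__(self, imu_features=6, audio_features=64, num_classes=4):
        super().__init__()
        self.imu_branch = nn.Sequential(
            nn.Linear(imu_features, 32), nn.ReLU(), nn.Linear(32, 16), nn.ReLU()
        )
        self.audio_branch = nn.Sequential(
            nn.Linear(audio_features, 64), nn.ReLU(), nn.Linear(64, 16), nn.ReLU()
        )
        # The joint head operates on the concatenated (fused) representation.
        self.head = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, num_classes))

    def forward(self, imu, audio):
        fused = torch.cat([self.imu_branch(imu), self.audio_branch(audio)], dim=1)
        return self.head(fused)

model = MultiStreamNet()
logits = model(torch.randn(8, 6), torch.randn(8, 64))  # batch of 8 samples
print(logits.shape)  # torch.Size([8, 4])
```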
This is a tremendous breakthrough that allows deep neural networks to be trained and deployed in MCU-based endpoint applications, thereby helping to accelerate industrial adoption. The Renesas RA MCU platform and the associated Flexible Software Package, combined with AI modeling tools, offer the ability to apply many types of neural network layers in a multi-layer structure. Typically, more layers lead to more abstract features learned by the network, and stacking multiple types of layers in a heterogeneous mixture has been shown to outperform a homogeneous stack of layers. Renesas sensing solutions can compensate for deficiencies in information by utilizing feedback from multiple sensors: the shortcomings of individual sensors in deriving particular types of information can be offset by combining the data from several of them.
The flexible Renesas Advanced (RA) microcontrollers (MCUs) are industry-leading 32-bit MCUs and a great choice for building smart sensors. With the wide range of Renesas RA family MCUs, you can choose the best fit for your application needs. The Renesas RA MCU platform, combined with strong support & SW ecosystem, will help accelerate the development of Industry 4.0 applications with sensor fusion and deep learning modules.
As part of Renesas' extensive solution and design support, Renesas provides a reference design for a versatile Artificial Intelligence of Things (AIoT) sensor board solution. It targets applications in industrial predictive maintenance, smart home/IoT appliances with gesture recognition, wearables (activity tracking), and mobile devices with innovative human-machine interface (HMI) solutions such as FingerSense. As part of this solution, Renesas can provide a complete range of devices, including an IoT-specified RA microcontroller, an air quality sensor, a light sensor, a temperature and humidity sensor, and a 6-axis inertial measurement unit, as well as cellular and Bluetooth communication support.
With the increasing number of sensors in Industry 4.0 systems comes a growing demand for sensor fusion to make sense of the mountains of data those sensors produce. Suppliers are responding with integrated sensor fusion devices. For example, an intelligent condition-monitoring box is available that is designed for machine condition monitoring based on fusing data from vibration, sound, temperature, and magnetic field sensors. Additional sensor modalities for monitoring acceleration, rotational speed, and shock and vibration can optionally be included.
The system implements sensor fusion through AI algorithms to classify abnormal operating conditions with better granularity, resulting in high-probability decision-making. This edge AI architecture can simplify handling of the big data produced by sensor fusion, ensuring that only the most relevant data is sent to the edge AI processor or to the cloud for further analysis and possible use in training ML algorithms.
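As a hand-wavy sketch of that "send only the most relevant data" pattern (an invented setup, with a placeholder scoring function standing in for the on-device model), the endpoint scores each window of sensor data locally and forwards only the suspicious ones:

```python
import numpy as np

def anomaly_score(window):
    # Placeholder: in a real system this would be the on-device ML model's output.
    return float(np.abs(window - window.mean()).max())

def filter_for_upload(windows, threshold=5.0):
    """Keep only the windows worth sending to the edge AI processor or cloud (toy logic)."""
    return [w for w in windows if anomaly_score(w) >= threshold]

windows = [np.random.normal(0.0, 1.0, size=256) for _ in range(100)]
windows[42][:8] += 10.0                  # inject a short "abnormal" vibration burst
relevant = filter_for_upload(windows)
print(f"uploading {len(relevant)} of {len(windows)} windows")
```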
The use of AI-based Deep Learning has several benefits:
  • The AI algorithm can employ sensor fusion to utilize the data from one sensor to compensate for weaknesses in the data from other sensors.
  • The AI algorithm can classify the relevance of each sensor to specific tasks and minimize or ignore data from sensors determined to be less important.
  • Through continuous training at the edge or in the cloud, AI/ML algorithms can learn to identify changes in system behaviour that were previously unrecognized.
  • The AI algorithm can predict possible sources of failures, enabling preventative maintenance and improving overall productivity.
Sensor fusion combined with AI deep learning produces a powerful tool to maximize the benefits when using a variety of sensor modalities. AI/ML-based enhanced sensor fusion can be employed at several levels in a system, including at the data level, the fusion level, and the decision level. Basic functions in sensor fusion implementations include smoothing and filtering sensor data and predicting sensor and system states.
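For the "smoothing and filtering" basics mentioned above, a one-dimensional Kalman filter is a common building block; the sketch below (with assumed noise parameters) smooths a noisy scalar channel that is expected to be roughly constant.

```python
import numpy as np

def kalman_1d(measurements, process_var=1e-4, measurement_var=0.25):
    """Minimal 1-D Kalman filter for a nearly constant signal (illustrative only)."""
    estimate, error = float(measurements[0]), 1.0
    smoothed = []
    for z in measurements:
        # Predict: the state is assumed roughly constant, so only the uncertainty grows.
        error += process_var
        # Update: blend the prediction and the new measurement via the Kalman gain.
        gain = error / (error + measurement_var)
        estimate += gain * (z - estimate)
        error *= (1.0 - gain)
        smoothed.append(estimate)
    return np.array(smoothed)

noisy = 5.0 + np.random.normal(0.0, 0.5, size=200)   # hypothetical sensor channel
print(kalman_1d(noisy)[-5:])                          # settles close to 5.0
```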
At Renesas Electronics, we invite you to take advantage of our high-performance MCUs and A&P portfolio combined with a complete SW platform providing targeted deep learning models and tools to build next generation sensor fusion solutions.

Thank you littleshort! 😀 And there it is again: Sensor Fusion!
And Brainchip have also used the same words in a couple of positions: the Customer Support Engineer and Solutions Architect roles.
Screenshot_20220809-231313_Chrome.jpg
 
  • Like
  • Fire
  • Love
Reactions: 34 users

Dhm

Regular
If you get to see this person again (I assume it is Nirav Patel you are talking about), ask him if he knows Andrew Morton, a lead developer and maintainer of the Linux kernel. Andrew is also employed by Google. I used to work with Andrew back in the 1990s; we were jogging buddies too. I expect the CTO of the Linux Foundation probably has a working relationship with him. But I digress.

Open source means anyone can write software and others can use it freely. Even better, the actual source code is published in such a way that others can see the actual source code and can even modify it and re-publish it. That happens quite often and is how the community assists itself. This could help Akida uptake as people write software that can utilise Akida’s IP, and then users need to buy a licence, or buy hardware, or pay for IP on which to run their solution.

You probably have seen that software @uiux has been sharing. That’s on a platform that is used for software developers to share their code openly. And that is what @uiux is doing. And uiux’s software can help others work with Akida! At least show them some things that are possible and how simple it is.

For the coding I used to do, we were never allowed to use open source or publish our code as open source. Sometimes we had to work on air-gapped computers in secure internal rooms. It was secret squirrel stuff. I only add that as I expect a lot of companies developing systems utilising Akida may be in the same boat. Their code is the opposite of open source, as their livelihood depends on it. But any open source stuff out there will certainly help with prototyping and proof-of-concept work to show to investors and the like.

My view is that open source is for playing with and testing the waters only. If that is what was meant by “vital to the current evolution of computing” then I agree.

Any real-world, life-critical programs most likely will not contain open source code. The operating system may very well be open source itself, but the applications won’t.
Sly, I seem to have led you astray on the bloke's position within the company. He is GM of Visual Media and Games at the Linux Foundation, based in Florida. I don't want to mention his name but it is somewhat 'regal'.

I would imagine that our Akida would add some speed or awareness advantage to games, or am I off the mark?
 
  • Like
  • Fire
Reactions: 10 users

dippY22

Regular
I think Rob likes all sorts of stuff to make people aware of his/Brainchip's presence. It's an easy way to increase exposure. I would think that whoever controls those accounts may see that he has liked something and wonder more about him, with the potential to drum up business.
And the hard way to increase exposure is to get in front of the potential customer, identify their strategic goals and whether their corporate "pain" is related to increasing revenues or reducing expenses. Once he has done that he can determine whether Brainchip's Akida technology offers a cost effective solution to this "pain", and make a convincing case to then close the deal.

Sharing likes from the comfort of his phone or p.c. is easy. Closing a potential customer is hard.

Just sayin,.....dippY
 
  • Like
  • Love
Reactions: 8 users

Potato

Regular
CHIPS Act signed.
I would be surprised if Brainchip haven't tried to leverage their technology with big names to benefit from this.

Time will tell, I guess.
 
  • Like
  • Fire
Reactions: 17 users

Proga

Regular
Nvidia is down again on the Nasdaq along with AMD, Micron, NXP and most other semiconductor stocks
 
  • Sad
  • Like
Reactions: 4 users
D

Deleted member 118

Guest
Hi @Zeebot - is it possible to have a thread dedicated to BRN short selling?

It's hard enough to navigate the BRN Discussion thread as it is, due to multiple people posting the same information, not to mention off-topic banter.

According to shortman.com, the outstanding shorts on Jun 24 were 4.5% and on Aug 3 they were 4.51%, so nothing to see here ATM.
I am wondering why this fixation continues on the BRN threads.

There will always be people that don’t agree with our own views, that’s just how it is, but it is nothing new. The numbers are fairly steady. Of course, if that was to change, it would be newsworthy.

I have been accused of liking shorters. That is bs. I wish it was illegal but it isn’t.

No doubt I have offended some, that is not my intention.

Anyway, just a request - I have no problem if it is denied.

Are you losing money?


 
  • Haha
Reactions: 11 users
Hi @Zeebot - is it possible to have a thread dedicated to BRN short selling?

It's hard enough to navigate the BRN Discussion thread as it is, due to multiple people posting the same information, not to mention off-topic banter.

According to shortman.com, the outstanding shorts on Jun 24 were 4.5% and on Aug 3 they were 4.51%, so nothing to see here ATM.
I am wondering why this fixation continues on the BRN threads.

There will always be people that don’t agree with our own views, that’s just how it is, but it is nothing new. The numbers are fairly steady. Of course, if that was to change, it would be newsworthy.

I have been accused of liking shorters. That is bs. I wish it was illegal but it isn’t.

No doubt I have offended some, that is not my intention.

Anyway, just a request - I have no problem if it is denied.
Hey Teach22, I'm not sure what you're talking about.
Out of your 7 posts, 3 have been related to shorting, 2 have complained about people discussing shorting, and 1 related to the huge shorting "desperation" attack on the 28th and 29th of July, which you called "profit taking"
(and which just happened to coincide with a 200 to 400% increase in new shorts on the preceding days).

This is a general BRN discussion thread, and I don't see posters going on and on about shorters,
so I'm not sure what your concern is.

Other than a daily update on the short positions by Esq.111, which I'm sure is of interest to most, and some occasional bashing/humiliation from me, along with a bit of "burn shorters burn" fervour from a few when we are moving hard, I'm not sure what you're on about.

I know shorters are a part of every major company, and we will encounter different levels of them all the way up, but BRN shareholders, in general, enjoy a particular satisfaction in seeing them come undone,
as they have close to no idea about the Company they are playing with. 🔥🔥🔥

I'd say your nonchalant attitude to shorters would be at odds with most here.
I think the general feel of the room is delight in seeing them burn! 🔥🔥🔥

I don't offer any gems of research and only really give my opinions on what others have posted,
so ignoring me will pretty much achieve your objectives. 😉

I also believe that supporting this website for $5 a month allows you to filter the most popular posts, which will cut down on your navigation time.

Good Fortune to All Shareholders!
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 68 users

jtardif999

Regular
What I think would be interesting to see would be Akida's ability to learn autonomously. There used to be a video, using DVS data from inilabs, where Akida would learn to recognise cars on a highway and start to "count" them.

I wonder if this could be applied to things in the blood, to identify new things that shouldn't be there.
Being able to count things really fast when they are hard to count (like the car example) has endless uses, e.g. counting clear capsules being manufactured and looking for production errors; apparently that's an issue for standard camera tech, which gets confounded by the clear, curved nature of the capsules as they move on a conveyor belt. Not a problem for DVS event-based sensing.
 
  • Like
  • Fire
  • Love
Reactions: 11 users

cosors

👀
For fuck's sake, this site is now just embarrassing, so many precious petals with their heads up their arses, who just have to be right, but sook as soon as they cop a question or differing viewpoint. A-grade cringe.

Sayonara 👋
Seems like that was actually his last post because his avatar is deleted. I didn't catch what or who he was so upset about. I must have missed something or not understood 🤷‍♂️
 
  • Like
  • Haha
  • Thinking
Reactions: 11 users

MDhere

Regular
Seems like that was actually his last post because his avatar is deleted. I didn't catch what or who he was so upset about. I must have missed something or not understood 🤷‍♂️
Seems I may have missed his tantrum, and rightfully so 🤣
 
  • Haha
  • Like
Reactions: 7 users

Dhm

Regular
I mentioned yesterday that I met a bloke who worked for the Linux Foundation. I couldn’t understand how ‘open source’ computing works and asked him if RISC-V was similar. Please bear in mind I am a minnow trying to understand the ways of the whales of the industry (of which we are one)
His response: (and any comments by you guys appreciated)

EA9CF561-1DB7-4B3B-B430-C08C2928B8A7.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 13 users



Edge Impulse

Lunch & Learn with Edge Impulse

Virtual Lunch and Learn with Edge Impulse
August 10th at 10am PT

Edge Machine Learning for Health and Safety Monitoring

What to expect?​

Edge ML is one of the most promising innovations of the last few years, enabling AI at the edge across a wide variety of industries. It addresses the need for intelligence in IoT devices, allowing end users to capture insights from audio, video, and sensor data. With shipments estimated to exceed 2 billion devices annually by the end of 2022, the market opportunity is massive.

Wearable technology combined with edge machine learning allows people to take charge of their health data in real time. One can now check key bodily metrics to make better health-conscious decisions. For instance, some devices improve posture, while others monitor UV exposure or glucose levels, correct circadian rhythms, and help regulate body temperature. Using edge machine learning, these wearable devices are bringing unique solutions to problems.

Join this Lunch and Learn for a chance to hear from one of the leading experts in this field, as he will highlight the real market opportunities, key use cases, and existing gaps before discussing the appropriate tools and services, providing recommendations, and sharing best practices to accelerate enterprise adoption of edge ML in health and safety space.


Zach Shelby
Co-founder & CEO, Edge Impulse

Zach is an entrepreneur, investor and technologist in the embedded space with a passion for TinyML and engineering. He is a former Arm VP, founder and CEO of the Micro:bit Foundation and Sensinode, active in several of his portfolio companies, and working to bring ML to any embedded device.
 
  • Like
  • Love
  • Fire
Reactions: 37 users

Sirod69

bavarian girl ;-)
Seems like that was actually his last post because his avatar is deleted. I didn't catch what or who he was so upset about. I must have missed something or not understood 🤷‍♂️
@Filobeddo is still with us, I think he got upset about our iceberg discussion

1660069945651.png
 
  • Like
  • Fire
  • Love
Reactions: 9 users

Sirod69

bavarian girl ;-)

A Shift in Computer Vision Is Coming​

1660070369348.png

Prophesee’s evaluation kit for its DVS sensor developed in collaboration with Sony. Benosman is a co-founder of Prophesee. (Source: Prophesee)

General-purpose neuromorphic processors are lagging behind their DVS camera counterparts. Efforts from some of the industry’s biggest players (IBM Truenorth, Intel Loihi) are still works in progress. Benosman said that the right processor with the right sensor would be an unbeatable combination.


 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 32 users

cosors

👀
@Filobeddo is still with us, I think he got upset about our iceberg discussion

View attachment 13750
It's interesting how some take a step back and still stay with us. I find that good! Maybe someday the time will come when FF rises from the ashes. I don't begrudge anyone the time out. Yak, you are also very welcome from my side. With them, that would already make three. But still no Bacon; I find that a pity. So I guess all four of you, no, five! ❤️ Would Filobeddo get his name back if he changed his mind? 🤔
___
Quite interesting! Only six months (OK, I have held longer) and you are already my pack.
 
  • Like
  • Love
  • Fire
Reactions: 19 users

Sirod69

bavarian girl ;-)

Neuromorphic Device with Low Power Consumption​

1660071226991.png

Figure 1: The role of RRAM devices in neuromorphic circuits: (a) scanning electron microscopy (SEM) image of an HfO2 1T1R RRAM device, in blue, integrated on 130–nm CMOS technology, with its selector transistor (width of 650 nm) in green; (b) basic building block of the proposed neuromorphic circuit; (c) cumulative density function of the conductance of a population of 16–Kb RRAM devices, as a function of the compliance current ICC, which effectively controls the conductance level; (d) measurement of the circuit in (a); (e) measurement of the circuit in (b). (Source: “Neuromorphic object localization using resistive memories and ultrasonic transducers,” in Nature Communications)

While traditional processing techniques sample the detected signal continuously and perform calculations to extract useful information, the proposed neuromorphic solution calculates asynchronously when useful information arrives, increasing the system’s energy efficiency by up to five orders of magnitude.
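A crude way to picture the asynchronous idea (an illustration of the principle, not code from the paper): instead of running the full computation on every sample, only process the samples whose change exceeds a threshold, i.e. only when "useful information arrives".

```python
import numpy as np

def event_driven_process(signal, threshold=0.2):
    """Run the expensive step only on samples that changed enough (toy illustration)."""
    last_value = signal[0]
    events = []
    for t, value in enumerate(signal):
        if abs(value - last_value) >= threshold:   # "useful information arrived"
            events.append((t, value))              # expensive processing would go here
            last_value = value
    return events

# A mostly flat signal with one burst: only the burst triggers any computation.
signal = np.concatenate([np.zeros(500), np.sin(np.linspace(0, 3, 50)), np.zeros(500)])
events = event_driven_process(signal)
print(f"processed {len(events)} of {len(signal)} samples")
```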

 
  • Like
  • Fire
Reactions: 10 users

Sirod69

bavarian girl ;-)

Neuromorphic Device with Low Power Consumption​

View attachment 13753
Figure 1: The role of RRAM devices in neuromorphic circuits: (a) scanning electron microscopy (SEM) image of an HfO2 1T1R RRAM device, in blue, integrated on 130–nm CMOS technology, with its selector transistor (width of 650 nm) in green; (b) basic building block of the proposed neuromorphic circuit; (c) cumulative density function of the conductance of a population of 16–Kb RRAM devices, as a function of the compliance current ICC, which effectively controls the conductance level; (d) measurement of the circuit in (a); (e) measurement of the circuit in (b). (Source: “Neuromorphic object localization using resistive memories and ultrasonic transducers,” in Nature Communications)

While traditional processing techniques sample the detected signal continuously and perform calculations to extract useful information, the proposed neuromorphic solution calculates asynchronously when useful information arrives, increasing the system’s energy efficiency by up to five orders of magnitude.

I know it's an older article, but Edge Impulse posted it today.

Edge Impulse @EdgeImpulse 45min
.
@CEA_Leti researchers are coupling innovative sensors with RRAM–based neuromorphic computation to build ultra-low-power systems for edge AI applications.
 
  • Like
  • Fire
Reactions: 8 users

cosors

👀
  • Haha
  • Like
Reactions: 7 users