BRN Discussion Ongoing

The part I was interested in was the SNC, which I’ve now worked out is a Sensor Node Controller. It’s been around for a while, but maybe they’re using our 2 nodes for that purpose due to their low power?



Edit: sorry, trying to do this on my phone doesn’t work too well.

View attachment 9832


As I’ve said before I have no technical knowledge; just posing the questions so anyone who does know can answer.

Cheers

Instead of laughing, did anyone read the PDF on the SNC posted by Renesas? It’s a convoluted process of power management, and I can’t see where an NPU or any Akida nodes could be included.

 
  • Like
  • Fire
Reactions: 4 users

uiux

Regular
I have a huge amount of respect for your opinion, but what does this post mean? Yes or no on Akida?

This is from the datasheet of the product:

Analog interfaces
□ Ultra-Low Power Voice Activity Detection (VAD) enabling seamless audio processing with system-on current < 26 µA

Screenshot_20220622-114423.png


Screenshot_20220622-114807.png
 
  • Like
  • Fire
  • Haha
Reactions: 14 users

MrNick

Regular
 
  • Like
  • Fire
  • Love
Reactions: 48 users
I have a huge amount of respect for your opinion, but what does this post mean? Yes or no on Akida?
Ok. I got more laughs than I care for.

"(There's) no such thing as a stupid question" is a common phrase, that states that the quest for knowledgeincludes failure, and that just because one person may know less than others, they should not be afraid to ask rather than pretend they already know. In many cases multiple people may not know, but are too afraid to ask the "stupid question"; the one who asks the question may in fact be doing a service to those around them.
 
  • Like
  • Love
  • Fire
Reactions: 27 users

MrNick

Regular
A ubookquitous read for the 1000 eyes.
Screen Shot 2022-06-22 at 10.15.14 am.png
 
  • Like
  • Love
  • Haha
Reactions: 15 users
Ok. I got more laughs than I care for.

"(There's) no such thing as a stupid question" is a common phrase, that states that the quest for knowledgeincludes failure, and that just because one person may know less than others, they should not be afraid to ask rather than pretend they already know. In many cases multiple people may not know, but are too afraid to ask the "stupid question"; the one who asks the question may in fact be doing a service to those around them.

Didn’t mean to offend you. I was laughing with you as I quite often feel the same way when reading a lot of the technical information and sometimes the answers given to me are riddles! :)

I looked at the Renesas page about the SNC but it’s still Chinese to me :). (For some reason it won’t let me copy it across at the moment, so I’ve had to take photographs.) I haven’t seen anything about SNNs or Akida, so disappointingly I think it’s a no. However, when it comes to computers, ”I know nothing.”


1655864546878.png


Cheers!
 
  • Like
  • Love
  • Fire
Reactions: 22 users

mrgds

Regular
  • Haha
  • Thinking
  • Like
Reactions: 7 users

chapman89

Founding Member

Thanks for sharing.
At 15 minutes 30 seconds Rob Telson says they’ve helped NASA get to orbit, and that they’ve been able to capture images at extremely low power. He also touches on helping Mercedes achieve their goals… and he said other vehicle manufacturers, plural!
 
  • Like
  • Fire
  • Love
Reactions: 75 users

Boab

I wish I could paint like Vincent
Just received this newsletter via email: https://www.edge-ai-vision.com/latest-news/newsletter/?utm_campaign=EVI%20Newsletter&utm_medium=email&_hsmi=217201181&_hsenc=p2ANqtz-9mt67SVbR4NNylhbRO7T1MIdAgPtSDHuR_ukxvAehjBEhwTauuqXh6-eYrwQ-UzyewdWnCw5HjVDXLLSZZ6DnuDxUseA&utm_content=217082579&utm_source=hs_email
This is the website page if it makes it easier?

To view this newsletter online, please click here
DEVELOPING SECURE VISION-BASED DESIGNS

IoT and Vision: Why It’s a Security Minefield and How to Navigate It (Arm)
Recent advancements in machine learning have enabled market innovators to build insights from IoT sensors in the wild. These insights can be used to solve complex real-world challenges. The lack of security in typical vision-based IoT solutions is especially concerning, as they are typically responsible for managing sensitive data (PID, CCTV) or critical systems (cars, machinery). Security is rarely the first thought for developers of new types of solutions, but making systems secure after the fact is difficult, since a holistic approach is required. Exacerbating this challenge, the development and deployment of IoT systems often involves multiple handovers of responsibility, which can make achieving end-to-end security difficult. And, due to the complexity and diversity of these systems, security bodies have been unable to prescribe “silver bullet” solutions. Based on first-hand experience, this 2021 Embedded Vision Summit presentation from Dr. Lyndon Fawcett, Principal Software Security Architect at Arm, provides insights to help decision-makers better understand key challenges and potential solutions for providing secure vision-based IoT systems.

A Secure Hardware Architecture for Embedded Vision (NeuroBinder)
Security is a problem for every IoT system, but due to privacy concerns, security is particularly worrisome for embedded vision systems. In this talk from the 2021 Embedded Vision Summit, Jonathan Cefalu, CEO and founder of NeuroBinder, covers how to design your embedded device so that the hardware architecture itself enforces strict guarantees about where visual data is able to flow. In particular, Cefalu explores the idea of a “data diode” that uses hardware to enforce what parts of the system have access to video or images. This provides the highest level of protection against hacking or malware, as even if the device is completely hacked at the root level, the intruder will still be unable to access the visual data.

HARDWARE FOR VISION SENSING AND PROCESSING

A New Adaptive Module for Vision AI at the Edge (AMD)
Kria System-on-Modules (SOMs), as described by Chetan Khona, Director of the Industrial, Vision, Healthcare and Science Markets at Xilinx (now part of AMD) in this 2021 Embedded Vision Summit presentation, provide a secure, production-ready multi-core Arm and FPGA platform, including memories, power management, and Yocto or Ubuntu Linux to build accelerated AI-enabled applications at the edge. Kria SOMs enable smart vision applications across cities, factories, and hospitals to achieve high performance with low latency, low power consumption and a small footprint. And Kria SOMs feature a radical new way to design with Accelerated Applications via the App Store. Kria Accelerated Apps offer an industry-first: they enable both new and experienced designers to skip doing any FPGA design. Accelerated Apps give the SOM the personality of a purpose-built smart camera, AI box, or other vision AI system. Apps are fully accelerated—including image acquisition, pre-processing, AI inference, post-processing, encoding and connectivity—offering the highest performance for industrial use cases. Accelerated Apps span license plate recognition, retail shopper re-identification, HDR image signal processing, natural language processing and more.

An Introduction to Single-Photon Avalanche Diodes—A New Type of Imager for Computer Vision (University of Wisconsin-Madison)
The single-photon avalanche diode (SPAD) is an emerging image sensing technology with unique capabilities relevant to computer vision applications. Originally designed for imaging in low-light conditions, the ultra-high time resolution of SPADs also helps to achieve extremely high dynamic range, motion blur-free images and even seeing around corners. The use of SPADs in recent iPhone models has spurred increased interest in the use of SPADs in commercial products. In this talk from the 2021 Embedded Vision Summit, Sebastian Bauer, Postdoctoral Student at the University of Wisconsin – Madison, introduces SPAD-based imagers, explains how they work, presents their fundamental capabilities, and identifies their key strengths and weaknesses relative to conventional image sensors. He also shows how they can be used in a variety of applications.

UPCOMING INDUSTRY EVENTS

Accelerating TensorFlow Models on Intel Compute Devices Using Only 2 Lines of Code - Intel Webinar: August 25, 2022, 9:00 am PT

More Events

FEATURED NEWS

FRAMOS Launches FSM-IMX547 Camera Accessory for the AMD-Xilinx Kria KR260 Robotics Starter Kit

Simplify AI Model Development with NVIDIA's Latest TAO Toolkit Release

2nd-generation Multi-zone Direct Time-of-flight Sensor from STMicroelectronics Uses Less Energy and Delivers Long-range Results

CEVA Expands Sensor Fusion Product Line with New Sensor Hub MCU for High Precision Motion Tracking and Orientation Detection

Arm Introduces New Image Signal Processor to Advance Vision Systems for IoT and Embedded Markets

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Sequitur Labs EmSPARK Security Suite 3.0 (Best Edge AI Software or Algorithm)
Sequitur Labs’ EmSPARK Security Suite 3.0 is the 2022 Edge AI and Vision Product of the Year Award winner in the Edge AI Software and Algorithms category. The EmSPARK Security Suite is a software solution that makes it easy for IoT and edge device vendors to develop, manufacture, and maintain secure and trustworthy products. By implementing the EmSPARK Security Suite, enabled by industry-leading processors, device OEMs can: isolate and protect security credentials to prevent device compromise; protect critical IP, including device-resident software; prevent supply chain compromises with secure software provisioning and updates; and accelerate time-to-market while reducing implementation cost and overall security risk. The EmSPARK Security Suite is the industry’s first solution to provide a suite of tools for protecting AI models at the edge. With the release of EmSPARK 3.0, developers can safely deploy AI models on IoT devices, opening the door for a new era of edge computing.

Please see here for more information on Sequitur Labs’ EmSPARK Security Suite 3.0. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry's leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company's leadership in edge AI and computer vision as evaluated by independent industry experts.

About This E-Mail
LETTERS AND COMMENTS TO THE EDITOR: Letters and comments can be directed to the editor, Brian Dipert, at insights@edge-ai-vision.com.

PASS IT ON...Feel free to forward this newsletter to your colleagues. If this newsletter was forwarded to you and you would like to receive it regularly, click here to register.
Edge AI and Vision Alliance, 1646 N California Blvd, Suite 360, Walnut Creek, California 94596, United States, +1 925.954.1411
 
  • Like
  • Fire
Reactions: 15 users

hotty4040

Regular
Just wait until AKIDA can read minds 😲

Just wait until Akida has xray vision, lol...
 
  • Haha
  • Like
Reactions: 8 users

Pappagolla

Regular
Thanks for sharing.
At 15 minutes 30 seconds Rob Telson says they’ve helped NASA get to orbit, and that they’ve been able to capture images at extremely low power. He also touches on helping Mercedes achieve their goals… and he said other vehicle manufacturers, plural!

Rob also later mentions “over 5000 unique users” of the MetaTF software. Pretty sure it was 4500 only a few weeks back. So what does this increase of 500+ users represent? New companies? If so, how many? 5, 10, 50, 100? Or is it existing companies ramping up their development? Maybe a bit of both?
 
  • Like
  • Fire
  • Love
Reactions: 36 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Thanks for sharing.
At 15 minutes 30 seconds Rob Telson says they’ve helped NASA get to orbit, and that they’ve been able to capture images at extremely low power. He also touches on helping Mercedes achieve their goals… and he said other vehicle manufacturers, plural!

Sounds like the BrainChip team also invented a new acronym!

I like it AIoT! 😝


37.05 - Rob Telson
We started using this term a few months ago, a lot more freely than we have in the past, and that is AIoT. That's applying intelligence to IoT devices. And so you're going to start seeing that pick up in the world that we live in. Our whole objective is to make sensors really efficient, really smart, in very tiny ML environments.


AKIDA ...putting the A in AIoT🥳
 
  • Like
  • Love
  • Fire
Reactions: 40 users

Yak52

Regular
Looks like the shorters still have some fight huh..

Well I'm sure those borrowed shares, will get soaked up 😛
Hi DB.
As I mentioned in yesterday's post about the shorting, wait and see whether they (THM/HC/MF) have done more shorts on Tuesday (yesterday), as the trading pattern seemed to suggest they had.
Today's action shows more "shorting" activity, though it is very small really, considering the weak market (ASX) today.

Well, the data is in and… yes, they did take out some more "shorts" yesterday: 3.8 million new shorts for Tuesday, after Monday's 3.3 million and Friday's 11.3 million.
Quite a lot of shorts… and the SP is still up around $0.90 approx.

NOT going very well for them so far! lol

Yak52 :cool:
 
  • Like
  • Fire
  • Love
Reactions: 50 users
Ha ha looks like a TSE post

CCFF4776-80CE-4CAD-A239-1B4134B5E7A8.jpeg
 
  • Like
  • Haha
  • Fire
Reactions: 13 users

gex

Regular
Some stats:
- NVISO Cloud Model (GPU Optimized) = 23.8MB (internet required), latency = ~15ms
- NVISO Neuromorphic Model (Akida Optimized) = 150KB (no internet required), latency = 1ms
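A back-of-the-envelope check on those figures, as a sketch using only the forum-quoted numbers (not independently measured data):

```python
# Compare the two NVISO model variants using the figures quoted above.
# These are the forum-quoted numbers, not measured or verified values.
cloud_model_kb = 23.8 * 1024    # 23.8 MB cloud model (GPU optimised)
akida_model_kb = 150            # 150 KB Akida-optimised model
cloud_latency_ms = 15           # ~15 ms, internet required
akida_latency_ms = 1            # 1 ms, fully on-device

print(f"size reduction:   ~{cloud_model_kb / akida_model_kb:.0f}x")      # ~162x
print(f"latency speed-up: ~{cloud_latency_ms / akida_latency_ms:.0f}x")  # ~15x
```

So the quoted Akida-optimised model is roughly 160 times smaller, before even counting the network round trip the cloud model needs.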

1655869123975.png






orgasmic
 
  • Like
  • Fire
  • Love
Reactions: 53 users
Reuters

Power consumption soars in northern China as Premier issues warning

56m ago
1655868576535.png

SHANGHAI (Reuters) - The electricity load in the Chinese province of Henan reached a new record on Monday, primarily driven by air-conditioning demand, as scorching heat waves spread across regions north of the Yangtze river.

Loads reached a peak of 71.08 million kilowatts, surpassing the previous day's record of 65.34 million kilowatts, according to a state media report published Wednesday morning.

Premier Li Keqiang, visiting a thermal power company in Hebei Province, said that China must increase coal production capacity to "resolutely prevent power outages", according to a state media summary published late on Tuesday.
 
  • Like
  • Sad
  • Thinking
Reactions: 15 users

Rskiff

Regular
Could I suggest that today's extremely low volume of share sales is those who wish the price to stay low, or go lower, selling to themselves. After all, even Blind Freddy can see that this company has game-changing technology and is executing its plan to market with outstanding results. I'm holding my shares tightly and happy to do so.
 
  • Like
  • Love
  • Fire
Reactions: 39 users

Shadow59

Regular
Reuters

Power consumption soars in northern China as Premier issues warning

56m ago
View attachment 9841
SHANGHAI (Reuters) - The electricity load in the Chinese province of Henan reached a new record on Monday, primarily driven by air-conditioning demand, as scorching heat waves spread across regions north of the Yangtze river.

Loads reached a peak of 71.08 million kilowatts, surpassing the previous day's record of 65.34 million kilowatts, according to a state media report published Wednesday morning.

Premier Li Keqiang, visiting a thermal power company in Hebei Province, said that China must increase coal production capacity to "resolutely prevent power outages", according to a state media summary published late on Tuesday.
The old double-edged sword trick: need to use more coal to cool down, which makes it warmer, so need to use even more coal!
 
  • Like
  • Sad
  • Thinking
Reactions: 19 users

gex

Regular
I remember hearing about this on the national news.
Fell through, huh? Wonder who could help.

https://***************.com.au/resa...as-pfizer-takeover-deal-drops-53m-2022-06-21/

  • ResApp Health (RAP) has plunged into the red after inadequate test results for its COVID-19 detection app shaved $53 million off of a takeover bid from Pfizer
  • An independent confirmation study for RAP’s smartphone-based COVID-19 identification technology showed “significantly lower” results than a previous study in March
  • Pfizer had offered to pay 20.7 cents per share for control of ResApp if the latest results were on par with the March results, but the biotech giant will now offer 14.6 cents
  • This effectively lowers the value of the Pfizer deal from $180 million to $127 million given the test results came in below the threshold of the higher Pfizer offer
  • ResApp is down 24.57 per cent and trading at 13 cents per share at 1:15 pm AEST.
 
  • Like
  • Wow
  • Fire
Reactions: 13 users

Slymeat

Move on, nothing to see.
Yeah Mr J Chapman89, yeah, 1000 fps that BrainChip's spiking solution can use to make decisions! Computers and human behaviour have been a challenge forever, and can now monitor and assist the driver of an automobile, so needed! This old 1962 video has a lot of neat things in it, when you have time! Best regards.


I just watched this video, thanks for sharing it @stuart888.

The following is a bit of a rant, but I think it makes some valid points. I apologise in advance for the words that are spewing forth from my fingers. I hope I don‘t mislead too many, and hopefully help some.

For me, this video tells quite an intriguing story. I originally only watched it because I wanted to see how people from 1962 worked with computers of the day. But after watching it, I found it is actually quite revealing of just how grand Akida is. What an amazing quantum leap Akida represents. Especially when it gets LSTM and cortical columns working to even better emulate the human brain. That will allow prior experience and big-picture thinking to come into play, even potentially pre-cognition.

Firstly, I noticed how absolutely stuck on binary logic these 1962 scientists were. They saw everything as yes or no, which does work and has worked for decades. It’s just so bloody limiting.

This also emphasised just how dumb computers are! Computers blindly follow rules, and hence are completely at the mercy of the programmer. They just blindly apply the rules VERY quickly, and in a reproducible manner, and so appear to be clever.

The blocks problem was a particularly good example of this. For those who haven’t watched the video—Given a pattern of black and white blocks, and some rules about how the blocks can be placed, reproduce the pattern.

I believe Billy did follow the rules when he first solved the problem by adding two same-coloured blocks that were not adjacent. The rules DO NOT say the blocks have to be adjacent; that was an adjunct added by the tester and implied by the pictures. It must have been coded into the computer program, but human error neglected to add it to the written rules presented to Billy. I think the tester stated something like “place two blocks together”. Billy just interpreted the word together as simultaneous, or time-adjacent, rather than spatially adjacent. A clever interpretation IMHO.

Billy got a typical response of “Oh well, yes, you did solve it, but not the way I wanted you to solve it.”

So not wrong then! Just different. That's the kind of stuff the brain does well and computers cannot do. Maybe not until now, or the imminent future.

I was equally impressed by Billy’s very first attempt at exactly reproducing the pattern, by placing the blocks down one at a time. This shows how prior experience and non-binary logic work. He saw the big picture and devised a mechanism to solve the problem directly that didn’t apply the rules. He determined that the rules were not the most efficient way to solve the problem.

Again, this is the kind of stuff an intelligent human brain does well.

The computer program, as it was programmed to do, and as limited by pure binary logic, looked at a single column at a time and rather inefficiently tried to resolve any issues in that column alone. Again, yes, that works, it IS effective, but it isn’t how the human mind solves a pattern-matching problem.

Computing, since 1962, progressed to using bytes and words (the 1962 computers probably did use 4-bit bytes). Modern computers use 64-bit words (possibly even 128-bit words now) as complete logic blocks, applying masks to determine what the combinations of bits mean. Boolean logic can even be applied directly to these words, and even to matrices of them, to solve more complex problems in a single pass.

This allowed computers to do more complex things quicker, but still in a rather unintelligent, always pre-programmed, way.
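The word-and-mask style of processing described above can be sketched in a few lines. The pattern values here are made up for illustration, loosely echoing the black/white blocks problem from the video:

```python
# Word-wide Boolean logic: compare 16 "blocks" in a single XOR instead of
# testing each position one at a time (values are illustrative only).
pattern = 0b1011_0110_1011_0110   # reference pattern of black/white blocks
sample  = 0b1011_0110_1111_0110   # observed pattern, one block flipped

diff = pattern ^ sample           # non-zero means at least one mismatch
print(f"mismatch mask: {diff:016b}")                        # 0000000001000000
print(f"highest mismatching bit: {diff.bit_length() - 1}")  # bit 6
```

One XOR checks all sixteen positions at once, which is the "more complex things quicker" step, but the program is still blindly applying a fixed rule, exactly as the post goes on to say.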

Bring in neuromorphic computing and Akida. Some of the examples given by Anil Mankar show logic, and weightings, going back to single bits where appropriate. This results in ultra-low power consumption. Extra nodes and layers are brought into play as needed. And Akida has the ability to learn, and hence apply logic that is outside the initial rules. Just like Billy did so many years ago!

Learning burns in a pathway and reverts back to single-bit logic and instant recognition. The fact that Akida does this in a single shot is far superior to even the human brain. We need repetition to burn in memories, and the more repetition, the better.

Akida truly does more closely mimic the way the brain works, and even seems to exceed it in its learning capability. Bring on LSTM and cortical columns, and the amount of information that can form a memory and achieve single-bit-like efficiencies becomes immense. Very complex things can be learned and associated with other very complex things.

It seems we have come full circle, back to single bits, but we haven’t. The neuromorphic processor may indeed use single bits, but not in the same stupid way the original purely Boolean-based computers and programmers did/do.

e.g. Imagine a 1000 x 1000 pattern of red dots with an unknown number of blue dots randomly placed within it, and let’s refine this particular case to there being only a single blue dot. It is possible to apply 1962 logic to this and test each of the 1M dots in isolation. You have to test them all because you don’t know how many anomalies there are. And even once it corrects the anomaly, the program must continue to test all 1M dots, even if the first one was the one in error. The human brain (and Akida) would ignore all the sameness and zoom in on the anomaly and fix it directly, in a single step.
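Here is a toy version of that red/blue dot search, purely illustrative (ordinary Python, not Akida code): the 1962-style program always performs all one million checks, while a sparse, event-style representation only ever holds the anomalies.

```python
import random

N = 1000
grid = [["red"] * N for _ in range(N)]
bx, by = random.randrange(N), random.randrange(N)
grid[by][bx] = "blue"             # the single anomalous dot

# 1962-style exhaustive scan: always N*N checks, even after the anomaly
# is found, because the program cannot know how many anomalies exist.
checks, found = 0, None
for y in range(N):
    for x in range(N):
        checks += 1
        if grid[y][x] != "red":
            found = (x, y)
print(f"exhaustive scan: {checks:,} checks")   # 1,000,000 every time

# Event-style view: only departures from the background are represented
# at all (a real event sensor emits these directly rather than scanning).
events = [(x, y) for y in range(N) for x in range(N) if grid[y][x] != "red"]
print(f"events to inspect: {len(events)}")     # 1
assert found == events[0] == (bx, by)
```

The scan's cost is fixed at a million checks regardless of where the dot is; the event list has one entry, and if no blue dot ever appears it is simply empty, which is the "zero power when nothing happens" point made below.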

This is where Akida is fantastic in sparsity situations. And they appear to be the predominant cases, i.e. find a face in a crowd.

Better still, Akida uses zero power if an anomalous blue dot does not appear.

I have seen an experiment that does just this, timing eye movements to work out when the anomaly is found. The eye focuses on the dot within milliseconds.

The 1962-style program might match the speed of the human eye/brain in this situation, but it would consume millions of times more power than Akida in all the unnecessary testing of each and every pixel, and would be about one million times slower than Akida, assuming the same clock speed.

Now where have we heard that analogy used before?
 
  • Like
  • Fire
  • Love
Reactions: 33 users
Top Bottom