BRN Discussion Ongoing

Reactions: 14 users

TechGirl

Founding Member
Great Sony Video


Imaging & Sensing​

Unlocking the potential of imaging and sensing technology with AI to unveil new opportunities

Our Approach​

We are working on combining industry-leading sensor technologies developed at Sony Semiconductor Solutions with new machine learning methods and robotic actuators. By developing novel learning and control algorithms, while also evolving the computing hardware to handle the sensor data in optimal ways, we will discover and explore the full potential of Sony’s sensor technologies and sensor-based solutions. We believe that the resulting systems will open up applications for imaging and sensing technology in areas that no one has imagined before.


 
Reactions: 41 users

Slymeat

Move on, nothing to see.
Add it to the list as another head-up-their-a..se, useless, ill-informed opinion.

I mean, even blind Freddie can see what's going on. Akida's Ubiquitous, Game-changing, Defacto-standard, Essential Edge AI Ecosystem is in plain sight.

Edge Compute
I know the following isn’t a strong enough word but it fits in quite neatly:

A. Akida
S. Standard
S. in plain Sight
U. Ubiquitous
A. AI
G. Game changing
E. Essential/Edge
D. Defacto

Akida ASSUAGED the need for ultra-low power at the edge.
 
Reactions: 16 users


The part I was interested in was the SNC, which I’ve now worked out is a Sensor Node Controller. It’s been around for a while, but maybe they’re using our 2 nodes for that purpose due to their low power?



Edit: sorry, trying to do this on my phone doesn’t work too well.




As I’ve said before I have no technical knowledge; just posing the questions so anyone who does know can answer.

Cheers
 

Reactions: 10 users

stuart888

Regular

Surfing the web last night, I noticed that the co-founder of AONDevices is a former employee of BrainChip. Now, I imagine it's a small world in the Edge AI space, but I just wonder whether Mouna still keeps in touch with BRN.

Apologies if it's already been covered; it caught my keyword-spotting eye 🤪.

Edge Compute
My mind says this: BrainChip's strength is the no-brainer future of edge sensors helping humans live better lives. Bring on the sensors and patterns; Akida can use logic to make decisions smarter than humans. Side note: a 3D-printing company I follow had 8 sensors in its last model, and now 75. The point is there's an edge-sensor explosion ahead.

BrainChip smarts can deal with patterns in all forms, then store the data temporarily in super-fast binary form (small bits and bytes, fast, low power). MegaChips/Renesas/Nviso/BrainChip/Sony and others are helping technology monitor in-room care, plus automobile safety braking. Video smarts to help caretakers is an awesome medical USE CASE. It is so enjoyable to invest in what you enjoy following.

The Ecosystem news flow is nuts, building, and kind of easy to see. Logic says low downside risk, huge upside. Just my take.
 
Reactions: 33 users

Boab

I wish I could paint like Vincent
Great Sony Video


Imaging & Sensing​

Unlocking the potential of imaging and sensing technology with AI to unveil new opportunities

Our Approach​

We are working on combining industry-leading sensor technologies developed at Sony Semiconductor Solutions with new machine learning methods and robotic actuators. By developing novel learning and control algorithms, while also evolving the computing hardware to handle the sensor data in optimal ways, we will discover and explore the full potential of Sony’s sensor technologies and sensor-based solutions. We believe that the resulting systems will open up applications for imaging and sensing technology in areas that no one has imagined before.



Thanks for that TG. I now have a better understanding of what metadata is. For a non-techy, this is simple language.
Cheers
 
Reactions: 9 users
The part I was interested in was the SNC, which I’ve now worked out is a Sensor Node Controller. It’s been around for a while, but maybe they’re using our 2 nodes for that purpose due to their low power?



Edit: sorry, trying to do this on my phone doesn’t work too well.

View attachment 9832


As I’ve said before I have no technical knowledge; just posing the questions so anyone who does know can answer.

Cheers

Instead of laughing, did anyone read the PDF on the SNC posted by Renesas? It describes a convoluted power-management process, and I can’t see where an NPU or any Akida nodes could be included.

 
Reactions: 4 users

uiux

Regular
I have a huge amount of respect for your opinion, but what does this post mean? Yes or no on Akida?

This is from the datasheet of the product:

Analog interfaces
□ Ultra-Low Power Voice Activity Detection (VAD) enabling seamless audio processing with system-on current < 26 µA

[datasheet screenshots attached]
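For anyone wondering what that datasheet line actually does: an always-on VAD block watches the microphone and only wakes the rest of the system when it hears activity. As a rough software analogy (a minimal sketch, not the chip's actual analog implementation, with frame length and threshold chosen arbitrarily for illustration):

```python
import numpy as np

def frame_energy_vad(samples, frame_len=256, threshold_db=-40.0):
    """Flag frames whose short-term energy exceeds a dB threshold.

    A minimal energy-threshold sketch of voice activity detection;
    hardware VAD blocks do this kind of gating in the analog domain
    so the rest of the SoC can stay asleep until speech arrives.
    """
    flags = []
    for i in range(len(samples) // frame_len):
        frame = samples[i * frame_len:(i + 1) * frame_len]
        rms = np.sqrt(np.mean(frame ** 2)) + 1e-12  # avoid log(0)
        flags.append(20 * np.log10(rms) > threshold_db)
    return flags

# Quiet input stays below the threshold; a tone trips it.
silence = np.zeros(1024)
tone = 0.5 * np.sin(np.linspace(0.0, 100.0, 1024))
```

The point of doing this at under 26 µA in silicon is that the audio path can gate everything downstream, which is exactly the kind of always-on, ultra-low-power front end this thread keeps coming back to.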
 
Reactions: 14 users

MrNick

Regular
 
Reactions: 48 users
I have a huge amount of respect for your opinion, but what does this post mean? Yes or no on Akida?
Ok. I got more laughs than I care for.

"(There's) no such thing as a stupid question" is a common phrase, that states that the quest for knowledgeincludes failure, and that just because one person may know less than others, they should not be afraid to ask rather than pretend they already know. In many cases multiple people may not know, but are too afraid to ask the "stupid question"; the one who asks the question may in fact be doing a service to those around them.
 
Reactions: 27 users

MrNick

Regular
A ubookquitous read for the 1000 eyes.
[screenshot attached]
 
Reactions: 15 users
Ok. I got more laughs than I care for.

"(There's) no such thing as a stupid question" is a common phrase, that states that the quest for knowledgeincludes failure, and that just because one person may know less than others, they should not be afraid to ask rather than pretend they already know. In many cases multiple people may not know, but are too afraid to ask the "stupid question"; the one who asks the question may in fact be doing a service to those around them.

Didn’t mean to offend you. I was laughing with you, as I quite often feel the same way when reading a lot of the technical information, and sometimes the answers given to me are riddles! :)

I looked at the Renesas page about the SNC, but it’s still Chinese to me :). (For some reason it won’t let me copy it across at the moment, so I’ve got to take photographs.) I haven’t seen anything about SNNs or Akida, so disappointingly I think it’s a no. However, when it comes to computers, “I know nothing.”


[photo of the Renesas SNC page attached]


Cheers!
 
Reactions: 22 users

mrgds

Regular
Reactions: 7 users

chapman89

Founding Member

Thanks for sharing.
At 15 minutes 30 seconds, Rob Telson says they’ve helped NASA get to orbit, that they’ve been able to capture images at extremely low power, and touches on helping Mercedes achieve their goals… and he said other vehicle manufacturers, plural!
 
Reactions: 75 users

Boab

I wish I could paint like Vincent
Just received this newsletter via email: https://www.edge-ai-vision.com/latest-news/newsletter/?utm_campaign=EVI%20Newsletter&utm_medium=email&_hsmi=217201181&_hsenc=p2ANqtz-9mt67SVbR4NNylhbRO7T1MIdAgPtSDHuR_ukxvAehjBEhwTauuqXh6-eYrwQ-UzyewdWnCw5HjVDXLLSZZ6DnuDxUseA&utm_content=217082579&utm_source=hs_email
This is the website page if it makes it easier?

To view this newsletter online, please click here
DEVELOPING SECURE VISION-BASED DESIGNS

IoT and Vision: Why It’s a Security Minefield and How to Navigate It (Arm)
Recent advancements in machine learning have enabled market innovators to build insights from IoT sensors in the wild. These insights can be used to solve complex real-world challenges. The lack of security in typical vision-based IoT solutions is especially concerning, as they are typically responsible for managing sensitive data (PID, CCTV) or critical systems (cars, machinery). Security is rarely the first thought for developers of new types of solutions, but making systems secure after the fact is difficult, since a holistic approach is required. Exacerbating this challenge, the development and deployment of IoT systems often involves multiple handovers of responsibility, which can make achieving end-to-end security difficult. And, due to the complexity and diversity of these systems, security bodies have been unable to prescribe “silver bullet” solutions. Based on first-hand experience, this 2021 Embedded Vision Summit presentation from Dr. Lyndon Fawcett, Principal Software Security Architect at Arm, provides insights to help decision-makers better understand key challenges and potential solutions for providing secure vision-based IoT systems.

A Secure Hardware Architecture for Embedded Vision (NeuroBinder)
Security is a problem for every IoT system, but due to privacy concerns, security is particularly worrisome for embedded vision systems. In this talk from the 2021 Embedded Vision Summit, Jonathan Cefalu, CEO and founder of NeuroBinder, covers how to design your embedded device so that the hardware architecture itself enforces strict guarantees about where visual data is able to flow. In particular, Cefalu explores the idea of a “data diode” that uses hardware to enforce what parts of the system have access to video or images. This provides the highest level of protection against hacking or malware, as even if the device is completely hacked at the root level, the intruder will still be unable to access the visual data.

HARDWARE FOR VISION SENSING AND PROCESSING

A New Adaptive Module for Vision AI at the Edge (AMD)
Kria System-on-Modules (SOMs), as described by Chetan Khona, Director of the Industrial, Vision, Healthcare and Science Markets at Xilinx (now part of AMD) in this 2021 Embedded Vision Summit presentation, provide a secure, production-ready multi-core Arm and FPGA platform, including memories, power management, and Yocto or Ubuntu Linux to build accelerated AI-enabled applications at the edge. Kria SOMs enable smart vision applications across cities, factories, and hospitals to achieve high performance with low latency, low power consumption and a small footprint. And Kria SOMs feature a radical new way to design with Accelerated Applications via the App Store. Kria Accelerated Apps offer an industry-first: they enable both new and experienced designers to skip doing any FPGA design. Accelerated Apps give the SOM the personality of a purpose-built smart camera, AI box, or other vision AI system. Apps are fully accelerated—including image acquisition, pre-processing, AI inference, post-processing, encoding and connectivity—offering the highest performance for industrial use cases. Accelerated Apps span license plate recognition, retail shopper re-identification, HDR image signal processing, natural language processing and more.

An Introduction to Single-Photon Avalanche Diodes—A New Type of Imager for Computer Vision (University of Wisconsin-Madison)
The single-photon avalanche diode (SPAD) is an emerging image sensing technology with unique capabilities relevant to computer vision applications. Originally designed for imaging in low-light conditions, the ultra-high time resolution of SPADs also helps to achieve extremely high dynamic range, motion blur-free images and even seeing around corners. The use of SPADs in recent iPhone models has spurred increased interest in the use of SPADs in commercial products. In this talk from the 2021 Embedded Vision Summit, Sebastian Bauer, Postdoctoral Student at the University of Wisconsin – Madison, introduces SPAD-based imagers, explains how they work, presents their fundamental capabilities, and identifies their key strengths and weaknesses relative to conventional image sensors. He also shows how they can be used in a variety of applications.
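For intuition on why photon counting buys dynamic range: each 1-bit SPAD frame records only whether at least one (Poisson-distributed) photon arrived during the exposure, so the detection rate saturates smoothly instead of clipping, and the underlying flux can be recovered by inverting that rate. A small illustrative sketch of that inversion (my own example, not from the talk):

```python
import numpy as np

def estimate_flux(binary_frames):
    """Recover per-pixel photon flux from a stack of 1-bit SPAD frames.

    With Poisson photon arrivals, P(at least one detection per frame)
    = 1 - exp(-flux), so inverting the observed detection rate gives
    the flux back even when the rate is close to saturation - the
    source of a SPAD imager's very high dynamic range.
    """
    rate = np.mean(np.asarray(binary_frames, dtype=float), axis=0)
    rate = np.clip(rate, 1e-9, 1.0 - 1e-9)  # keep the log finite
    return -np.log1p(-rate)  # = -log(1 - rate)

# Simulate four pixels with a true flux of 2.0 photons per exposure.
rng = np.random.default_rng(0)
frames = rng.poisson(2.0, size=(20000, 4)) > 0
flux_hat = estimate_flux(frames)
```

Averaging many short 1-bit exposures like this is the same trick that lets SPAD-based imagers deliver the motion-blur-free, high-dynamic-range results the talk describes.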

UPCOMING INDUSTRY EVENTS

Accelerating TensorFlow Models on Intel Compute Devices Using Only 2 Lines of Code - Intel Webinar: August 25, 2022, 9:00 am PT

More Events

FEATURED NEWS

FRAMOS Launches FSM-IMX547 Camera Accessory for the AMD-Xilinx Kria KR260 Robotics Starter Kit

Simplify AI Model Development with NVIDIA's Latest TAO Toolkit Release

2nd-generation Multi-zone Direct Time-of-flight Sensor from STMicroelectronics Uses Less Energy and Delivers Long-range Results

CEVA Expands Sensor Fusion Product Line with New Sensor Hub MCU for High Precision Motion Tracking and Orientation Detection

Arm Introduces New Image Signal Processor to Advance Vision Systems for IoT and Embedded Markets

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Sequitur Labs EmSPARK Security Suite 3.0 (Best Edge AI Software or Algorithm) (Sequitur Labs)
Sequitur Labs’ EmSPARK Security Suite 3.0 is the 2022 Edge AI and Vision Product of the Year Award winner in the Edge AI Software and Algorithms category. The EmSPARK Security Suite is a software solution that makes it easy for IoT and edge device vendors to develop, manufacture, and maintain secure and trustworthy products. By implementing the EmSPARK Security suite, enabled by industry-leading processors, device OEMs can: isolate and protect security credentials to prevent device compromise, protect critical IP, including device-resident software, prevent supply chain compromises with secure software provisioning and updates and accelerate time-to-market while reducing implementation cost and overall security risk. The EmSPARK Security Suite is the industry’s first solution to provide a suite of tools for protecting AI models at the Edge. With the release of EmSPARK 3.0, developers can safely deploy AI models on IoT devices, opening the door for a new era of edge computing.

Please see here for more information on Sequitur Labs’ EmSPARK Security Suite 3.0. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry's leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company's leadership in edge AI and computer vision as evaluated by independent industry experts.

About This E-Mail
LETTERS AND COMMENTS TO THE EDITOR: Letters and comments can be directed to the editor, Brian Dipert, at insights@edge-ai-vision.com.

PASS IT ON...Feel free to forward this newsletter to your colleagues. If this newsletter was forwarded to you and you would like to receive it regularly, click here to register.
Edge AI and Vision Alliance, 1646 N California Blvd, Suite 360, Walnut Creek, California 94596, United States, +1 925.954.1411
Unsubscribe Manage preferences
 
Reactions: 15 users

hotty4040

Regular
Reactions: 8 users

Pappagallo

Regular
Thanks for sharing.
At 15 minutes 30 seconds, Rob Telson says they’ve helped NASA get to orbit, that they’ve been able to capture images at extremely low power, and touches on helping Mercedes achieve their goals… and he said other vehicle manufacturers, plural!

Rob also later mentions “over 5000 unique users” of the MetaTF software. Pretty sure it was 4500 only a few weeks back. So what does this increase of 500+ users represent? New companies? If so, how many? 5, 10, 50, 100? Or is it existing companies ramping up their development? Maybe a bit of both?
 
Reactions: 36 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Thanks for sharing.
At 15 minutes 30 seconds, Rob Telson says they’ve helped NASA get to orbit, that they’ve been able to capture images at extremely low power, and touches on helping Mercedes achieve their goals… and he said other vehicle manufacturers, plural!

Sounds like the BrainChip team also invented a new acronym!

I like it AIoT! 😝


37.05 - Rob Telson
We started using this term a few months ago, a lot more freely than we have in the past, and that is AIoT. That's applying the intelligence to the IoT devices. And so you're going to start seeing that pick up in the world that we live in. Our whole objective is to make sensors really efficient, really smart, in very tiny ML environments.


AKIDA ...putting the A in AIoT🥳
 
Reactions: 40 users

Yak52

Regular
Looks like the shorters still have some fight huh..

Well I'm sure those borrowed shares will get soaked up 😛
Hi DB.
As I mentioned in yesterday's post about the shorting, wait to see if they (THM/HC/MF) have done more shorts on Tuesday (yesterday), as the trading pattern seemed to suggest they had.
Today's action shows more "shorting" activity, though it is very small really, considering the weak market (ASX) today.

Well, the data is in and............ yes, they did take out some more "shorts" yesterday - 3.8 mil new shorts for Tuesday,
after Monday's 3.3 mil and Friday's 11.3 mil.
Quite a lot of shorts................ and the SP is still up around 90c approx.

NOT going very well for them so far! lol

Yak52 :cool:
 
Reactions: 50 users
Ha ha looks like a TSE post

 
Reactions: 13 users