BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!
Computer vision

The science behind visual ID​


A new opt-in feature for Echo Show and Astro provides more-personalized content and experiences for customers who choose to enroll.​


By The Amazon visual ID teams
September 28, 2021




With every feature and device we build, we challenge ourselves to think about how we can create an immersive, personalized, and proactive experience for our customers. Often, our devices are used by multiple people in our homes, and yet there are times when you want a more personalized experience. That was the inspiration for visual ID.



On the all-new Echo Show 15, Echo Show 8, and Echo Show 10, you and other members of your household will soon be able to enroll in visual ID, so that at a glance you can see personalized content such as calendars and reminders, recently played music, and notes for you.
And with Astro, a new kind of household robot, enrolling in visual ID enables Astro to do things like find you to deliver something, such as a reminder or an item in Astro’s cargo bin.

Creating your visual ID​


Visual ID is opt-in, so you must first enroll in the feature, much as you can enroll in voice ID (formerly Alexa voice profile) today. During enrollment, you will use the camera on your supported Echo Show device or Astro to take a series of headshots at different angles. For visual ID to accurately recognize you, we require five different angles of your face.
During the enrollment process, the device runs algorithms to ensure that each of the images is of high enough quality. For example, if the room is too dark, you will see on-screen instructions to adjust the lighting and try again. You will also see on-screen notifications as an image of each pose is successfully captured.
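The quality gate described above can be pictured with a toy brightness check. Everything here (the thresholds, the mean-brightness heuristic) is invented for illustration and is not Amazon's actual algorithm:

```python
import numpy as np

# Hypothetical quality gate, loosely mirroring the checks described above:
# reject an enrollment frame if it is too dark or too washed out.
MIN_MEAN_BRIGHTNESS = 60    # assumed thresholds, for illustration only
MAX_MEAN_BRIGHTNESS = 200

def frame_is_usable(gray_frame: np.ndarray) -> bool:
    """Return True if a grayscale frame (pixel values 0-255) passes the brightness gate."""
    mean = float(gray_frame.mean())
    return MIN_MEAN_BRIGHTNESS <= mean <= MAX_MEAN_BRIGHTNESS

dark_room = np.full((480, 640), 20, dtype=np.uint8)   # underexposed frame
well_lit = np.full((480, 640), 128, dtype=np.uint8)   # acceptable frame

print(frame_is_usable(dark_room))  # False -> device would ask to adjust lighting
print(frame_is_usable(well_lit))   # True  -> pose is captured
```

A real device would also check focus, face size, and pose angle before accepting a capture.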
The images are used to create numeric representations of your facial characteristics. Called vectors (one for each angle of your face), these numeric representations are just that: a string of numbers. The images are also used to revise the vectors in the event of periodic updates to the visual ID model — meaning customers are not required to re-enroll in visual ID every time there is a model update. These images and vectors are securely stored on-device, not in Amazon’s cloud.
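The vectors described above are what the vision literature calls face embeddings: a fixed-length numeric summary per image. A minimal sketch, with a stand-in random projection in place of the real (unpublished) on-device network, and an assumed embedding size:

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 128  # assumed embedding size; the real model's dimensionality is not public

# Stand-in for the on-device network: here just a fixed random projection
# that maps a flattened 64x64 face crop to a unit-length vector.
_projection = rng.standard_normal((EMBED_DIM, 64 * 64))

def embed(face_crop: np.ndarray) -> np.ndarray:
    """Map a 64x64 grayscale face crop to a unit-norm embedding vector."""
    v = _projection @ face_crop.astype(np.float64).ravel()
    return v / np.linalg.norm(v)

# One vector per enrollment angle, as the article describes.
enrollment = [embed(rng.random((64, 64))) for _ in range(5)]
print(len(enrollment), enrollment[0].shape)  # 5 (128,)
```

The point of keeping the raw images on-device is visible here: if `_projection` (the model) is ever updated, the vectors can be recomputed from the stored images without asking the user to re-enroll.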
Up to 10 members of a household per account can enroll on each compatible Echo Show or Astro to enjoy more-personalized experiences for themselves. Customers with more than one visual-ID-compatible device will need to enroll on each device individually.


A screenshot of the enrollment process, during which the device’s camera takes a series of headshots at different angles.

Identifying an enrolled individual​


Once you’ve enrolled in visual ID, your device attempts to match people who walk into the camera’s field of view with the visual IDs of enrolled household members. There are two steps to this process, facial detection and facial recognition, and both are done through local processing using machine learning models called convolutional neural networks.
To recognize a person, the device first uses a convolutional neural network to detect when a face appears in the camera’s field of view. If a person whom the device does not recognize as enrolled in visual ID walks into the camera’s field of view, the device will determine that there are no matches to the stored vectors. The device does not retain images or vectors from unenrolled individuals after processing. All of this happens in fractions of a second and is done securely on-device.
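The matching step above (compare a detected face against the stored vectors, and treat no sufficiently close match as an unenrolled person) can be sketched with cosine similarity and a threshold. The names, vectors, and the 0.8 threshold are invented for illustration:

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # assumed; real systems tune this on labelled data

def unit(v):
    """Normalize a vector to unit length so dot product equals cosine similarity."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def best_match(probe, profiles, threshold=MATCH_THRESHOLD):
    """Return the closest enrolled name, or None if no stored vector clears
    the threshold (the unenrolled-person case described above)."""
    best_name, best_score = None, threshold
    for name, vectors in profiles.items():
        for v in vectors:
            score = float(np.dot(probe, v))  # cosine similarity on unit vectors
            if score > best_score:
                best_name, best_score = name, score
    return best_name

profiles = {"Bob": [unit([1.0, 0.1, 0.0])], "Alice": [unit([0.0, 1.0, 0.2])]}
print(best_match(unit([1.0, 0.0, 0.0]), profiles))  # Bob
print(best_match(unit([0.0, 0.0, 1.0]), profiles))  # None -> treated as unenrolled
```

Because the second probe matches nothing, a real device would simply discard it, consistent with the no-retention behavior described above.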

When your supported Echo Show device recognizes you, your avatar and a personalized greeting will appear in the upper right of the screen.


An example of what Echo Show 15 might show on its screen once an enrolled individual is recognized.
What shows on Astro’s screen will depend on what Astro is doing. For example, if you’ve enrolled in visual ID, and Astro is trying to find you, Astro will display text on its screen — “Looking for [Bob]”, followed by “Found [Bob]” — to acknowledge that it’s recognized you.


Astro will display text on its screen — “Looking for [Bob]”, followed by “Found [Bob]” — to acknowledge that it’s recognized you.

Enhancing fairness​


We set a high bar for equity when it came to designing visual ID. To clear that bar, our scientists and engineers built and refined our visual ID models using millions of images — collected in studies with participants’ consent — explicitly representing a diversity of gender, ethnicity, skin tone, age, ability, and other factors. We then set performance targets to ensure the visual ID feature performed well across groups.
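Per-group performance targets of the kind mentioned above amount to computing a metric per demographic group and flagging any group that falls below a floor. The data and the 95% floor here are made up purely to show the shape of such a check:

```python
# Hypothetical evaluation results: group -> (correct recognitions, total attempts).
results = {
    "group_a": (970, 1000),
    "group_b": (955, 1000),
    "group_c": (990, 1000),
}
TARGET_TPR = 0.95  # assumed floor on true-positive rate, for illustration only

# Flag any group whose recognition rate misses the target.
failing = {g: c / n for g, (c, n) in results.items() if c / n < TARGET_TPR}
print(failing)  # {} -> every group meets the target in this made-up data
```

In practice such audits also compare false-accept rates across groups, since a recognizer can be uniformly accurate yet fail different groups in different ways.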
In addition to consulting with several Amazon Scholars who specialize in computer vision, we also consulted with an external expert in algorithmic bias, Ayanna Howard, dean of the Ohio State University College of Engineering, to review the steps we took to enhance the fairness of the feature. We’ve implemented feedback from our Scholars and Dr. Howard, and we will solicit and listen to customer feedback and make improvements to ensure the feature continues to improve on behalf of our customers.

Privacy by design​


As with all of our products and services, privacy was foundational to how we built and designed visual ID. As mentioned above, the visual IDs of enrolled household members are securely stored on-device, and both Astro and Echo Show devices use local processing to recognize enrolled customers. You can delete your visual ID from individual devices on which you’ve enrolled through on-device settings and, for Echo Show, through the Alexa app. This will delete the stored enrollment images and associated vectors from your device. We will also automatically delete your visual ID from individual devices if your face is not recognized by that device for 18 months.
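The 18-month auto-deletion rule can be sketched as a simple retention check keyed on the last successful recognition. The 30-day month approximation is an assumption for illustration:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=18 * 30)  # approximate "18 months" window

def should_auto_delete(last_recognized: datetime, now: datetime) -> bool:
    """True if a stored visual ID has gone unrecognized past the retention window."""
    return now - last_recognized > RETENTION

now = datetime(2022, 9, 30)
print(should_auto_delete(datetime(2020, 1, 1), now))  # True  -> delete profile
print(should_auto_delete(datetime(2022, 6, 1), now))  # False -> keep profile
```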
It’s still day one for visual ID, Echo Show, and Astro. We look forward to hearing how our customers use visual ID to personalize their experiences with our devices.


 
  • Like
  • Fire
  • Love
Reactions: 31 users

Blazar85

Regular
Not sure if this has been posted previously, but have a read of this.....further evidence of the shift towards ARM over x86.


I quite like this part:

The ARM-x86 War

As I’ve written earlier (I, II), there’s the real chance that the x86 architecture, sold by Intel and AMD, is entering into a life and death fight with CPUs based on an ARM architecture / instruction set.
 
  • Like
  • Love
Reactions: 10 users


Personally, I'm not yet convinced by Amazon's visual ID approach; it seems more complex than AKIDA and requires too many images.

In Anil Mankar's one-shot learning demonstration, he would make a difficult subject, yet with only one shot he is correctly identified.

This Alexa system requires five perfect images from a range of angles.

Why have they dumbed it down?

Why in addition to these five perfect images do they need the data set created from millions of individuals to support its learning?

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Thinking
  • Fire
Reactions: 24 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
 
  • Like
  • Love
Reactions: 17 users

Mn2019

Regular
Listen carefully, I will say this only once ...

ISL 20220109
Akida 1000 PCI 20220117
SiFive 20220405
nViso 20220419
ARM 20220501
Edge Impulse 20220515
Prophesee 20220619
BrainChip University 20220816

... but apart from that, what has Brainchip ever done for shareholders?
The Aqueduct?
 
  • Like
  • Haha
Reactions: 5 users

TasTroy77

Founding Member
A total stab in the dark here, and just a gut feel but I feel that revenue may be down quarter on quarter. And I only say this because last quarter was surprisingly decent but Sean said to expect lumpy revenue.

On the other hand, I hold out hope that revenue will have increased dramatically based on Sean's statement regarding revenue growth exceeding operational cost growth (don't hold me to the exact quote).

So in conclusion, I have no idea what the 4C will bring. :ROFLMAO:
To be honest, none of us has any idea what the short-term revenue will be.
I am happy to wait out the short-term revenue; global macro conditions are irrelevant if one is patient with the company's commercialisation.
 
  • Like
  • Love
Reactions: 12 users

JK200SX

Regular
We either have a monstrosity of a competitor that has just appeared, or this is the first dominant sign of our incorporation into all things Amazon. Data centres and edge devices would be the dream.

Would be strange to have such a company on our podcasts if they were about to make an exact replica of our technology.

This is a rather large walking and talking duck. Blind Freddy... what's your take????

Well, Louis did say that a number of our NDA's are household names. :)
(and most of the dot joining to date has been with "non-household names....)
 
  • Like
  • Haha
Reactions: 15 users

Learning

Learning to the Top 🕵‍♂️
  • Like
  • Fire
  • Love
Reactions: 16 users

alwaysgreen

Top 20
Well, Louis did say that a number of our NDA's are household names. :)
(and most of the dot joining to date has been with "non-household names....)
I have a theory that a number of the household names under NDA, or part of the EAP, have been encouraged by Brainchip to purchase through Renesas and MegaChips in order to keep their secrets.

If they sign directly with Brainchip, we have to disclose it to the ASX and their competitive advantage is lost.
 
  • Like
  • Fire
  • Love
Reactions: 30 users

clip

Regular
Maybe to make it work more reliably. And in case you have a twin.
 
  • Love
  • Like
Reactions: 2 users
Maybe to make it work more reliably. And in case you have a twin.
I would have saved them a lot of time and money: all they needed to do was ask, "FF, do you have a twin?" Answer: No. 😂🤣😂🤡
 
  • Haha
  • Like
Reactions: 3 users

Diogenese

Top 20

Amazon Introduces the New Blink Wired Floodlight Camera and Blink Mini Pan Tilt—Offering Customers Even More Flexibility in Security Coverage and Peace of Mind​



Blink Wired Floodlight Camera uses Amazon’s AZ2 Neural Edge Processor to capture and process videos locally—and starts at just $99.99

New Blink Mini Pan Tilt mount brings additional functionality to the popular Blink Mini, giving customers the ability to pan and tilt their cameras remotely


SEATTLE, September 28, 2022--(BUSINESS WIRE)--Amazon (NASDAQ: AMZN) today introduced two new additions to the Blink family of devices—the new Blink Wired Floodlight Camera and the new Blink Mini Pan Tilt. At just $99.99, the Blink Wired Floodlight Camera includes a smart security camera and powerful LED lighting all in one, streamlined design, and Amazon’s AZ2 silicon to process videos without going to the cloud. The Blink Mini Pan Tilt is a new mount that works with Blink Mini to enable you to see a wider field of view and remotely pan and tilt to follow motion.


"The Blink Wired Floodlight Camera is our first wired floodlight device, and it adds to the existing lineup of easy-to-use, reliable, and affordable security devices that help customers keep an eye on their homes," said Mike Harris, chief operating officer at Blink. "With an all-in-one security and lighting design, and a price below $100, it offers a mix of performance and value that’s hard to beat. Plus, it leverages the intelligence of Amazon silicon, enabling us to offer features such as computer vision and local video processing for the first time."

Blink Wired Floodlight Camera—Advanced Features at an Affordable Price

The Blink Wired Floodlight Camera is designed to offer high performance for those looking for a hardwired security solution in an affordable package. Support for preferred motion detection zones means you can focus on the areas that are most important to you, and new person detection provides the ability to limit motion alert notifications to only when a person is present. Blink Wired Floodlight Camera’s enhanced motion detection features are built on the capabilities provided by Amazon’s AZ2 Neural Edge Processor, which also enables video content to be processed locally on the edge.

The Blink Wired Floodlight Camera provides 2600 lumens of LED lighting, 1080p HD live view, and crisp two-way audio. Setup is easy using an existing wired connection, and you can easily store video clips locally with a Sync Module 2 via a USB flash drive (sold separately). With a Blink Subscription Plan, you can also store video clips and photos in the cloud.

Blink Mini Pan Tilt—Adding New Functionality and Flexibility for Blink Mini

The new Blink Mini Pan Tilt adds a motorized mount to the Blink Mini to help keep an eye on even more of your home. With Mini Pan Tilt, you instantly gain the ability to remotely pan left and right, and tilt up and down, using the Blink app—getting corner-to-corner, 360-degree coverage of any room. If you already have a Blink Mini, you can easily add just the mount via a micro-USB, and you can place it on a table or countertop, or connect via a tri-pod or wall-mount (sold separately) for additional functionality.
AMAZON filed a truckload of patents for image/voice recognition in 2020. They talked about DNN/CNN.

A couple of examples:

This one has "audio modules 320 , video modules 322 , ML models 324 (e.g., using deep neural networks, convolutional networks, etc.)", but it appears that they hadn't heard of SNNs in 2022.

US11218666B1 Enhanced audio and video capture and presentation DNN/CNN

Priorities US202017119074A·2020-12-11




[0017] In one or more embodiments, some devices may be equipped with a microphone and/or image capture device (e.g., a camera) to capture audio and/or video of a person. The devices may include a machine learning (ML) model, such as a deep neural network, that may perform image analysis and audio analysis to identify people, faces, gestures, facial expressions, and sounds. Using a combination of audio and video, the ML model may identify specific phonemes represented by the audio and/or video captured by the devices.

[0051] Still referring to FIG. 3, any of the device 302 , the device 304 , the camera 306 , the headset 308 , and/or the facemask 310 may include or be in communication with audio modules 320 , video modules 322 , ML models 324 (e.g., using deep neural networks, convolutional networks, etc.), audio storage 326 , and/or image storage 328.


This one looks like it is for DOT:

US2022262156A1 ELECTRONIC DEVICE FOR AUTOMATED USER IDENTIFICATION



... but not very edgy:




[0104] FIG. 7 illustrates an example environment 700 of a materials handling facility 702 that includes the electronic device 104 to capture biometric data of users. In this example, the electronic device 104 generates image data depicting a palm of a user 704 and sends the image data to one or more backend server(s) 706 to be used to enroll the user 704 for use of the user-recognition system.

However, if my memory does not play me false, I recall Amazon saying within a year or so ago that they would be open to using non-home-brand tech for their push to the edge.
 
  • Like
  • Fire
  • Love
Reactions: 29 users

Harwig

Regular
I would have saved them a lot of time and money: all they needed to do was ask, "FF, do you have a twin?" Answer: No. 😂🤣😂🤡
Maybe a doppelganger?
 
  • Haha
  • Like
Reactions: 4 users

alwaysgreen

Top 20
Maybe a doppelganger?

Despite knowing its real meaning, every time I hear the word doppelganger my brain goes to the gutter, as it sounds like it should be a German word for a swingers party/orgy!
 
  • Haha
  • Like
Reactions: 18 users

TechGirl

Founding Member
  • Like
  • Fire
  • Love
Reactions: 54 users

krugerrands

Regular

It was the mention of a custom ARM chip that piqued my interest.

But as you highlighted, they talk about CNNs, and in practical terms the user experience does not match the demonstration of Akida.

The other angle would be revenue.

This product has been shipping since ~December 2021.
If this was Akida IP, we should have seen a license and six months' worth of product revenue.
 
  • Like
  • Love
Reactions: 18 users
Who needs down rampers when we have Fact Finder and @Diogenese

If this helps all those who are lying desolate on the kitchen floor pleading for someone to kill them, I can say this: in one of my very robust debates with Brainchip over my differing view as to how they might disclose more information without falling foul of the ASX, I received the response that, in relation to one of the NDAs, any disclosure would be 'devastating for the company and its shareholders', and as such they would continue with an absolute no-risk approach.

Where does this take me? Well, to two publicly stated facts:

1. The former CEO, Mr. Dinardo, stated that the NDAs were with Fortune 500 and household-name companies. As a retired lawyer, the significance of this statement is clear to me: if it were a lie, it would be so easily proven that only a complete fool would make the statement, and Mr. Dinardo is anything but a fool.

2. Ken Scarince, the CFO, made the point in his German investor presentation that one of the companies was adamant about its desire for secrecy and would not tolerate any breach. The same legal issue arises, as there would be a documentary trail covering such interactions and Board minutes; again, Ken Scarince is nobody's fool and, in my assessment over a number of years now, completely trustworthy.

So putting this all together, I have significant confidence that one day a company like Amazon, Apple, Google, Facebook (Meta), IBM, Intel or Nvidia will reveal itself, but that by the point of the reveal, the current CEO Sean Hehir's request for shareholders to look to the income to judge the company's progress will have long proven that Brainchip has become an economic powerhouse in the semiconductor industry.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Love
  • Fire
Reactions: 79 users

alwaysgreen

Top 20
Bloody downrampers. Are you shorting the stock? Go back to the Crapper.

Where are the announcementz!

On a side note, the ASX and BRN in particular are holding up well today on a day I anticipated to be a bit of a bloodbath. Might even nudge higher by close?
 
  • Haha
  • Like
Reactions: 19 users
Just on CommSec, and as predicted by Fact Finder, BRN is now part of the Big Bank Index:

ANZ down 1.13%
CBA down 1.75%
NAB down 0.75%
WBC down 1.26%
and
BRN down 1.16%

What more can be said?


My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Love
Reactions: 29 users
Top Bottom