HopalongPetrovski
I'm Spartacus!
Everyone’s portfolio today
Another potential buying opportunity.

Giddyuppppp.

Everyone’s portfolio today
Amazon Introduces the New Blink Wired Floodlight Camera and Blink Mini Pan Tilt—Offering Customers Even More Flexibility in Security Coverage and Peace of Mind
Blink Wired Floodlight Camera uses Amazon’s AZ2 Neural Edge Processor to capture and process videos locally—and starts at just $99.99
New Blink Mini Pan Tilt mount brings additional functionality to the popular Blink Mini, giving customers the ability to pan and tilt their cameras remotely
SEATTLE, September 28, 2022--(BUSINESS WIRE)--Amazon (NASDAQ: AMZN) today introduced two new additions to the Blink family of devices—the new Blink Wired Floodlight Camera and the new Blink Mini Pan Tilt. At just $99.99, the Blink Wired Floodlight Camera combines a smart security camera and powerful LED lighting in one streamlined design, plus Amazon’s AZ2 silicon to process videos without going to the cloud. The Blink Mini Pan Tilt is a new mount that works with Blink Mini to give you a wider field of view and let you remotely pan and tilt to follow motion.
"The Blink Wired Floodlight Camera is our first wired floodlight device, and it adds to the existing lineup of easy-to-use, reliable, and affordable security devices that help customers keep an eye on their homes," said Mike Harris, chief operating officer at Blink. "With an all-in-one security and lighting design, and a price below $100, it offers a mix of performance and value that’s hard to beat. Plus, it leverages the intelligence of Amazon silicon, enabling us to offer features such as computer vision and local video processing for the first time."
Blink Wired Floodlight Camera—Advanced Features at an Affordable Price
The Blink Wired Floodlight Camera is designed to offer high performance for those looking for a hardwired security solution in an affordable package. Support for preferred motion detection zones means you can focus on the areas that are most important to you, and new person detection provides the ability to limit motion alert notifications to only when a person is present. Blink Wired Floodlight Camera’s enhanced motion detection features are built on the capabilities provided by Amazon’s AZ2 Neural Edge Processor, which also enables video content to be processed locally on the edge.
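The zone-plus-person gating described above is simple to picture in code. Here is a minimal sketch of the idea, not Blink's actual firmware: the `MotionEvent` type, the zone coordinates, and the `should_notify` helper are all hypothetical illustrations.

```python
# Minimal sketch of zone-plus-person alert gating; names and values here are
# hypothetical illustrations, not Blink's actual implementation.
from dataclasses import dataclass

@dataclass
class MotionEvent:
    x: float              # normalized horizontal position of the motion (0..1)
    y: float              # normalized vertical position of the motion (0..1)
    person_present: bool  # verdict of an on-device person classifier

# User-configured "preferred zones" as normalized boxes: (x0, y0, x1, y1).
PREFERRED_ZONES = [(0.0, 0.4, 0.6, 1.0)]  # e.g. the driveway half of the frame
NOTIFY_ON_PERSON_ONLY = True

def should_notify(event: MotionEvent) -> bool:
    """Alert only on motion inside a preferred zone, optionally people only."""
    in_zone = any(x0 <= event.x <= x1 and y0 <= event.y <= y1
                  for (x0, y0, x1, y1) in PREFERRED_ZONES)
    if not in_zone:
        return False
    return event.person_present or not NOTIFY_ON_PERSON_ONLY

print(should_notify(MotionEvent(0.3, 0.7, person_present=True)))  # True
print(should_notify(MotionEvent(0.9, 0.1, person_present=True)))  # False: out of zone
```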
The Blink Wired Floodlight Camera provides 2600 lumens of LED lighting, 1080p HD live view, and crisp two-way audio. Setup is easy using an existing wired connection, and you can easily store video clips locally with a Sync Module 2 via a USB flash drive (sold separately). With a Blink Subscription Plan, you can also store video clips and photos in the cloud.
Blink Mini Pan Tilt—Adding New Functionality and Flexibility for Blink Mini
The new Blink Mini Pan Tilt adds a motorized mount to the Blink Mini to help keep an eye on even more of your home. With Mini Pan Tilt, you instantly gain the ability to remotely pan left and right, and tilt up and down, using the Blink app—getting corner-to-corner, 360-degree coverage of any room. If you already have a Blink Mini, you can easily add just the mount via micro-USB, and you can place it on a table or countertop, or connect a tripod or wall mount (sold separately) for additional functionality.
Don’t forget NASA referring to using unconnected Alexa in space???

Further to this, BMW has just announced that it will be using Amazon Alexa AI for its in-cabin voice processing. #interesting!
Just thinking back to the "Hey, Akida" demo in the BMW. hmmmmmmm........
BMW brings Amazon’s Alexa AI into the car for its own voice assistant | t3n
BMW wants to improve its in-house voice assistant and is drawing on the technology behind Amazon’s Alexa to do so. The first vehicles are expected to reach the market by 2024. The voice assistant, which responds to the hotword “Hey BMW”, will in future be built on an Alexa foundation. With this, the... t3n.de
“Ambient Intelligence.” Don’t forget NASA referring to using unconnected Alexa in space???
Rob Telson mentioning Alexa by name many times when speaking about AKIDA’s unconnected ability then dropping Alexa from his vocabulary???
Did Alexa become more advanced by itself or did someone or something lend a hand from the future???
My opinion only DYOR
FF
AKIDA BALLISTA
“Ambient Intelligence.”
Amazon’s AZ2 CPU knows your face
The latest version of the Echo Show uses a CPU that can remember your face. (www.theverge.com)
It's great to be a shareholder 🏖
The AZ2 Neural Engine can work 22 times faster than Amazon’s last-generation processor. Edge computing is not only better for privacy, but it’s faster, too.
This is amazing. If they did that without BrainChip, I might consider selling BrainChip and cry.

It certainly does walk and talk like a duck, doesn’t it @Learning!
What you need to know about the Amazon AZ2 Neural Engine
By Jerry Hildenbrand
last updated February 03, 2022
Amazon AZ2 SoC (Image credit: Amazon)
The Amazon Echo Show 15 not only hangs on your wall but can learn to recognize your face. That's because it has a new piece of Amazon-designed silicon inside dubbed the Amazon AZ2 Neural Engine.
Yes, Amazon custom designs ARM chips. The AZ2 isn't even the first one (hence the 2), but it's a lot more capable than the AZ1, which powers some of the best Alexa speakers and offers something new for Amazon — edge computing.
If you're not sure what edge computing is, this chip and what it does actually makes it easy to understand. All the processing to learn and recognize your face is done using machine learning through the chip itself and nothing needs to be sent across the internet to make that happen.
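To make the "edge" idea concrete, here is a minimal, self-contained sketch of fully local inference. The "model" is just a fixed random projection standing in for a real on-chip neural network, and all names here are hypothetical; the point it illustrates is that the frame is consumed locally and no pixel data leaves the device.

```python
# Toy sketch of edge inference: a fixed random projection stands in for a
# real on-chip neural network. Nothing here touches a network connection;
# only the derived embedding ever exists outside the raw frame.
import numpy as np

rng = np.random.default_rng(42)
FRAME_H, FRAME_W = 64, 64  # tiny frame to keep the toy "model" small
W = rng.standard_normal((128, FRAME_H * FRAME_W)) / np.sqrt(FRAME_H * FRAME_W)

def embed_on_device(frame: np.ndarray) -> np.ndarray:
    """Run the toy model locally and return a 128-d embedding vector."""
    return W @ frame.astype(np.float32).ravel()

frame = rng.integers(0, 256, size=(FRAME_H, FRAME_W), dtype=np.uint8)  # stand-in camera frame
vec = embed_on_device(frame)
print(vec.shape)  # (128,) -- computed entirely on-device, nothing sent to a server
```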
I still think any computer learning to recognize human faces is pretty creepy but doing it locally instead of through a remote server is pretty cool. Also, you have to opt-in for this feature, so you can still buy Amazon's new Echo Show 15 even if you think it's creepy like I do. But enough about creepy stuff.
(Image credit: Amazon)
What the AZ2 can do — on paper anyway — is pretty impressive. Consider the last-gen AZ1, which was able to recognize your voice without Amazon needing to send that data through the cloud. The new model does that, of course, but it's also capable of performing 22 times the amount of operations each second.
This means it has plenty of local bandwidth to learn your face as well as your voice. In fact, Amazon says it can process speech and facial recognition simultaneously. A big reason for this is because it's a neural edge processor. Those sound like the kind of words tech companies like to throw around, but they do mean something — the "neural" part means it's a chip used with algorithms for machine learning and the "edge" part means it can do it without calling for backup from some server.
By doing things locally, there is almost zero latency: virtually no wait time between operations. We haven't seen how well it actually performs, but based on its capabilities, it looks like the perfect chip to put inside something like an Echo Show.
Speaking of that, the Echo Show 15 is the only device that will use the new AZ2 Neural Edge chip for now. We expect that to change as Amazon brings its Visual ID feature to other devices. Maybe even drones or robots.
Whether you love Amazon products or hate them, you can't help but be impressed with the new AZ2. It's easy to forget that Amazon is also part of Big Tech, but things like this serve to remind us that some top-level engineers work a lot of hours to build those Echo devices so many people love.
(Image credit: Ring)
Either we have a monstrosity of a competitor that has just appeared, or this is the first dominant sign of our incorporation into all things Amazon. Data centres and edge devices would be the dream.
Great minds think alike @Bravo, I was reading the same article 20 minutes ago, but with hands on tools I couldn't post it. Lol.
Great work, @wilzy123!
Personally, I'm not yet convinced: it seems more complex than AKIDA and requires too many images.

Computer vision
The science behind visual ID
A new opt-in feature for Echo Show and Astro provides more-personalized content and experiences for customers who choose to enroll.
By The Amazon visual ID teams
September 28, 2021
With every feature and device we build, we challenge ourselves to think about how we can create an immersive, personalized, and proactive experience for our customers. Often, our devices are used by multiple people in our homes, and yet there are times when you want a more personalized experience. That was the inspiration for visual ID.
On the all-new Echo Show 15, Echo Show 8, and Echo Show 10, you and other members of your household will soon be able to enroll in visual ID, so that at a glance you can see personalized content such as calendars and reminders, recently played music, and notes for you.
- "Unlocking AI for everyone"
- "Astro’s Intelligent Motion brings state-of-the-art navigation to the home"
- "A more useful way to measure robotic localization error"
- "How Amazon is using self-service to democratize AI"
And with Astro, a new kind of household robot, enrolling in visual ID enables Astro to do things like find you to deliver something, such as a reminder or an item in Astro’s cargo bin.
Creating your visual ID
Visual ID is opt-in, so you must first enroll in the feature, much as you can enroll in voice ID (formerly Alexa voice profile) today. During enrollment, you will use the camera on your supported Echo Show device or Astro to take a series of headshots at different angles. For visual ID to accurately recognize you, we require five different angles of your face.
During the enrollment process, the device runs algorithms to ensure that each of the images is of high enough quality. For example, if the room is too dark, you will see on-screen instructions to adjust the lighting and try again. You will also see on-screen notifications as an image of each pose is successfully captured.
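As an illustration of the kind of on-device quality gate described above, here is a minimal sketch. The five pose labels, the brightness threshold, and the `capture_headshot` stub are assumptions for illustration, not Amazon's actual checks.

```python
# Toy sketch of an enrollment quality gate: capture five poses and reject
# any that are too dark. All names and thresholds here are illustrative.
import numpy as np

REQUIRED_POSES = ["front", "left", "right", "up", "down"]  # five angles (assumed labels)
MIN_BRIGHTNESS = 60.0  # mean pixel value below this counts as "room too dark"

def capture_headshot(pose: str) -> np.ndarray:
    """Stand-in for the device camera; returns a grayscale frame."""
    rng = np.random.default_rng()
    return rng.integers(0, 256, size=(480, 640), dtype=np.uint8)

def pose_image_ok(image: np.ndarray) -> bool:
    """Quality gate: reject captures too dark to enroll reliably."""
    return float(image.mean()) >= MIN_BRIGHTNESS

enrolled = {}
for pose in REQUIRED_POSES:
    shot = capture_headshot(pose)
    if pose_image_ok(shot):
        enrolled[pose] = shot  # images stay on-device only
    else:
        print(f"{pose}: too dark, adjust the lighting and try again")
```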
The images are used to create numeric representations of your facial characteristics. Called vectors (one for each angle of your face), these numeric representations are just that: a string of numbers. The images are also used to revise the vectors in the event of periodic updates to the visual ID model — meaning customers are not required to re-enroll in visual ID every time there is a model update. These images and vectors are securely stored on-device, not in Amazon’s cloud.
Up to 10 members of a household per account can enroll on each compatible Echo Show or Astro to enjoy more-personalized experiences for themselves. Customers with more than one visual-ID-compatible device will need to enroll on each device individually.
A screenshot of the enrollment process, during which the device’s camera takes a series of headshots at different angles.
Identifying an enrolled individual
Once you’ve enrolled in visual ID, your device attempts to match people who walk into the camera’s field of view with the visual IDs of enrolled household members. There are two steps to this process, facial detection and facial recognition, and both are done through local processing using machine learning models called convolutional neural networks.
To recognize a person, the device first uses a convolutional neural network to detect when a face appears in the camera’s field of view. If a person whom the device does not recognize as enrolled in visual ID walks into the camera’s field of view, the device will determine that there are no matches to the stored vectors. The device does not retain images or vectors from unenrolled individuals after processing. All of this happens in fractions of a second and is done securely on-device.
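A minimal sketch of the detect-then-match step described above, assuming embedding vectors compared by cosine similarity: the 128-dimensional vectors, the 0.6 threshold, and the `identify` helper are illustrative assumptions, not Amazon's actual pipeline.

```python
# Toy sketch of matching a detected face's embedding against the enrolled
# per-angle vectors. Dimensions and threshold are illustrative only.
from typing import Optional
import numpy as np

MATCH_THRESHOLD = 0.6  # illustrative; a real system would tune this carefully

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(face_vec: np.ndarray, enrolled: dict) -> Optional[str]:
    """Return the best-matching enrolled name, or None for an unknown face.
    Mirroring the article, vectors from unmatched faces are not retained."""
    best_name, best_sim = None, MATCH_THRESHOLD
    for name, vectors in enrolled.items():  # one stored vector per face angle
        for v in vectors:
            sim = cosine(face_vec, v)
            if sim > best_sim:
                best_name, best_sim = name, sim
    return best_name

rng = np.random.default_rng(0)
bob_vectors = [rng.standard_normal(128) for _ in range(5)]  # five enrollment angles
enrolled = {"Bob": bob_vectors}

probe = bob_vectors[0] + 0.05 * rng.standard_normal(128)  # noisy re-sighting of Bob
print(identify(probe, enrolled))                      # "Bob"
print(identify(rng.standard_normal(128), enrolled))   # None (stranger)
```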
When your supported Echo Show device recognizes you, your avatar and a personalized greeting will appear in the upper right of the screen.
An example of what Echo Show 15 might show on its screen once an enrolled individual is recognized.
What shows on Astro’s screen will depend on what Astro is doing. For example, if you’ve enrolled in visual ID, and Astro is trying to find you, Astro will display text on its screen — “Looking for [Bob]”, followed by “Found [Bob]” — to acknowledge that it’s recognized you.
Enhancing fairness
We set a high bar for equity when it came to designing visual ID. To clear that bar, our scientists and engineers built and refined our visual ID models using millions of images — collected in studies with participants’ consent — explicitly representing a diversity of gender, ethnicity, skin tone, age, ability, and other factors. We then set performance targets to ensure the visual ID feature performed well across groups.
In addition to consulting with several Amazon Scholars who specialize in computer vision, we also consulted with an external expert in algorithmic bias, Ayanna Howard, dean of the Ohio State University College of Engineering, to review the steps we took to enhance the fairness of the feature. We’ve implemented feedback from our Scholars and Dr. Howard, and we will solicit and listen to customer feedback and make improvements to ensure the feature continues to improve on behalf of our customers.
Privacy by design
As with all of our products and services, privacy was foundational to how we built and designed visual ID. As mentioned above, the visual IDs of enrolled household members are securely stored on-device, and both Astro and Echo Show devices use local processing to recognize enrolled customers. You can delete your visual ID from individual devices on which you’ve enrolled through on-device settings and, for Echo Show, through the Alexa app. This will delete the stored enrollment images and associated vectors from your device. We will also automatically delete your visual ID from individual devices if your face is not recognized by that device for 18 months.
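The 18-month retention rule lends itself to a short sketch. The profile layout and the `purge_stale_ids` helper below are hypothetical, intended only to illustrate the stated policy.

```python
# Toy sketch of the 18-month auto-delete rule: if a device hasn't recognized
# an enrolled face in ~18 months, its visual ID is deleted locally.
from datetime import datetime, timedelta

RETENTION = timedelta(days=548)  # roughly 18 months

def purge_stale_ids(profiles: dict, now: datetime) -> dict:
    """Drop any profile whose last on-device recognition is too old."""
    return {name: p for name, p in profiles.items()
            if now - p["last_recognized"] <= RETENTION}

profiles = {
    "Bob":   {"last_recognized": datetime(2022, 9, 1), "vectors": []},
    "Alice": {"last_recognized": datetime(2021, 1, 1), "vectors": []},
}
print(list(purge_stale_ids(profiles, datetime(2022, 10, 1))))  # ['Bob']
```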
It’s still day one for visual ID, Echo Show, and Astro. We look forward to hearing how our customers use visual ID to personalize their experiences with our devices.
Source: "The science behind visual ID", www.amazon.science