Hi @Zedjack33
What's the emoji for chagrin?
Mary doesn’t need to explain to us Australians what a TOSA is.
Mary Bennion on LinkedIn: #tosa #ml #arm #standards
What in the world is #TOSA?? We will tell you next week! Registration below 👇👇👇 #ml #arm #standards
www.linkedin.com
AI brain, AI technology, AI algorithms.. A million references to the mysterious AI antidote. Whether it's a phone, a robot, a car, a drone, a camera, a VR headset or a damn fridge.. I don't care, I am just desperate to buy something with BrainChip on board.. I'm sure there's 1,000+ sales right here if they'd just add a little AKIDA to their marketing and propaganda.
I am wondering why Edge Impulse is promoting this.
Edge Impulse on LinkedIn: Along with new camera hardware, the Tensor G2 AI brain in Google's Pixel 7…
Along with new camera hardware, the Tensor G2 AI brain in Google's Pixel 7 Pro phone helps you zoom, focus, unblur faces, and shoot in the dark. cc: CNET…
www.linkedin.com
I bought one already. Regardless of whether it has Akida onboard or not, their cameras are amazing.
I am not a techie, so all I can do is remember what Peter van der Made said at the 2019 AGM, when they had finished the design of the original AKD1000: he stated that 1,024 AKD1000 chips could be combined.
Yesterday I made a stupid and naive post about Akidas in parallel but deleted it again. Nevertheless, the thought won't let me go. The answers to my following questions can probably be found somewhere here, but you know how it is. So I'll just dare to ask:
1.) How many Akidas can be connected in parallel? (At least I assume that it is parallel.)
2.) Why is the number limited?
3.) I assume that the energy consumption is not the same no matter what is being calculated. What would the maximum energy consumption per Akida be if someone scaled up the full program?
My thinking goes in the direction of an Akida cluster, the maximum that is possible. Like an Akida mainframe.
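For anyone who wants to play with those numbers, here is a rough back-of-envelope sketch in Python. The per-chip figures are the publicly quoted AKD1000 specs (~1.2 million neurons, ~10 billion synapses) and the 1,024-chip ceiling is Peter's AGM comment; the linear scaling and the guess about why the limit exists are my assumptions, not anything from BrainChip.

```python
# Back-of-envelope Akida cluster arithmetic (assumptions flagged below).
NEURONS_PER_CHIP = 1_200_000        # publicly quoted AKD1000 figure (~1.2M)
SYNAPSES_PER_CHIP = 10_000_000_000  # publicly quoted AKD1000 figure (~10B)
MAX_CHIPS = 1_024                   # per Peter van der Made's 2019 AGM comment

def cluster_capacity(n_chips: int) -> dict:
    """Aggregate capacity of an n-chip Akida 'mainframe', assuming naive linear scaling."""
    if not 1 <= n_chips <= MAX_CHIPS:
        raise ValueError(f"n_chips must be between 1 and {MAX_CHIPS}")
    return {"neurons": n_chips * NEURONS_PER_CHIP,
            "synapses": n_chips * SYNAPSES_PER_CHIP}

print(cluster_capacity(MAX_CHIPS))
# Full 1,024-chip cluster: ~1.2 billion neurons, ~10 trillion synapses.
# 1,024 = 2**10, so one guess (mine, unconfirmed) is a 10-bit chip-ID field
# in the chip-to-chip addressing.
```

As for question 3, an event-based chip's draw depends on spiking activity, so a single worst-case power number per chip probably doesn't exist; it would scale with the workload, not with chip count alone.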
From about the 20 minute mark, he talks about their model libraries. They use 1-bit weights/activations, so the library may be a good fit for Akida.
Arm AI Ecosystem Partner Plumerai creates extremely small AND extremely accurate AI models that run #onarm—this is a great overview of how they do it—check it out!
A couple of weeks ago I presented a webinar about our fast and accurate people detection. The recording is now available. See how we developed our AI to run on an Arm CPU inside TI’s AM62x Sitara. Live demo starts at 13:34. Thanks for hosting me Texas Instruments!
AND Rob Telson likes it!
As you all always say: it's great to be a shareholder!
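For the non-techies: "1-bit weights/activations" means every weight and activation is just +1 or -1, so a dot product collapses to XNOR plus popcount on packed bits, which is why these models are so small and cheap to run. A toy sketch of the trick (my own illustration, not Plumerai's actual code):

```python
# Toy illustration of 1-bit (binarized) inference - not Plumerai's real code.
# With weights and activations constrained to {-1, +1}, a dot product becomes
# XNOR + popcount on packed bits instead of multiply-accumulate.

def pack_bits(values):
    """Pack a list of +1/-1 values into an int, 1 bit each (+1 -> 1, -1 -> 0)."""
    word = 0
    for i, v in enumerate(values):
        if v == 1:
            word |= 1 << i
    return word

def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two {-1,+1} vectors of length n via XNOR + popcount."""
    matches = bin(~(a_bits ^ w_bits) & ((1 << n) - 1)).count("1")  # XNOR, masked
    return 2 * matches - n  # each match contributes +1, each mismatch -1

acts = [1, -1, 1, 1, -1, -1, 1, -1]
wts  = [1, 1, -1, 1, -1, 1, 1, -1]
assert binary_dot(pack_bits(acts), pack_bits(wts), 8) == sum(a * w for a, w in zip(acts, wts))
```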
When you think about it, Plumerai is like the Akida simulation in MetaTF.
The inference engine is software running on a Cortex-M with sufficiently low power that no fans are required. It can take up to 25 fps, but 5 fps is adequate for tracking.
So maybe, with Akida loaded with the Plumerai model library, Akida doing the object detection (bounding boxes) would take the detection load off the software, and the software could then do the tracking.
Late edition: Using Akida to do the object detection would reduce power consumption and speed up detection. I also guess it would significantly improve fps, because the software detection would be the main bottleneck limiting fps.
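A minimal sketch of that split, with made-up placeholder components (none of these names are real BrainChip or Plumerai APIs): the accelerator produces bounding boxes every few frames and cheap software tracking coasts in between, so the software detector stops being the fps bottleneck.

```python
# Sketch of the detect-on-accelerator / track-in-software split (illustration
# only; detect_on_accelerator and update_tracks are hypothetical placeholders).

def run_pipeline(frames, detect_on_accelerator, update_tracks, detect_every=5):
    """Detect every `detect_every` frames; coast the tracker in between."""
    tracks = []
    for i, frame in enumerate(frames):
        if i % detect_every == 0:
            boxes = detect_on_accelerator(frame)   # fast, low-power bounding boxes
            tracks = update_tracks(tracks, boxes)  # re-associate tracks with boxes
        else:
            tracks = update_tracks(tracks, None)   # predict/coast between detections
    return tracks
```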
Hi @Diogenese
One of their use cases is a smart doorbell security camera, with the idea that it can monitor for people and will not set off false alarms for birds, cats, dogs etc., while at the same time not alerting for people who simply walk past the premises on the public footpath/sidewalk.
From what I have read, it does not seem to have the ability to one-shot learn the occupants of the residence, so if I am correct this would be another advantage AKIDA would bring to the table.
My opinion only DYOR
FF
AKIDA BALLISTA
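A minimal sketch of that doorbell logic, assuming hypothetical detector/recognizer components (the object and method names are made up for illustration, not MetaTF's real API); one-shot enrolment is what would let the device stay quiet for the household:

```python
# Hypothetical smart-doorbell flow (illustration only; detector, recognizer
# and their methods are made-up placeholders, not any real BrainChip API).

def enroll_resident(recognizer, name, face_crop):
    """One-shot enrolment: learn a resident from a single example."""
    recognizer.learn(name, face_crop)  # hypothetical one-shot learn() call

def handle_frame(frame, detector, recognizer, known_residents):
    """Return 'alert' only for an unknown person approaching the door."""
    for obj in detector.detect(frame):       # hypothetical detect() call
        if obj.label != "person":
            continue                         # birds/cats/dogs: no alarm
        if not obj.approaching_door:
            continue                         # footpath passers-by: no alarm
        if recognizer.identify(obj.face) in known_residents:
            continue                         # enrolled occupant: no alarm
        return "alert"
    return "idle"
```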
I remember Rob Telson saying our Ken the robot might need a companion like a dog.
Loona petbot fits the description. Opinion only, dyor.
Innovative Robotics Firm KEYi Technology Introduces Loona to its Family of Consumer Robots
/PRNewswire/ -- KEYi Technology, leading consumer robotics company, today introduced Loona, a heartwarming robotic pet, to their existing line of consumer...
www.prnewswire.com
Furthermore, with its 3D ToF (time-of-flight) camera, Loona can roam freely using four-wheel and two-wheel motions through its proprietary self-balancing design. Additionally, Loona features a high-performance CPU, allowing the chip to perform 54 trillion neural network convolution calculations per second, equating to one-quarter of a PC's calculation capability, which is rarely seen in consumer robotics.
Voice-based AI: Voice source localization, voice wake-up, voice recognition, and natural language recognition
Visual-based AI features: Face recognition, emotion detection, human body recognition, human skeleton-based recognition, gesture recognition, motion detection, edge detection, 3D environment detection, trajectory planning
Emotional Interactive AI model
Loona is equipped with powerful offline processing capabilities. All AI functions are processed locally by Loona, except voice recognition and natural language processing, which are based on the Lex service provided by Amazon. Loona's processing ability is as powerful as a smartphone's; it can perform 5 trillion neural network convolution calculations per second (5 TOPS, at the same level as the iPhone XR).
Edit: I see @clip had already mentioned it a few days ago.
It was fun when Rob said, and I paraphrase: when I'm listening to all of the other presenters, all I can think of is BrainChip. I truly believe he meant that, and that was off the script, in the moment. It was not a marketing thing; he is sold.
With Mercedes we know three things:
1. Hey Mercedes is an example of using AKIDA,
2. There are other uses unspecified,
3. When it is deployed at scale it will revolutionise existing technology in Mercedes vehicles,
thus all of your nominated categories and more are in play for Brainchip to win at Mercedes Benz.
When we listen to the Luca Verre Prophesee podcast: despite having worked with Qualcomm Snapdragon, and then SynSense, it was not until Prophesee worked with BrainChip and used AKIDA with their vision sensor that they were able to reach the full potential of their technology.
Paraphrasing what Rob Telson said in the latest ML presentation: AKIDA is just better than anything else.
My opinion only DYOR
FF
AKIDA BALLISTA
Feel like dismantling it to see what's inside?
Morning fellow BRNers,
Yes, it was a great moment.
I do like this extracted comment:
This is a very interesting interview with some of the Sony engineers that worked with Prophesee on their EVS. They sound very confident in the tech. Future looks bright!
Event-based Vision Sensors that can function like human optic nerves and monitor only changes in subjects will open up vast new possibilities for AI and robots | Feature|Sony Semiconductor Solutions Group
Introducing special contents of Sony Semiconductor Solutions Group.
www.sony-semicon.com
Sakai: First of all, I want to devote myself fully to this project until we can achieve mass production and shipment of this product, the IMX636.
Nitta: Generally speaking, I want to focus on having EVS become widespread in the market.
Furukawa: To begin with, I believe we must first properly bring EVS to mass production.
Ihara: In addition to the manufacturing and inspection industries, I would like to expand the application range of EVS to various areas where frame-based image sensors are currently in use.