Was reading about the NVISO listing on the ASX a couple of months back on the crapper. That's one listing I'll pounce on once it's available. @chapman89
For those who can, highly recommended.

Hi everybody, I've been in contact with Tim, the CEO of NVISO, and there will be another demo here in Sydney outside The Glenmore Hotel in The Rocks.
It's between 4-6pm today. The demo will be shown in a brand new AMG Mercedes G wagon, and I highly recommend those who are in Sydney attend and see the demo live for themselves.
You just have to rock up, you'll see the car there.
Any short videos captured would be greatly appreciated.
Hahahaha.... that's gold mate (from a few days ago, I see what you did there....)

Oh dear. Please don't mention buying at cheaper prices.
Mercedes seems to be keen to give their expensive cars for demos...
Dear @Diogenese,
This patent for Apple's new AR/VR device suggests it will be using spiking neural networks for deformable object tracking. Could this be Akida?
https://patents.google.com/patent/US20200273180A1/en
Deformable object tracking
Abstract
Various implementations disclosed herein include devices, systems, and methods that use event camera data to track deformable objects such as faces, hands, and other body parts. One exemplary implementation involves receiving a stream of pixel events output by an event camera. The device tracks the deformable object using this data. Various implementations do so by generating a dynamic representation of the object and modifying the dynamic representation of the object in response to obtaining additional pixel events output by the event camera. In some implementations, generating the dynamic representation of the object involves identifying features disposed on the deformable surface of the object using the stream of pixel events. The features are determined by identifying patterns of pixel events. As new event stream data is received, the patterns of pixel events are recognized in the new data and used to modify the dynamic representation of the object.
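To make the abstract concrete, here is a minimal Python sketch of the event-driven update loop it describes. The PixelEvent type, the feature dictionary standing in for the "dynamic representation", and the nearby-events centroid rule are illustrative assumptions on my part, not Apple's actual method.

from dataclasses import dataclass

@dataclass
class PixelEvent:
    x: int         # pixel column
    y: int         # pixel row
    t: float       # timestamp in seconds
    polarity: int  # +1 brighter, -1 darker

def update_representation(features, events, radius=3):
    """Nudge each tracked feature toward the centroid of nearby events.

    `features` maps a feature id to its last known (x, y) position and
    stands in for the patent's 'dynamic representation' of the object.
    """
    for fid, (fx, fy) in features.items():
        nearby = [(e.x, e.y) for e in events
                  if abs(e.x - fx) <= radius and abs(e.y - fy) <= radius]
        if nearby:
            features[fid] = (sum(x for x, _ in nearby) / len(nearby),
                             sum(y for _, y in nearby) / len(nearby))
    return features

# Seed features would come from the pattern-identification step;
# each new batch of events then refines their positions.
features = {0: (120.0, 88.0), 1: (131.0, 90.0)}
batch = [PixelEvent(121, 89, 0.001, +1), PixelEvent(122, 88, 0.001, -1)]
features = update_representation(features, batch)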
If it looks anything like this, it's a very expensive bit of kit.
Hi Bravo,
Apple's face-tracking invention uses an event camera. They would need some form of NN to identify the features as mentioned below. This could be software (CNN), their preferred option, or hardware. They also talk of using 2 event cameras for improved object movement recognition.
US2020273180A1 DEFORMABLE OBJECT TRACKING
1. A system comprising:
an event camera comprising a two-dimensional (2D) array of pixel sensors;
non-transitory computer-readable storage medium; and
one or more processors communicatively coupled to the non-transitory computer-readable storage medium and the event camera,
wherein the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors, cause the system to perform operations, the operations comprising:
receiving a stream of pixel events output by the event camera, the event camera comprising a plurality of pixel sensors positioned to receive light from a scene disposed within a field of view of the event camera, each respective pixel event generated in response to a respective pixel sensor detecting a change in light intensity that exceeds a comparator threshold;
identifying features disposed on a deformable surface of the object using the stream of pixel events; and
generating a dynamic representation of the object, the dynamic representation comprising the features;
modifying the dynamic representation of the object in response to obtaining additional pixel events output by the event camera; and
outputting the dynamic representation of the object for further processing.
2. The system of claim 1, further comprising a second event camera, wherein modifying the dynamic representation of the object comprises:
identifying the features in the stream of pixel events from the event camera;
identifying the features in a second stream of pixel events from the second event camera; and
tracking three dimensional (3D) locations of the features based on correlating the features identified from the streams of pixel events from the event camera and the features identified from the second stream of pixel events from the second event camera.
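Claim 2 is classic stereo: match a feature across the two event streams and triangulate its 3D position from the disparity. A hedged Python sketch, assuming rectified cameras with a known baseline and focal length (all parameter values below are mine, not the patent's):

WIDTH, HEIGHT = 640, 480  # sensor resolution (illustrative)

def triangulate(x_left, x_right, y, focal_px=500.0, baseline_m=0.06):
    """3D position of one feature seen by both event cameras.

    Standard rectified-stereo geometry: depth is inversely
    proportional to horizontal disparity.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        return None                            # reject a bad match
    z = focal_px * baseline_m / disparity      # depth in metres
    x = (x_left - WIDTH / 2) * z / focal_px    # lateral offset
    y3d = (y - HEIGHT / 2) * z / focal_px      # vertical offset
    return (x, y3d, z)

# A feature correlated across both streams at the same timestamp:
print(triangulate(330.0, 310.0, 250.0))  # -> roughly 1.5 m from the rig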
As we know, Akida can perform the CNN function, or rather the CNN data can be adapted for use in Akida in a far more efficient manner than running the CNN process on a CPU.
The fact that they mention SNNs in passing does not of itself indicate that Apple are aware of Akida, as the most widely known SNN in 2018 would have been analog.
So, yes, Akida could be used to advantage in Apple's face-recognition patent.
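For the curious, the CNN-to-Akida path Diogenese mentions is what BrainChip's cnn2snn toolkit is for. A sketch of the workflow follows, with function names and parameters as I recall them from the toolkit's docs (treat them as assumptions and verify against the current release); the model file is hypothetical.

import numpy as np
from tensorflow import keras
from cnn2snn import quantize, convert  # BrainChip's conversion toolkit

# Any trained Keras CNN, e.g. a small facial-feature classifier.
cnn = keras.models.load_model("face_features_cnn.h5")  # hypothetical file

# 1. Quantize weights and activations to the low bit-widths Akida runs.
quantized = quantize(cnn, weight_quantization=4, activ_quantization=4)

# 2. Convert the quantized CNN into an Akida event-domain model.
akida_model = convert(quantized)

# 3. Inference then runs on the Akida fabric instead of the host CPU.
frames = np.zeros((1, 32, 32, 3), dtype=np.uint8)  # dummy NHWC input
outputs = akida_model.predict(frames)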
Interesting that CommSec state BRN is their number 1 traded stock and then take the drum-roll approach to releasing the video - engaging their audience to maximise viewing. Shows just how far BRN have come and how seriously they are now being taken. AIMO.
Smartphone camera module shipments will increase to 5.02 billion units in 2022
Thursday, May 26, 2022

Smartphone camera module shipments will increase to 5.02 billion units in 2022, registering an annual growth rate of 5%, according to TrendForce research. Since the price-performance ratio of whole devices is the primary basis for consumer purchases, the cost of high-standard solutions such as the five-camera design and main cameras sporting hundreds of millions of pixels will inevitably be passed on to the manufacturer with little improvement in sales performance. Therefore, the three-camera module remains the mainstream design this year and is forecast to account for more than 40% of total shipments. Only some smartphone models will adopt a four-camera design to differentiate their specifications, while the number of products with dual cameras or fewer will fall, with entry-level models being the primary candidates.

By combining a high-pixel main camera with two low-pixel function cameras, a mobile phone can retain a three-camera design while taking hardware costs into account. TrendForce believes this is also the primary reason for the development of low-end and mid-range products towards a three-camera or even four-camera design, in addition to the increased adoption of low-pixel function cameras including 2-megapixel depth cameras and macro cameras.

Growth momentum in mobile phone camera module shipments in 2022 will come primarily from the additional low-pixel cameras prompted by the three-camera design. Although a high-resolution main camera with better specifications allows mobile phone brands to provide better photographic performance, pixel specifications have not continued to climb higher and mainstream cameras linger at approximately 50 million pixels, causing a slight stagnation in demand. TrendForce indicates that mobile phone brands are currently curtailing competition on the hardware specifications of camera modules but remain focused on photographic and video performance as promotional features, and will emphasise dynamic photography, night photography and other scenarios to highlight product advantages. This can be achieved not only by strengthening the optical performance of the camera module itself but also through algorithms and software, thereby increasing the enthusiasm of mobile phone brands to invest in self-developed chips. In addition to Apple and Samsung, which have long used their own SoCs, other mobile phone brands have also tried to launch self-developed chips to enhance image processing performance, such as Xiaomi's Surging C1, OPPO's MariSilicon X and VIVO's V1+.
This is the car (compliments of Jesse)
Totally agree ............. IMO, the EAPs have been in the system for some time now (even years)....... So, is this why Sean H keeps saying..... watch the revenue side of things?

Great post Tech. I would like to elaborate on the time taken to bring a product to market though. Whilst I completely agree with your timeframe, I would like to point out that the timeframe could have commenced in 2020. The EAP saw a couple of dozen or so "Tier 1" companies pay $50k or whatever it was to work with BrainChip to effectively test Akida in their applications. Now, as subjective as it is, I don't think it's unreasonable to assume that some of this testing would have occurred in real-world products, Mercedes being a good example. If this were the case, I think there's potential for these companies to be further along in their journey to commercialising these products than if they were to commence R&D, say, tomorrow. Obviously they will wait until the last possible moment to sign with BrainChip, as this will defer payment longer. The EAP has given these companies a fantastic opportunity to implement Akida, test, and prep for market, without having to provide further progress payments along the way. I think it's possible that Akida could hit the shelves in some form sooner rather than later.
AIMO
Akida ubiquitous? Who are you? Harry Potter?

I particularly liked Sean's comment at the AGM when answering a question about competitors (paraphrasing): choose competitors with NPUs based on traditional tech to eke out improvements of 5-10%, or choose Akida to get 500-1000% performance improvement.
We don’t currently have any competition, the market will decide we are the way they have to go to be competitive. AIMO. Akida ubiquitous.
I still think the chips will have a place in certain circumstances..... like handheld medical units, for instance.

Still trying to put the pieces together. We know from the AGM that BRN is now only an IP company. Originally the plan was to sell chips in volumes up to around 1 million; above this, BRN recommended going to an IP license. I could imagine that dealing with multiple smaller customers would put some strain on our engineering team. Better to partner with MegaChips and split the load for those customers.
All guesses from me, possibly way off the mark.
NVISO connection:

Computer Vision | Project Eyes of Things | EOT
International project focused on building the best embedded, wearable, computer vision platform (intelligent camera) ever. Innovation Action funded under the European Union's H2020 Framework Programme. (eyesofthings.eu)

Participants | Eyes of Things | EOT
Project participants of Eyes of Things are: Visilab, UCLM, Awaiba, Evercam, DFKI, Movidius, Thales, Fluxguide, NVISO. EoT is an Innovation Action funded under the European Union's H2020 Framework Programme. (eyesofthings.eu)