BRN Discussion Ongoing

Boab

I wish I could paint like Vincent
Hi everybody, I’ve been in contact with Tim, the CEO of NVISO, and there will be another demo here in Sydney, outside The Glenmore Hotel in The Rocks.

It’s between 4 and 6 pm today. The demo will be shown in a brand-new Mercedes-AMG G-Wagon; I highly recommend that those who are in Sydney attend and see it live for themselves.

You just have to rock up; you’ll see the car there.
Damn you Eastern Staters. Us Westerners are very jealous.
 
  • Haha
  • Like
  • Love
Reactions: 12 users
Was reading about NVISO listing on the ASX a couple of months back on the crapper. That's one listing I'll pounce on once it's available. @chapman89
 
  • Like
  • Fire
  • Love
Reactions: 19 users

BaconLover

Founding Member
Hi everybody, I’ve been in contact with Tim, the CEO of NVISO, and there will be another demo here in Sydney, outside The Glenmore Hotel in The Rocks.

It’s between 4 and 6 pm today. The demo will be shown in a brand-new Mercedes-AMG G-Wagon; I highly recommend that those who are in Sydney attend and see it live for themselves.

You just have to rock up; you’ll see the car there.
For those who can, highly recommended.
Absolutely amazing to witness first-hand how these things work.
 
  • Like
  • Fire
  • Love
Reactions: 32 users

Boab

I wish I could paint like Vincent
Hi everybody, I’ve been in contact with Tim, the CEO of NVISO, and there will be another demo here in Sydney, outside The Glenmore Hotel in The Rocks.

It’s between 4 and 6 pm today. The demo will be shown in a brand-new Mercedes-AMG G-Wagon; I highly recommend that those who are in Sydney attend and see it live for themselves.

You just have to rock up; you’ll see the car there.
Any short videos captured would be greatly appreciated.
 
  • Like
  • Love
  • Fire
Reactions: 27 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Here's another potential use case for Akida in a compact wearable electronic device for long-term EEG recording, in other words a non-invasive means of monitoring patients affected by epilepsy!



Extract

Neuromorphic devices

The large amount of sensory data recorded by a wearable device calls for the development of low-power embedded "edge-computing" technologies that can process the signals being measured locally without requiring bulky computers, internet connection, or cloud servers. When developing a wearable sensory-processing device, neuromorphic engineering represents a promising technology.
Neuromorphic electronic circuits are a class of hybrid analog/digital circuits that implement hardware models of biological systems [29,30]. They carry out computation by emulating the dynamics of real neurons and synapses, configured as small spiking neural networks (SNNs). The styles of computation used in neuromorphic circuits are fundamentally different from those used by conventional computers.

Neuromorphic architectures provide massively parallel arrays of computing elements, exploit redundancy to achieve fault tolerance, and emulate the neural style of computation. In this way, neuromorphic systems can exploit to the fullest the features of advanced scaled Very Large Scale Integration (VLSI) processes and future emerging technologies, naturally coping with the problems that characterize them, such as device inhomogeneity and imperfections. Consequently, neuromorphic networks can be implemented in compact VLSI devices to perform real-time computation with low energy consumption [8,29,31].
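The spiking style of computation described in the extract can be sketched with a toy leaky integrate-and-fire (LIF) neuron, the basic building block of an SNN. The leak, threshold, and input values below are illustrative choices, not parameters of any particular neuromorphic device.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# decays each step, integrates input, and emits a spike on crossing a
# threshold. All constants here are illustrative, not from real hardware.

def lif_run(input_current, leak=0.9, threshold=1.0):
    """Simulate one LIF neuron over a sequence of input currents.
    Returns the list of time steps at which the neuron spiked."""
    v = 0.0          # membrane potential
    spikes = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in   # leaky integration of the input
        if v >= threshold:    # threshold crossing -> spike
            spikes.append(t)
            v = 0.0           # reset after firing
    return spikes

# A constant sub-threshold input still fires periodically, because
# charge accumulates between spikes.
print(lif_run([0.3] * 20))
```

Note how information is carried in the timing and rate of spikes rather than in continuous values, which is what makes event-driven hardware like this so frugal: no input change, no spikes, no work.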


 
  • Like
  • Love
  • Fire
Reactions: 21 users

Xhosa12345

Regular
  • Haha
  • Like
Reactions: 6 users

Reuben

Founding Member
Hi everybody, I’ve been in contact with Tim, the CEO of NVISO, and there will be another demo here in Sydney, outside The Glenmore Hotel in The Rocks.

It’s between 4 and 6 pm today. The demo will be shown in a brand-new Mercedes-AMG G-Wagon; I highly recommend that those who are in Sydney attend and see it live for themselves.

You just have to rock up; you’ll see the car there.
Mercedes seems to be keen to give their expensive cars for demos... :cool:
 
  • Like
  • Fire
  • Love
Reactions: 23 users

Diogenese

Top 20
Dear @Diogenese,

This patent for Apple's new AR/VR device suggests it will be using spiking neural networks for deformable object tracking. Could this be Akida? 🤞



View attachment 7865




https://patents.google.com/patent/US20200273180A1/en

Deformable object tracking

Abstract
Various implementations disclosed herein include devices, systems, and methods that use event camera data to track deformable objects such as faces, hands, and other body parts. One exemplary implementation involves receiving a stream of pixel events output by an event camera. The device tracks the deformable object using this data. Various implementations do so by generating a dynamic representation of the object and modifying the dynamic representation of the object in response to obtaining additional pixel events output by the event camera. In some implementations, generating the dynamic representation of the object involves identifying features disposed on the deformable surface of the object using the stream of pixel events. The features are determined by identifying patterns of pixel events. As new event stream data is received, the patterns of pixel events are recognized in the new data and used to modify the dynamic representation of the object.

Hi Bravo,

Apple's face-tracking invention uses an event camera. They would need some form of NN to identify the features as mentioned below. This could be software (CNN), their preferred option, or hardware. They also talk of using 2 event cameras for improved object movement recognition.

US2020273180A1 DEFORMABLE OBJECT TRACKING

1653616359591.png


1653616424694.png



1 . A system comprising:
an event camera comprising a two-dimensional (2D) array of pixel sensors;
non-transitory computer-readable storage medium; and
one or more processors communicatively coupled to the non-transitory computer-readable storage medium and the event camera,
wherein the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors, cause the system to perform operations, the operations comprising:
receiving a stream of pixel events output by the event camera, the event camera comprising a plurality of pixel sensors positioned to receive light from a scene disposed within a field of view of the event camera, each respective pixel event generated in response to a respective pixel sensor detecting a change in light intensity that exceeds a comparator threshold;
identifying features disposed on a deformable surface of the object using the stream of pixel events; and
generating a dynamic representation of the object, the dynamic representation comprising the features;
modifying the dynamic representation of the object in response to obtaining additional pixel events output by the event camera; and
outputting the dynamic representation of the object for further processing.


2 . The system of claim 1, further comprising a second event camera, wherein modifying the dynamic representation of the object comprises:
identifying the features in the stream of pixel events from the event camera;
identifying the features in a second stream of pixel events from the second event camera; and
tracking three dimensional (3D) locations of the features based on correlating the features identified from the streams of pixel events from the event camera and the features identified from the second stream of pixel events from the second event camera.
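Claim 2's two-camera arrangement amounts to classical stereo triangulation: once the same feature is correlated across both event streams, its depth follows from the disparity between the two views. A minimal sketch, with an assumed focal length and baseline (both made up for illustration):

```python
# Depth from stereo disparity, as arises when correlating one feature
# across two cameras (claim 2): Z = f * B / d, where f is the focal
# length in pixels, B the baseline between the cameras in metres, and
# d the horizontal disparity in pixels. f and B below are illustrative.

def depth_from_disparity(x_left, x_right, focal_px=800.0, baseline_m=0.06):
    """Return depth in metres for a feature seen at x_left and x_right."""
    d = x_left - x_right              # disparity in pixels
    if d <= 0:
        raise ValueError("feature must shift leftward in the right view")
    return focal_px * baseline_m / d

# A feature 4 px apart between the two sensors sits roughly 12 m away:
print(depth_from_disparity(212.0, 208.0))
```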

As we know, Akida can perform the CNN function, or rather the CNN data can be adapted for use in Akida in a far more efficient manner than running the CNN process on a CPU.
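One common way to adapt CNN data for a spiking substrate is rate coding, where a bounded activation becomes a spike count over a time window. This toy illustration shows the idea only; it is not BrainChip's actual conversion pipeline, and the window length is an assumption.

```python
# Toy rate-coding view of CNN-to-SNN conversion: a bounded CNN
# activation maps to a number of spikes in a fixed time window, spread
# evenly, so downstream spiking layers see a frequency instead of a
# continuous value. Window length and clamping are illustrative.

def activation_to_spike_train(activation, window=10):
    """Spread an activation in [0, 1] over `window` binary time steps."""
    a = min(max(activation, 0.0), 1.0)   # clamp like a bounded ReLU
    target = round(a * window)           # total spikes for this window
    train, emitted = [], 0
    for i in range(window):
        # spike whenever the running quota crosses the next integer
        if (i + 1) * target // window > emitted:
            train.append(1)
            emitted += 1
        else:
            train.append(0)
    return train

print(activation_to_spike_train(0.3))  # 3 of 10 steps fire, evenly spaced
```

The efficiency argument follows directly: small activations produce few spikes, and only spikes cost energy, whereas a CPU running the CNN pays full price for every multiply regardless of the value.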

The fact that they mention SNNs in passing does not of itself indicate that Apple are aware of Akida, as the most widely known SNN in 2018 would have been analog.

So, yes, Akida could be used to advantage in Apple's face recognition patent.

PS: We are already working with nViso on facial expression recognition, but that would not be an exclusive arrangement. We would, of course, need to keep the two sets of information separate.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 25 users

Boab

I wish I could paint like Vincent
Mercedes seems to be keen to give their expensive cars for demos... :cool:
If it looks anything like this, it’s a very expensive bit of kit.

G wagon.jpg
 
  • Like
  • Fire
  • Love
Reactions: 13 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hi Bravo,

Apple's face-tracking invention uses an event camera. They would need some form of NN to identify the features as mentioned below. This could be software (CNN), their preferred option, or hardware. They also talk of using 2 event cameras for improved object movement recognition.

US2020273180A1 DEFORMABLE OBJECT TRACKING

View attachment 7867

View attachment 7868


1 . A system comprising:
an event camera comprising a two-dimensional (2D) array of pixel sensors;
non-transitory computer-readable storage medium; and
one or more processors communicatively coupled to the non-transitory computer-readable storage medium and the event camera,
wherein the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors, cause the system to perform operations, the operations comprising:
receiving a stream of pixel events output by the event camera, the event camera comprising a plurality of pixel sensors positioned to receive light from a scene disposed within a field of view of the event camera, each respective pixel event generated in response to a respective pixel sensor detecting a change in light intensity that exceeds a comparator threshold;
identifying features disposed on a deformable surface of the object using the stream of pixel events; and
generating a dynamic representation of the object, the dynamic representation comprising the features;
modifying the dynamic representation of the object in response to obtaining additional pixel events output by the event camera; and
outputting the dynamic representation of the object for further processing.


2 . The system of claim 1, further comprising a second event camera, wherein modifying the dynamic representation of the object comprises:
identifying the features in the stream of pixel events from the event camera;
identifying the features in a second stream of pixel events from the second event camera; and
tracking three dimensional (3D) locations of the features based on correlating the features identified from the streams of pixel events from the event camera and the features identified from the second stream of pixel events from the second event camera.

As we know, Akida can perform the CNN function, or rather the CNN data can be adapted for use in Akida in a far more efficient manner than running the CNN process on a CPU.

The fact that they mention SNNs in passing does not of itself indicate that Apple are aware of Akida, as the most widely known SNN in 2018 would have been analog.

So, yes, Akida could be used to advantage in Apple's face recognition patent


Great! I'll ask @Dhm if he can email this information to Tim Cook (CEO of Apple). Just joking! He-he-he!😜
 
  • Haha
  • Like
  • Love
Reactions: 7 users
Interesting that CommSec states BRN is their number-one traded stock and then takes the drum-roll approach to releasing the video - engaging their audience to maximise viewing. It shows just how far BRN has come and how seriously they are now being taken. AIMO.

Yup, still NOT up that I can find? Anyone seen or posted it yet?
 
  • Like
  • Thinking
Reactions: 5 users
Was just reading the recent article below on just how many camera module units are shipping :oops:

I posted this NVISO image previously and would love to get a slice of this market.

IMG_20220527_093710.jpg



Smartphone camera module shipments will increase to 5.02 billion units in 2022
Thursday, May 26, 2022
Smartphone camera module shipments will increase to 5.02 billion units in 2022, registering an annual growth rate of 5%, according to TrendForce research. Since the price-performance ratio of whole devices is the primary basis for consumer purchases, the cost of high-standard solutions such as the five-camera design and main cameras sporting hundreds of millions of pixels will inevitably be passed on to the manufacturer with little improvement in sales performance.
Therefore, the three-camera module remains the mainstream design this year and is forecast to account for more than 40% of total shipments. Only some smartphone models will adopt a four-camera design to differentiate their specifications, while the number of products with dual-cameras or less will fall, with entry-level models being the primary candidates.
By combining a high-pixel main camera with two low-pixel function cameras, a mobile phone can retain a three-camera design while taking into account hardware costs. TrendForce believes that this is also the primary reason for the development of low-end and mid-range products towards a three-camera or even four-camera design in addition to the increased adoption of low-pixel function cameras including 2-megapixel depth cameras and macro cameras.
Growth momentum in mobile phone camera module shipments in 2022 will come primarily from additional numbers of low-pixel cameras prompted by the three-camera design. Although a high-resolution main camera with better specifications allows mobile phone brands to provide better photographic performance, pixel specifications have not continued to climb higher and mainstream cameras linger at approximately 50 million pixels, causing a slight stagnation in demand.
TrendForce indicates that mobile phone brands are currently curtailing competition in the hardware specifications of mobile phone camera modules but remain focused on photographic and video performance as promotional features of their mobile phones and will emphasize dynamic photography, night photography and other scenarios to highlight product advantages. This can be achieved not only by strengthening the optical performance of the camera module itself but also through algorithms and software, thereby increasing the enthusiasm of mobile phone brands to invest in self-developed chips.
In addition to Apple and Samsung, which have long used their own SoCs, other mobile phone brands have also tried to launch self-developed chips to enhance image processing performance such as Xiaomi’s Surging C1 and OPPO’s MariSilicon X and VIVO’s V1+.
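As a quick sanity check on TrendForce's figures, a 5% annual growth rate on 5.02 billion units implies a 2021 base of roughly 4.78 billion modules:

```python
# Back out the implied 2021 shipment base from TrendForce's figures:
# 5.02 billion units in 2022 at 5% year-on-year growth.
shipments_2022_bn = 5.02
growth = 0.05
implied_2021_bn = shipments_2022_bn / (1 + growth)
print(round(implied_2021_bn, 2))  # -> 4.78
```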
 
  • Like
  • Love
  • Fire
Reactions: 30 users

Wags

Regular

Attachments

  • Nvisidemo.JPG
    Nvisidemo.JPG
    3.2 MB · Views: 121
  • Like
  • Love
  • Fire
Reactions: 34 users

Attachments

  • 43FA127F-97DF-45F9-81C1-EC1301D255D0.jpeg
    43FA127F-97DF-45F9-81C1-EC1301D255D0.jpeg
    613.2 KB · Views: 30
  • Like
  • Fire
Reactions: 15 users

Xray1

Regular
Great post Tech.

I would like to elaborate on the time taken to bring a product to market though. Whilst I completely agree with your timeframe, I would like to point out that the timeframe could have commenced in 2020. The EAP saw a couple of dozen or so ”Tier 1” companies pay $50k or whatever it was to work with BrainChip to effectively test Akida in their applications. Now, as subjective as it is, I don’t think it’s unreasonable to assume that some of this testing would have occurred in real-world products, Mercedes being a good example. If this were the case, I think there’s potential for these companies to be further along in their journey to the commercialisation of these products than if they were to commence R&D, say, tomorrow. Obviously they will wait until the last possible moment to sign with BrainChip, as this will defer payment for longer. The EAP has given these companies a fantastic opportunity to implement Akida, test, and prep for market without having to provide further progress payments along the way. I think it’s possible that Akida could hit the shelves in some form sooner rather than later.

AIMO
Totally agree... IMO, the EAPs have been in the system for some time now (even years)... So, is this why Sean H keeps saying, “Watch the revenue side of things”?
 
  • Like
  • Fire
Reactions: 12 users

Fox151

Regular
I particularly liked Sean’s comment at the AGM when answering a question about competitors (paraphrasing) - choose competitors with NPUs based on traditional tech to eke out improvements of 5-10%, or choose Akida to get a 500-1000% performance improvement.

We don’t currently have any competition; the market will decide that we are the way they have to go to be competitive. AIMO. Akida ubiquitous. 😎
Akida ubiquitous? Who are you? Harry Potter?
 
  • Haha
  • Like
  • Fire
Reactions: 11 users

Attachments

  • 96D9481E-21FC-4819-A185-82AF1665F0B9.jpeg
    96D9481E-21FC-4819-A185-82AF1665F0B9.jpeg
    551.1 KB · Views: 52
  • Like
  • Fire
Reactions: 11 users

Xray1

Regular
Still trying to put the pieces together. We know from the AGM that BRN is now only an IP company. Originally the plan was to sell chips up to volumes of around 1 million; for amounts above this, BRN recommended going to an IP license. I could imagine that dealing with multiple smaller customers would put some strain on our engineering team. Better to partner with MegaChips and split the load for those customers.

All guesses from me, possibly way off the mark.
I still think the chips will have a place in certain circumstances... like hand-held medical units, for instance.
 
  • Like
Reactions: 2 users
 
  • Like
  • Fire
Reactions: 6 users

Reuben

Founding Member
Porsche playing with neural networks and constant learning -
For the more technical geniuses - here is the link to read more


1653618145324.png


1653618302135.png
 
  • Like
  • Love
Reactions: 8 users