Hi @Zedjack33

sean hehir - Google Search
www.google.com.au

Pretty cool
This is also a must read. FF
I may have missed it, but has the Commsec interview with Sean gone live yet?

Not that I have seen yet. FF
Thanks FF - was having a hunt around this morning. Long, long time holder here & long time watcher of HC and TSE.
Watched it twice, once for the drone and once for the performance specs to the right, whole commands using less than a milliwatt.
Wow !!
“We are coming to get you”

Good Morning all.
Not a chartist by any means, but this is my take on it.
View attachment 7737
We formed the base from late Feb to early May, went above the SMA early May which attracted a few eyes which you can see with the increased volume. I am not really worried about yesterday's dip.
Not saying those who dumped around the AGM to buy back won't do it again, but when traders play their games, don't let it be a distraction for you. Not everyone can do it, nor are they 100% successful when they do. You just hear from them ONLY when they're successful. Big difference.
If you are one of those who bought around $2.00 or above and your financial situation permits, sit tight, we are coming to get you!
Morning fellow BRNers, I was told by a Brainchip exec that a vehicle has as many as 300 chips!
Does anyone know what that could mean for royalty calculations in relation to an IP licence within these x number of cars?
@Esq.111 I think you have a good calculator. Anyone else, please care to help me out in my fogged brain.
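Not financial advice, but for anyone who wants to play with the numbers, here's a rough back-of-envelope sketch in Python. Every figure in it (cars shipped, Akida share, royalty per chip) is a made-up placeholder for illustration; only the "~300 chips per vehicle" comes from the post above.

```python
# Hypothetical royalty estimate -- all rates and shares below are
# illustrative assumptions, not figures from BrainChip.
def royalty_per_year(cars_per_year, chips_per_car, akida_share, royalty_per_chip):
    """Estimate annual IP royalties.

    cars_per_year    -- vehicles shipped per year (assumed)
    chips_per_car    -- total chips per vehicle (~300 per the post above)
    akida_share      -- assumed fraction of those chips carrying Akida IP
    royalty_per_chip -- assumed royalty in dollars per Akida-enabled chip
    """
    return cars_per_year * chips_per_car * akida_share * royalty_per_chip

# Example: 1,000,000 cars, 300 chips each, 5% of chips with Akida IP,
# $0.50 royalty per chip -> $7,500,000 per year.
print(royalty_per_year(1_000_000, 300, 0.05, 0.50))
```

Plug in your own assumptions; the answer scales linearly with every input, so the real unknowns are the Akida share and the per-chip royalty rate.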
Thanks for doing this. I tried to do it for @Zedjack but could not, as I know many do not like to open links. Obviously I failed, but I will say again: it is a must watch.
FF
AKIDA BALLISTA
PS: Imagine lying by the pool on your superyacht, saying "beer please", and your personal drone flying off to get the beer, bringing it back and placing it in your hand.
A bit of activity to catch up on across networks with the AGM and the last couple of partner deals.
Thanks Jk!

MD, FYI, not the best photo, but it's from a Ford Falcon instrument cluster in a test stand at Continental, where I've circled 5 chips. If AKIDA were deployed in an application like this, it may only go in one of the chips. However, if the manufacturers introduce (for example) eyesight-detection sensing or cameras, there may be other chips that can be used on such a PCB.
View attachment 7749
Very cool
Diogenese, this is a great summary in layman's terms that I was looking for. @zeeb0t I think this is worthy of a sticky at the top of BRN for all new readers to cast an eye over, maybe like a Tech 101 thread.

Hi JK,
Akida's forte is identifying (classifying) input signals from sensors. In simple applications this may be sufficient to trigger a direct response or action.
However, in some cases the Akida output is used as an input to another CPU/GPU (von Neumann processor) to form part of that computer's program variables.
In the first case, the entire process gets the full power saving/speed improvement from Akida.
In the second case, the benefit is the reduction in power and time which Akida brings to the classification task, while the CPU performs the remaining processes under the control of its software program. This is important because a classification task carried out on a software-controlled CPU uses very large amounts of power and takes a relatively long time.
Classification of an image on a CPU uses CNN (convolutional neural network) processes, which involve multiplying multi-bit (8-, 16-, 32- or 64-bit) values representing each pixel on the sensor. Multiplication involves a number of computer operations determined by the square of the number of bits, so an 8-bit multiplication involves 64 computer operations. For 32-bit values, 1024 operations are required to process the output from a single pixel, whether its value has changed or not.
On the other hand, Akida ignores pixels whose output value does not change, and only performs a computer operation for the pixels whose output changes (an event). This is "sparsity". In addition, in 1-bit mode there is only a single computer operation for each pixel event.
For example, the sparsity may reduce the number of events by, say, 40%.
Even in 4-bit mode, Akida only needs 16 computer operations, and that only for pixels whose output has changed.
Hence there are large savings in power and time in using Akida to do the classification task compared to using, e.g., a 32-bit ARM Cortex microprocessor.
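To put rough numbers on the comparison above, here is a toy Python sketch of the operation counts Diogenese describes. The VGA frame size and the "events reduced by ~40%" sparsity figure are illustrative assumptions from the post, not Akida benchmarks.

```python
# Rough operation-count comparison.
# Model: a multi-bit multiply costs ~bits**2 primitive operations; the CPU
# processes every pixel, while Akida only processes pixels that changed (events).

def cpu_ops(pixels, bits=32):
    # CPU/CNN path: every pixel is multiplied, whether its value changed or not.
    return pixels * bits ** 2

def akida_ops(pixels, event_fraction=0.6, bits=1):
    # Akida path: only changed pixels (events) cost anything.
    # event_fraction=0.6 reflects "sparsity may reduce events by ~40%".
    events = int(pixels * event_fraction)
    return events * bits ** 2

pixels = 640 * 480  # e.g. one VGA frame (assumed for illustration)

print(cpu_ops(pixels))            # 32-bit CPU: 1024 ops per pixel
print(akida_ops(pixels))          # 1-bit Akida: 1 op per event
print(akida_ops(pixels, bits=4))  # 4-bit Akida: 16 ops per event
```

Even in 4-bit mode with only 40% sparsity, the toy model shows a roughly 100-fold reduction in operations versus the 32-bit CPU path, which is where the power and latency savings come from.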
While the rest of the program may be carried out on the microprocessor, this uses comparatively little power compared to the power the microprocessor would have used performing the CNN task. So there are still large power savings to be made by using Akida in "accelerator" mode as an input device for a von Neumann processor.
The other point is that Akida performs its classification independently of any processor with which it is associated. For example, Akida 1000 includes an ARM Cortex processor, but this is only used to configure the arrangement of the NPU nodes to optimize performance for the particular task; the ARM Cortex plays no part in the actual classification task. The ARM Cortex does not form part of the Akida IP. Akida is "processor-agnostic" and can operate with any CPU.