BRN Discussion Ongoing

  • Like
  • Fire
Reactions: 46 users

miaeffect

Oat latte lover
I've always said that I will never sell my BRN shares, sorry but change of plans.
I will now sell only one share when the share price reaches US$500, buy a bottle of champagne for the same value, drink the bottle on my own then share it over the grave of a BRN shorter.
Whhhhaaat?? USD$500 per share?
I think you gotta hold another 5 years from now on........ long way
 
  • Like
Reactions: 3 users
AKIDA NET

Akida Net 5 doing MELANOMA classification with 98.31% accuracy.

This is a brand new talent not disclosed previously.


My opinion only DYOR
FF

AKIDA BALLISTA
AND THIS LITTLE BEAUTY:

ANOTHER STATE OF THE ART PERFORMANCE:

"DVS Gesture (Amir et al., 2017) is a well-known event-based dataset, comprising recordings of subjects performing gestures (clapping, waving, circular motions etc.) made with a DVS128 event-based camera. Here we present our approach to achieving state of the art performance on this dataset, using a MetaTF training pipeline."


My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
Reactions: 22 users

VictorG

Member
Whhhhaaat?? USD$500 per share?
I think you gotta hold another 5 years from now on........ long way

yeah I know, however revenge has no time limit. I will dance on their graves until then.
 
  • Haha
  • Like
Reactions: 2 users

Yak52

Regular
Did YOU have your Stop Loss Triggered this morning ????


Yak52
Today's drop in SP is from the 2,900,000 shorts taken out on Wednesday this week. Prior to this, short positions had dropped significantly leading into Tuesday.

Now the B0T is holding it down, accumulating. Whether its target is a volume number, a date/time or a stat on a chart remains to be seen.



Yak52

Yak52
 
  • Like
  • Fire
Reactions: 21 users

HopalongPetrovski

I'm Spartacus!
The weary, long-suffering Brainchip holder finally stumbles his way into Castle Akida after spotting the neuromorphic beacon atop yon highest tower... let's watch, shall we?

 
  • Like
  • Haha
  • Love
Reactions: 12 users

Murphy

Life is not a dress rehearsal!
It makes sense JK. After all, where else would all the sextillionaires hang out with each other, other than on TSEX?
T-Sex? Sounds like a fun evening activity post-dinner!😅😅


If you don't have dreams, you can't have dreams come true!
 
  • Like
  • Haha
Reactions: 6 users

IloveLamp

Top 20
On the theme of Renesas implementing AKIDA technology in MCUs: I have been excited by this prospect since Peter van der Made and Anil Mankar separately disclosed that this was where Renesas was heading. I have lacked the depth of technical knowledge to give a proper explanation of why this is so exciting. Just now on the Renesas thread I found the following.

This article is not about AKIDA technology, but it identifies the opportunity to be had in making trillions of MCUs smart, and offers a clever but convoluted old-school solution.

If you had a calculator that could do trillions, you could work out what 1 trillion times a 2-cent royalty would be for IP that makes an MCU smart, with every cent being profit to the owner of the IP:
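As a back-of-envelope check of that arithmetic (the trillion-unit volume and 2-cent royalty are the post's own illustrative figures, not company numbers):

```python
# Back-of-envelope royalty arithmetic (illustrative figures only)
royalty_per_unit = 0.02            # a 2-cent royalty, in dollars
units = 1_000_000_000_000          # one trillion MCUs
total = royalty_per_unit * units
print(f"${total:,.0f}")            # $20,000,000,000, i.e. US$20 billion
```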

Date 02/18/20

What is tinyML?

What is tinyML technology?

“We see a new world with trillions of intelligent devices enabled by tinyML technologies that sense, analyze, and autonomously act together to create a healthier and more sustainable environment for all.”

This is a quote by Evgeni Gousev, senior director at Qualcomm and co-chair of the tinyML Foundation, in his opening remarks at a recent conference.

Machine learning is a subset of artificial intelligence. tinyML is an abbreviation for "tiny machine learning" and means that machine learning algorithms are processed locally on embedded devices.

TinyML is very similar to Edge AI, but takes it one step further, making it possible to run machine learning models on the smallest microcontrollers (MCUs).

What is a microcontroller?


Embedded systems normally include a microcontroller. A microcontroller is a compact integrated circuit designed to govern a specific operation in an embedded system. A typical microcontroller includes a processor, memory and input/output (I/O) peripherals on a single chip.
Microcontrollers are cheap, with average selling prices under $0.50, and they're everywhere, embedded in consumer and industrial devices. At the same time, they don't have the resources found in generic computing devices. Most of them don't have an operating system.
They have a small CPU, are limited to a few hundred kilobytes of low-power memory (SRAM) and a few megabytes of storage, and don't have any networking gear. Most don't have a mains electricity source and must run on coin-cell batteries for years.

What are the advantages with tinyML?

There are a number of advantages to running tinyML machine learning models on embedded devices.

Low Latency: Since the machine learning model runs on the edge, the data doesn't have to be sent to a server to run inference. This reduces the latency of the output.
Low Power Consumption: Microcontrollers are ultra low power which means that they consume very little power. This enables them to run without being charged for a really long time.
Low Bandwidth: As the data doesn’t have to be sent to the server constantly, less internet bandwidth is used.
Privacy: Since the machine learning model is running on the edge, the data is not stored in the cloud.

What are the challenges with tinyML?

Deep learning models require a lot of memory and consume a lot of power. There have been multiple efforts to shrink deep learning models to a size that fits on tiny devices and embedded systems. Most of these efforts focus on reducing the number of parameters in the model. For example, "pruning," a popular class of optimization algorithms, compresses neural networks by removing the parameters that contribute least to the model's output.
The problem with pruning methods is that they don't address the memory bottleneck of neural networks. Standard implementations of embedded machine learning require an entire network layer and its activation maps to be loaded into memory. Unfortunately, classic optimization methods don't make big changes to the early layers of the network, especially in convolutional networks.
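The pruning idea described above can be sketched in a few lines of NumPy; `magnitude_prune` and the example weight matrix are invented for illustration, not taken from any real framework:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights (magnitude pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                  # how many weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0      # ties at the threshold are also zeroed
    return pruned

layer = np.array([[0.9, -0.05, 0.4],
                  [0.01, -0.7, 0.2]])
sparse_layer = magnitude_prune(layer, 0.5)         # zeroes the 3 smallest-magnitude weights
```

Note that this is exactly the limitation the article points out: the matrix keeps its original shape, so a naive implementation still loads the whole layer into memory unless a sparse storage format is used.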

What is quantization?

Quantization is a branch of computer science and data science concerned with the actual implementation of a neural network on a digital computer, e.g. tiny devices. Conceptually, we can think of it as approximating a continuous function with a discrete one.
Depending on the bit width of the chosen integers (64, 32, 16 or 8 bits) and the implementation, this can be done with more or fewer quantization errors. All modern PCs, smartphones, tablets and more powerful microcontrollers (MCUs) for embedded systems have a so-called floating-point unit, or FPU.
This is a piece of hardware next to, or integrated with, the main processor, whose purpose is to make floating-point operations fast. But the tiniest MCUs have no FPU, and instead one must resort to one of two options: transform the data to integer numbers and perform all the calculations with integer arithmetic, or perform the floating-point calculations in software.
The latter approach takes considerably more processor time, so to get good performance on these chips (e.g. Arm's M0-M3 cores), the former approach is preferred. If memory is scarce, which is usually the case on these devices, one can specify the integers to be 16- or 8-bit, thereby reducing the RAM and flash memory used by the program to a half or a quarter.
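A minimal sketch of the integer approach described above, using affine int8 quantization; the function names and example weights are illustrative, not from any particular library:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Affine quantization: map a float range onto the int8 range [-128, 127]."""
    qmin, qmax = -128, 127
    scale = float(x.max() - x.min()) / (qmax - qmin)   # size of one quantization step
    if scale == 0.0:
        scale = 1.0                                    # avoid division by zero for constant arrays
    zero_point = int(round(qmin - float(x.min()) / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover approximate float values from the int8 representation."""
    return scale * (q.astype(np.float32) - zero_point)

weights = np.array([-0.5, 0.0, 0.25, 0.5], dtype=np.float32)
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)
# the round-trip error is bounded by roughly one quantization step (the scale)
```

On an FPU-less MCU, only the int8 tensor and the (scale, zero_point) pair would be stored, and the arithmetic would be done in integers, using a quarter of the memory of 32-bit floats.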

What is tinyML software?

tinyML software typically means ultra-low-power applications that run on a physical hardware device. A large number of machine learning algorithms are used, but the trend is that deep learning is becoming more and more popular. People who develop tinyML applications are normally data scientists, machine learning engineers or embedded developers. Normally, the tinyML models are developed and trained in a cloud system. When the application runs on the hardware device, the model makes predictions on data of the kind it was trained on; this is called inference.
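The train-in-the-cloud, infer-on-device split can be illustrated with a toy dense layer; the weights and the sensor reading below are invented purely for illustration:

```python
import numpy as np

# Weights exported from a (hypothetical) cloud-trained model:
# 3 input features -> 2 output classes.
W = np.array([[0.2, -0.4],
              [0.5,  0.1],
              [-0.3, 0.7]])
b = np.array([0.05, -0.05])

def infer(features: np.ndarray) -> int:
    """On-device inference: one dense layer followed by argmax over class logits."""
    logits = features @ W + b
    return int(np.argmax(logits))

reading = np.array([0.9, 0.1, 0.3])   # e.g. features extracted from a sensor window
predicted_class = infer(reading)      # 0 or 1
```

The training (choosing W and b) happened elsewhere; the device only performs the cheap forward pass, which is what makes the approach feasible on an MCU.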

What is tinyML hardware?

tinyML can run on a range of hardware platforms, from Arm Cortex-M series processors to advanced neural processing devices. Deep learning normally requires more powerful hardware than other types of machine learning algorithms. IoT devices are an example of tinyML hardware. Many people use an Arduino board to demonstrate machine learning applications.

What is a tinyML framework?

A tinyML framework is a software platform that makes it easier for developers to build tinyML applications. TensorFlow Lite for Microcontrollers is one of the most popular embedded machine learning frameworks.

What is a tinyML development platform?

A tinyML development platform is an end-to-end software platform where users can develop tinyML applications: they start by collecting data, can use AutoML to develop the machine learning models, and eventually deploy the model on a tiny device. The main benefit of a tinyML platform is that it reduces time-to-market and lowers costs for the developer.

What are some examples of tinyML use cases?

The benefits of tinyML are huge, and the number of use cases for the technology is almost unlimited. Popular use cases include computer vision, visual wake words, keyword spotting, predictive maintenance of industrial machines, gesture recognition and many more.

Imagimob in tinyML

Imagimob offers Imagimob AI, a tinyML development platform that covers the end-to-end process from data collection to deploying AI models on a microcontroller board.”

My opinion only DYOR
FF

AKIDA BALLISTA
Could be relevant................


 
  • Like
Reactions: 3 users

JK200SX

Regular
THIS IS HUGE NEWS - WHY WAS I NOT TOLD - BECAUSE I AM JUST A RETAIL SHAREHOLDER - THIS IS ANOTHER PILLAR SUPPORTING
MY FUTURE GENERATIONAL WEALTH TRAJECTORY.

"AKIDA NET
Recently, we have developed a replacement for the popular MobileNet v1
model used as a backbone in many applications that we call AkidaNet. AkidaNet’s architecture utilizes the Akida hardware more efficiently. Some of our preliminary results are shown below for object classification, face recognition, and face detection. In many cases, switching from MobileNet v1 to AkidaNet results in a slight increase in speed and accuracy accompanied by a 15% to 30% decrease in power usage."

How mind blowingly important is a 30% decrease in already ludicrously low power usage at the Edge.

I think we need to start a list of those who want to purchase an EQXX.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Haha
  • Love
Reactions: 20 users

Murphy

Life is not a dress rehearsal!
I've always said that I will never sell my BRN shares, sorry but change of plans.
I will now sell only one share when the share price reaches US$500, buy a bottle of champagne for the same value, drink the bottle on my own then share it over the grave of a BRN shorter.
... after you have strained it through your kidneys, I hope!😁


If you don't have dreams, you can't have dreams come true!
 
  • Like
  • Haha
Reactions: 7 users
AKIDA NET

Akida Net 5 doing MELANOMA classification with 98.31% accuracy.

This is a brand new talent not disclosed previously.


My opinion only DYOR
FF

AKIDA BALLISTA
This is the most recent paper I could find on diagnostic accuracy for melanoma.
My son, who is in the UK, had a melanoma diagnosed and successfully removed last year. When speaking about it at Christmas, he said he was fortunate: most doctors in the UK have little experience with melanoma, but his doctor had worked in Australia.
98.31% accuracy is a very significant accomplishment based on the following paper and the other two I read from 2018.


My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
Reactions: 21 users

JK200SX

Regular
  • Like
  • Love
  • Fire
Reactions: 25 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Check this out crew!

The first article published 24 hours ago (which mentions Weebit) says "As interest in artificial intelligence (AI) and in-memory computing significantly increases, resistive random-access memory (ReRAM) could be the key to unlocking the ability to imitate the human brain".

Now I have been interested for some time in trying to establish the ways in which BRN and WBT could collaborate and this article seems to hit the nail on the head.

There's also a link in the article which says "synonymous with neuropathic computing" and this takes you to an article describing work done by researchers at Politecnico Milan. It states: "The university developed a hardware design that uses Weebit’s ReRAM to combine the efficiency of convolutional neural networks (CNNs) with the plasticity of brain-inspired spiking neural networks (SNN) to enable the hardware to learn new things without forgetting trained tasks of previously acquired information. In addition, the system adapts its operative frequency for power saving, enabling feasible solutions for lifelong learning in autonomous AI systems."

Well, to cut a long story short I tried to find a link between BRN, WBT and Politecnico Milan and came across a TMT Analytics Report entitled "Weebit Nano Limited" from Feb 2019 from which I think we can deduce a collaboration of sorts.









 
  • Like
  • Fire
Reactions: 39 users

Diogenese

Top 20
THIS IS HUGE NEWS - WHY WAS I NOT TOLD - BECAUSE I AM JUST A RETAIL SHAREHOLDER - THIS IS ANOTHER PILLAR SUPPORTING
MY FUTURE GENERATIONAL WEALTH TRAJECTORY.

"AKIDA NET
Recently, we have developed a replacement for the popular MobileNet v1
model used as a backbone in many applications that we call AkidaNet. AkidaNet’s architecture utilizes the Akida hardware more efficiently. Some of our preliminary results are shown below for object classification, face recognition, and face detection. In many cases, switching from MobileNet v1 to AkidaNet results in a slight increase in speed and accuracy accompanied by a 15% to 30% decrease in power usage."

How mind blowingly important is a 30% decrease in already ludicrously low power usage at the Edge.

I think we need to start a list of those who want to purchase an EQXX.

My opinion only DYOR
FF

AKIDA BALLISTA
Just guessing, but MobileNet v1 was not designed for an on-chip, one-shot-learning digital SNN, so it probably has a lot of redundant data, such as front, back, side, underneath and plan views. Akida probably only needs generic "quadruped/vehicle/person/..." classes with a few all-round shots under each generic heading, and Akida can then learn specific examples of each class as required. So KC & co have probably trained AkidaNet on several samples of each class in a manner more suited to Akida.
 
  • Like
  • Love
  • Fire
Reactions: 12 users

Diogenese

Top 20
AND THIS LITTLE BEAUTY:

ANOTHER STATE OF THE ART PERFORMANCE:

"DVS Gesture (Amir et al., 2017) is a well-known event-based dataset, comprising recordings of subjects performing gestures (clapping, waving, circular motions etc.) made with a DVS128 event-based camera. Here we present our approach to achieving state of the art performance on this dataset, using a MetaTF training pipeline."


My opinion only DYOR
FF

AKIDA BALLISTA
You do know that you're breaching the total fire ban?!
 
  • Haha
  • Like
  • Fire
Reactions: 22 users

Diogenese

Top 20
Today's drop in SP is from the 2,900,000 shorts taken out on Wednesday this week. Prior to this, short positions had dropped significantly leading into Tuesday.

Now the B0T is holding it down, accumulating. Whether its target is a volume number, a date/time or a stat on a chart remains to be seen.


Yak52

Yak52


 
  • Haha
  • Like
  • Fire
Reactions: 8 users
You do know that you're breaching the total fire ban?!
Nothing better than sitting around a campfire with the autumn chill in the air, the clear night sky illuminated by the Southern Cross, and a glass of your favourite red, total fire ban or not.

FF.
 
  • Like
  • Haha
  • Love
Reactions: 12 users
Check this out crew!

The first article published 24 hours ago (which mentions Weebit) says "As interest in artificial intelligence (AI) and in-memory computing significantly increases, resistive random-access memory (ReRAM) could be the key to unlocking the ability to imitate the human brain".

Now I have been interested for some time in trying to establish the ways in which BRN and WBT could collaborate and this article seems to hit the nail on the head.

There's also a link in the article which says "synonymous with neuropathic computing" and this takes you to an article describing work done by researchers at Politecnico Milan. It states" The university developed a hardware design that uses Weebit’s ReRAM to combine the efficiency of convolutional neural networks (CNNs) with the plasticity of brain-inspired spiking neural networks (SNN) to enable the hardware to learn new things without forgetting trained tasks of previously acquired information. In addition, the system adapts its operative frequency for power saving, enabling feasible solutions for lifelong learning in autonomous AI systems."

Well, to cut a long story short I tried to find a link between BRN, WBT and Politecnico Milan and came across a TMT Analytics Report entitled "Weebit Nano Limited" from Feb 2019 from which I think we can deduce a collaboration of sorts.








Hi @Diogenese
Am I correct to say that, unless Peter van der Made and Anil Mankar have walked on water yet again, they have probably had to increase the memory used by AKD2000 to achieve high-performance LSTM?

Asking dumb questions comes easily when you have had a lengthy career in the law. LOL
FF.
 
  • Like
  • Haha
Reactions: 11 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hi @Diogenese
Am I correct to say that, unless Peter van der Made and Anil Mankar have walked on water yet again, they have probably had to increase the memory used by AKD2000 to achieve high-performance LSTM?

Asking dumb questions comes easily when you have had a lengthy career in the law. LOL
FF.


Tell me about it FF! You should hear how many dumb questions foot models ask each other! We may have beautiful feet, but I'll be the first to admit, we're generally not the sharpest tools in the shed.
 
  • Haha
  • Like
  • Fire
Reactions: 15 users

VictorG

Member
Tell me about it FF! You should hear how many dumb questions foot models ask each other! We may have beautiful feet, but I'll be the first to admit, we're generally not the sharpest tools in the shed.
I hear ya sista, if it wasn't for my good looks I would've starved to death long ago.
 
  • Haha
  • Like
Reactions: 14 users