BRN Discussion Ongoing

charles2

Regular
View attachment 33737
Couldn't find the link for the post. Sometimes it's like those tweets that flash on the screen and then disappear into the nether regions of cyberspace.

If Dr Hebb could see us now.​

Did you ever ask yourself where the concept of SNN originated at the biological level...​


D.O. Hebb, PhD, is often referred to as the father of neuropsychology. His research efforts were directed towards giving a biological explanation of psychological processes such as learning.

The Organization of Behavior: Hebbian Theory

Little did I know I was in the presence of a pioneering genius when I took his course at McGill University in 1966. I just thought it was Physiological Psychology 101.

Canadian Association for Neuroscience

Donald Olding Hebb​

by Sarah Ferguson
Donald Hebb
Donald Hebb
Donald Hebb (1904-1985) is often considered the “father of neuropsychology” because of the way he was able to merge the psychological world with the world of neuroscience. This achievement was accomplished largely through his work The Organization of Behavior: A Neuropsychological Theory which was published in 1949.

Beginnings

Donald Olding Hebb was born on July 22, 1904 in Chester, Nova Scotia where he lived until his family moved to Dartmouth when he was 16. He was homeschooled by his mother until he was eight years old. He was placed in the seventh grade at the young age of 10, but in high school, Hebb was not impressed with policy or authority and failed Grade 11 the first time. He was able to graduate from high school and went to Dalhousie University with the desire to become a writer, receiving his BA in 1925.
Hebb then became a teacher. He came to McGill in 1928 as a part-time graduate student in psychology while also working as headmaster at a Montreal school. He received his Master’s in 1932.

Departure from – and return to – Montreal

Two years later, Hebb was looking for a change of scenery due to both his frustrations with the limitations imposed by the Quebec curriculum on his job as headmaster and the direction of psychology research in the McGill department at the time – Hebb was more interested in the physiology of psychology. He was accepted to do a PhD with famous behaviourist Karl Lashley at the University of Chicago in 1934, moving with him to Harvard the following year and completing his thesis there in 1936. Working with Lashley provided Hebb with the opportunity to study learning and spatial orientation with an emphasis on the neurobiological aspect, which was where his interests lay. It was at Harvard that this interest in the development of neural networks started to flourish.
He then returned to Montreal – armed with a PhD – to work with Wilder Penfield at the Montreal Neurological Institute in 1937. There, he researched the impact of surgeries and head injuries on brain functioning. He developed new tests for brain surgery patients to test their functioning post-operation – the Adult Comprehension Test and the Picture Anomaly Test – that measured specific functions as opposed to the typical overall intelligence tests that were being used.
In 1939 he again left Montreal, this time for eight years to first teach at Queen’s University and then to reunite with Lashley at the Yerkes National Primate Research Center. Hebb returned to McGill as a psychology professor in 1947 where he remained until 1974 – serving as Chancellor of the university from 1970-74.

The Organization of Behavior: Hebbian Theory

Hebb’s major contribution to the fields of both neuroscience and psychology was bringing the two together. Published in 1949, The Organization of Behavior: A Neuropsychological Theory is the book in which Hebb outlined his theory about how learning is accomplished within the brain. Perhaps the most well-known part of this work is what has become known as the Hebbian Theory or cell assembly theory.
The Hebbian theory aims to explain how neural pathways are developed based on experiences. As certain connections are used more frequently, they become stronger and faster. This hypothesis is perhaps best described by the following passage from The Organization of Behavior:
“When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”
So, cell A and cell B are located near each other. As cell A is repeatedly involved in the firing of cell B by exciting it, a change occurs in one of or both the cells. This change improves how effective cell A is at contributing to the firing of cell B. The two cells become associated with each other. This is often described as “cells that fire together, wire together.”
Hebbian theory provides a micro, physiological mechanism for the learning and memory processes. This theory has also been extended to computational machines that model biological processes and in artificial intelligence.
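Hebb's rule reduces to a one-line update. The sketch below is purely illustrative (the learning rate, cell layout, and firing pattern are made up), but it shows the mechanism: a weight grows only where presynaptic and postsynaptic activity coincide.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    """Hebb's rule: strengthen w[j, i] when pre-cell i and post-cell j fire together."""
    return w + lr * np.outer(post, pre)

# Two presynaptic cells feeding one postsynaptic cell B.
w = np.zeros((1, 2))
pre = np.array([1.0, 0.0])   # cell A fires; a second presynaptic cell stays silent
post = np.array([1.0])       # cell B fires
for _ in range(5):           # repeated co-activation of A and B
    w = hebbian_update(w, pre, post)
print(w)                     # only the A-to-B weight has grown
```

Only the connection from the co-active cell is strengthened, which is exactly the "cells that fire together, wire together" summary above.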

The Organization of Behavior: Bringing Two Groups Together

First and foremost, the purpose of The Organization of Behavior was to illustrate Hebb’s theory about behaviour. It also played a huge role in merging the fields of psychology and neuroscience. Hebb states this goal in the introduction to the book:
“Another [goal] is to seek a common ground with the anatomist, physiologist, and neurologist, to show them how psychological theory relates to their problems and at the same time to make it more possible for them to contribute to that theory.”
Hebb saw a need for psychology to work with neurology and physiology to be able to explain human behaviour in a more objective manner. In this way, the more abstract “mind” that psychology tended to focus on was merged with the physical, biological brain function. Hebb argued that this approach was necessary if psychology was going to be viewed as a scientific discipline.
The field of neuropsychology perseveres under the umbrella of both neuroscience and psychology, aiming to explore how behaviour correlates with brain function.

Awards and Honours

1966: elected as a Fellow of the Royal Society
1980: The Donald O. Hebb Award was created and named in his honour, with Hebb as the first recipient. It is awarded annually to a member or fellow of the Canadian Psychological Association for their contribution to Canadian psychology as a science.
2003: Posthumously inducted into the Canadian Medical Hall of Fame
Other Resources
Hebb, Donald O. (1949), The Organization of Behavior: A Neuropsychological Theory. New York: Wiley & Sons.
Hebb, D. O. (1959) A neuropsychological theory. In S. Koch (Ed), Psychology: A Study of a Science. Vol 1. New York: McGraw-Hill
Hebb, D. O. (1980) Essay on Mind. Hillsdale, NJ: Erlbaum





 
  • Like
Reactions: 3 users

Antman0505

Emerged
This is another example of how AI will be a gamechanger in the medical field:



I can't see any evidence of Brainchip's involvement but this would seem to be a perfect use case for Akida tech.

The company is Fujifilm Sonosite.
 
  • Like
  • Thinking
Reactions: 9 users

cosors

👀
Bless me father for I have sinned, I was meant to buy a 2ltr milk but couldn't resist buying 6 more BRN shares at this ridiculous low price.
I bought a euro pallet of milk.
 
  • Haha
Reactions: 3 users

Boab

I wish I could paint like Vincent
So say if BRN were to capture 1% of market, then US $800 million. That would be US $444, so say PE of 20 = US $8888 then convert to AUS =$13226 per share. Just saying.....
1,000 million is a billion, so the article is confusing: the headline doesn't match what it says within the article.

The Global Neuromorphic Computing Market Size is to grow from USD 31.2 million in 2021 to USD 8,275.9 million by 2030, at a Compound Annual Growth Rate (CAGR) of 85.73%
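A quick sanity check on the article's own figure (nothing more than unit conversion) confirms the 2030 number is billions, not trillions:

```python
market_2030_musd = 8275.9                     # USD million, per the article
market_2030_busd = market_2030_musd / 1000    # convert millions to billions
one_percent_musd = market_2030_musd * 0.01    # a 1% capture of that market
print(f"{market_2030_busd:.2f} billion USD; 1% = {one_percent_musd:.1f} million USD")
```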
 
  • Like
Reactions: 5 users

Rskiff

Regular
So say if BRN were to capture 1% of market, then US $800 million. That would be US $444, so say PE of 20 = US $8888 then convert to AUS =$13226 per share. Just saying.....
As @FrederikSchack pointed out, 1% is 80 billion not 800 million, so I guess we have to multiply by 10 times more, so $132,260 per share; sorry to everyone for undervaluing the worth :ROFLMAO: Edit: as @Boab has pointed out, it's in millions :( Recalculating puts it in a more realistic perspective!
 
Last edited:
  • Haha
  • Like
Reactions: 18 users
1,000 million is a billion, so the article is confusing: the headline doesn't match what it says within the article.

The Global Neuromorphic Computing Market Size is to grow from USD 31.2 million in 2021 to USD 8,275.9 million by 2030, at a Compound Annual Growth Rate (CAGR) of 85.73%
Thanks, I was too quick off the mark, embarrassing error.
 
  • Like
  • Love
Reactions: 5 users

jtardif999

Regular
I guess the key to the Renesas deal is who the chip was taped out for.
Was it produced for BRN to warehouse and sell?
Did Renesas produce it for themselves to sell?
Were the chips produced for a genuine third party who has clients to sell to? They would not get Renesas to tape the chip out on a wing and a prayer. They would have done their research and have clients to sell to.
So I guess who the chips were for is the key to revenue this quarter. It might not be much, but it would be a start.
One thing is a certainty: the chips would not have been taped out for the office rubbish bin.
Renesas took a licence from BrainChip so they could sell a chip they have created with Akida inside to their customers. I don’t think they would have taped out the chip unless they had at least one customer in line. Further to my previous post, the chip they’ve taped out is stated to be ISO compliant, so there’s a fair chance that at least one customer is from the automotive industry. AIMO.
 
  • Like
  • Fire
  • Love
Reactions: 27 users
Renesas took a licence from BrainChip so they could sell a chip they have created with Akida inside to their customers. I don’t think they would have taped out the chip unless they had at least one customer in line. Further to my previous post, the chip they’ve taped out is stated to be ISO compliant, so there’s a fair chance that at least one customer is from the automotive industry. AIMO.
That should make the Scarlet Pimpernel, or whatever his name is on LinkedIn, happy when that becomes available.
 
  • Like
Reactions: 3 users
This just popped up, and may mean something to someone......



 
  • Like
  • Thinking
  • Fire
Reactions: 26 users
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm.

How we are pushing the boundaries of integer quantization and making it possible at scale.​

Artificial intelligence (AI) is enhancing our daily lives, whether it’s improving collaboration, providing entertainment, or transforming industries through autonomous vehicles, digital health, smart factories, and more. However, these AI-driven benefits come at a high energy consumption cost. To put things in perspective, it is expected that by 2025 deep neural networks will have reached 100 trillion weight parameters, which is similar to the capacity of the human brain, or at least the number of synapses in the brain.

Why quantization continues to be important for edge AI​

To make our vision of intelligent devices possible, AI inference needs to run efficiently on the device — whether a smartphone, car, robot, drone, or any other machine. That is why Qualcomm AI Research continues to work on holistic model efficiency research, which covers areas such as quantization, neural architecture search, compilation, and conditional compute. Quantization is particularly important because it allows for automated reduction of weights and activations to improve power efficiency and performance while maintaining accuracy. Based on how power is consumed in silicon, we see up to a 16X increase in performance per watt and a 4X decrease in memory bandwidth by going from 32-bit floating point (FP32) to 8-bit integer quantization (INT8), which equates to a lot of power savings and/or increased performance. Our webinar explores the latest findings in quantization.
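The 4X memory-bandwidth figure follows directly from the bit-widths; this back-of-the-envelope calculation (the parameter count is chosen arbitrarily) just restates it:

```python
def model_size_bytes(n_params, bits):
    """Storage needed for a model's weights at a given bit-width."""
    return n_params * bits // 8

n = 100_000_000                   # a hypothetical 100M-parameter network
fp32 = model_size_bytes(n, 32)    # 400 MB of weights at FP32
int8 = model_size_bytes(n, 8)     # 100 MB of weights at INT8
print(fp32 / int8)                # 4.0: the 4X reduction in data moved
```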

AI engineers and developers can opt for post-training quantization (PTQ) and/or quantization-aware training (QAT) to optimize and then deploy their models efficiently on the device. PTQ consists of taking a pre-trained FP32 model and optimizing it directly into a fixed-point network. This is a straightforward process but may yield lower accuracy than QAT. QAT requires training/fine-tuning the network with the simulated quantization operations in place, but yields higher accuracy, especially for lower bit-widths and certain AI models.
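A minimal sketch of what the "simulated quantization operations" look like, using a plain symmetric uniform quantizer. This illustrates the general technique, not Qualcomm's AIMET implementation: values are scaled onto an integer grid, rounded, and scaled back, so QAT training (or PTQ calibration) sees the rounding error.

```python
import numpy as np

def fake_quant(x, bits=8):
    """Simulate symmetric fixed-point quantization: scale, round, de-scale."""
    qmax = 2 ** (bits - 1) - 1            # 127 for INT8
    scale = np.abs(x).max() / qmax        # map the largest value onto the grid edge
    return np.round(x / scale) * scale    # snap every value to the integer grid

w = np.array([0.02, -0.5, 1.27, -1.0])
print(fake_quant(w, bits=8))              # close to w: the error is at most scale / 2
```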



Integer quantization is the way to do AI inference​

For the same numbers of bits, integers and floating-point formats have the same number of values but different distributions. Many assume that floating point is more accurate than integer, but it really depends on the distribution of the model. For example, FP8 is better at representing distributions with outliers than INT8, but it is less practical as it consumes more power — also note that for distributions without outliers, INT8 provides a better representation. Furthermore, there are multiple FP8 formats in existence, rather than a one-size-fits-all. Finding the best FP8 format and supporting it in hardware comes at a cost. Crucially, with QAT the benefits offered by FP8 in outlier-heavy distributions are mitigated, and the accuracy of FP8 and INT8 becomes similar — refer to our NeurIPS 2022 paper for more details. In the case of INT16 and FP16, PTQ mitigates the accuracy differences, and INT16 outperforms FP16.
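A toy experiment (not from the blog) makes the outlier point concrete on the integer side: with symmetric INT8 quantization, one large outlier stretches the scale, so every small value lands on a coarser grid and the overall error jumps.

```python
import numpy as np

def int8_mse(x):
    """Mean squared error after symmetric INT8 quantize/dequantize."""
    scale = np.abs(x).max() / 127
    q = np.clip(np.round(x / scale), -127, 127) * scale
    return float(np.mean((x - q) ** 2))

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, 10_000)          # a well-behaved distribution
with_outlier = np.append(normal, 50.0)     # the same data plus one large outlier
print(int8_mse(normal), int8_mse(with_outlier))  # the outlier inflates the error
```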



Making 4-bit integer quantization possible​

The latest findings by Qualcomm AI Research go one step further and address another issue harming the model accuracy: oscillating weights when training a model. In our ICML 2022 accepted paper, “Overcoming Oscillations in Quantization-Aware Training,” we show that higher oscillation frequencies during QAT negatively impact accuracy, but it turns out that this issue can be solved with two state-of-the-art (SOTA) methods called oscillation dampening and iterative freezing. SOTA accuracy results are achieved for 4-bit and 3-bit quantization thanks to the two novel methods.
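As a loose illustration of the iterative-freezing idea (a deliberately simplified toy, not the method from the ICML paper): watch how often a weight's quantized value flips between neighbouring grid points during training, and pin it to one grid point once it is clearly oscillating.

```python
import numpy as np

def quantize(w, step=0.1):
    """Snap a latent weight to the nearest point on a uniform grid."""
    return np.round(w / step) * step

latent, frozen, flips, prev_q = 0.04, None, 0, None
for t in range(20):
    if frozen is not None:
        q = frozen                      # frozen: the quantized value no longer moves
    else:
        latent += 0.02 * (-1) ** t      # toy gradient whose sign alternates each step
        q = quantize(latent)
        if prev_q is not None and q != prev_q:
            flips += 1                  # quantized value crossed the decision boundary
        prev_q = q
        if flips >= 4:                  # oscillation detected: freeze the weight
            frozen = q
print(flips, frozen)
```

The latent weight keeps bouncing across the 0.05 decision boundary, so its quantized value oscillates between 0.0 and 0.1 until it is frozen.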

Enabling the ecosystem with open-source quantization tools​

Now, AI engineers and developers can efficiently implement their machine learning models on-device with our open-source tools AI Model Efficiency Toolkit (AIMET) and AIMET Model Zoo. AIMET features and APIs are easy to use and can be seamlessly integrated into the AI model development workflow. What does that mean?

  • SOTA quantization and compression tools
  • Support for both TensorFlow and PyTorch
  • Benchmarks and tests for many models
AIMET enables accurate integer quantization for a wide range of use cases and model types, pushing weight bit-width precision all the way down to 4-bit.



In addition to AIMET, Qualcomm Innovation Center has also made the AIMET Model Zoo available: a collection of popular pre-trained models optimized for 8-bit inference, spanning categories from image classification, object detection, and semantic segmentation to pose estimation and super resolution. AIMET Model Zoo’s quantized models are created using the same optimization techniques found in AIMET. Together with the models, AIMET Model Zoo also provides the recipe for quantizing popular 32-bit floating point (FP32) models to 8-bit integer (INT8) models with little loss in accuracy. In the future, AIMET Model Zoo will provide more models covering new use cases and architectures, and will enable models with 4-bit weight quantization via further optimizations. AIMET Model Zoo makes it as simple as possible for an AI developer to grab an accurate quantized model for better performance, latency, and performance per watt without investing a lot of time.



Model efficiency is key for enabling on-device AI and accelerating the growth of the connected intelligent edge. For this purpose, integer quantization is the optimal solution. Qualcomm AI Research is enabling 4-bit quantization without loss in accuracy, and AIMET makes this technology available at scale.

Tijmen Blankevoort
Director, Engineering, Qualcomm Technologies Netherlands B.V.

Chirag Patel
Engineer, Principal/Mgr., Qualcomm Technologies
Can you repost that in English🤯
 

Xray1

Regular
In Dec'20 BRN and Renesas struck a deal.
  • Firstly, the company has signed an intellectual property licensing agreement with major semiconductor manufacturer Renesas Electronics America
  • Under the agreement, Renesas will gain a single-use, royalty-bearing, worldwide design license to use the Akida IP

On Dec 2nd 2022 BRN reported that Renesas was Taping out a Chip using BRN Tech.

Renesas had basically two years to work with Akida: testing, talking to clients, etc.
There is no way Renesas would have produced the chip just for it to gather dust.
It's a certainty they would have done their tech research and worked with clients over the two-year period.
No company would produce a product unless it was certain of demand and/or advance orders.
IMO the chip was produced for sale simply because that is what Renesas do. No different than Ford making cars to sell.
No dot joining there.
I expect to see revenue start in the 1st quarter. The product sales will appear in Renesas' accounts; the royalty in BRN's accounts.
When and how much depends on the timing of Renesas sales and Royalty payments.
As the sales belong to Renesas BRN cannot make an announcement.
I doubt the ASX will allow an announcement when the first royalty dollars come in, simply because the royalty flows from Renesas sales, over which BRN has no control, and therefore BRN would not be able to make a reliable estimate.
Does this make sense or am i missing something??
Happy to be corrected.
From memory... I think Renesas purchased only the "2 nodes", so most likely, IMO, for whitegoods sales??!!!
 
  • Like
Reactions: 2 users

Violin1

Regular
In Dec'20 BRN and Renesas struck a deal.
  • Firstly, the company has signed an intellectual property licensing agreement with major semiconductor manufacturer Renesas Electronics America
  • Under the agreement, Renesas will gain a single-use, royalty-bearing, worldwide design license to use the Akida IP

On Dec 2nd 2022 BRN reported that Renesas was Taping out a Chip using BRN Tech.

Renesas had basically two years to work with Akida: testing, talking to clients, etc.
There is no way Renesas would have produced the chip just for it to gather dust.
It's a certainty they would have done their tech research and worked with clients over the two-year period.
No company would produce a product unless it was certain of demand and/or advance orders.
IMO the chip was produced for sale simply because that is what Renesas do. No different than Ford making cars to sell.
No dot joining there.
I expect to see revenue start in the 1st quarter. The product sales will appear in Renesas' accounts; the royalty in BRN's accounts.
When and how much depends on the timing of Renesas sales and Royalty payments.
As the sales belong to Renesas BRN cannot make an announcement.
I doubt the ASX will allow an announcement when the first royalty dollars come in, simply because the royalty flows from Renesas sales, over which BRN has no control, and therefore BRN would not be able to make a reliable estimate.
Does this make sense or am i missing something??
Happy to be corrected.
Hi @manny100
I'm not sure this will mean revenue in the first quarter though. If they started taping out in Dec 2022, I think one of the things we've learnt over the past few years is that taping out, testing, production, validation, etc. all take time. It therefore depends on when the royalty becomes due to BRN, whether invoices are then prepared, and the terms of trade, none of which we are privy to. So I'm thinking the last half of calendar 2023 rather than the past or next few months.

I'd rather you be correct though!

Just my thoughts.
 
  • Like
  • Fire
Reactions: 21 users

Steve10

Regular
Hi @manny100
I'm not sure this will mean revenue in the first quarter though. If they started taping out in Dec 2022, I think one of the things we've learnt over the past few years is that taping out, testing, production, validation, etc. all take time. It therefore depends on when the royalty becomes due to BRN, whether invoices are then prepared, and the terms of trade, none of which we are privy to. So I'm thinking the last half of calendar 2023 rather than the past or next few months.

I'd rather you be correct though!

Just my thoughts.

Renesas mentioned in the December 2, 2022 announcement:

"We are working with a third party taping out a device in December on 22nm CMOS,” said Chittipeddi.

BrainChip mentioned in the January 29, 2023 announcement:

today announced that it has achieved tape out of its AKD1500 reference design on GlobalFoundries’ 22nm fully depleted silicon-on-insulator (FD-SOI) technology.

Neither company has announced completed chips yet, and testing is still required.

Wafer fabrication for AKD1000 commenced in April 2020 and was completed in July 2020 = 3 months. Testing/validation was then completed by September 2020 = 2 months.

From tape out completed to tested/validated chip = 5 months.

So if tape out for AKD1500 was completed late January 2023 + 3 months = late April 2023 news of chip manufactured.

Then another 2 months for testing/validation = late June 2023 news.

No news as to when Renesas completed their tape out.
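The timeline above in code form, using the AKD1000 precedent for the month offsets (these are the post's estimates, not announced dates):

```python
from datetime import date

def add_months(d, months):
    """Shift a date forward by whole months, keeping the day of month."""
    m = d.month - 1 + months
    return date(d.year + m // 12, m % 12 + 1, d.day)

tape_out = date(2023, 1, 29)           # AKD1500 tape-out announced late January 2023
chip_done = add_months(tape_out, 3)    # ~3 months wafer fabrication, per AKD1000
validated = add_months(chip_done, 2)   # ~2 months testing/validation, per AKD1000
print(chip_done, validated)            # late April and late June 2023
```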
 
  • Like
  • Fire
  • Love
Reactions: 39 users

Steve10

Regular
By the end of April, or another 14 trading days from today, news is due on AKD1500 chip production completion.

Just in time for AGM on 23rd May.
 
  • Like
Reactions: 22 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Some key points that stand out to me. Apologies if you already posted this info @Bravo , I must have missed it!

"While the consumer experience driven by our technology is simple, there’s a great amount of complexity behind the scenes. The experience is facilitated by state-of-the-art computer vision (CV), sensor fusion, and deep learning algorithms made possible by several pieces of hardware and a system of in-store and cloud microservices that we’ve designed."

What hardware powers the Just Walk Out technology shopping experience?


"Once consumers enter the store, our technology relies on in-house-designed cameras to identify what products consumers take off or put back on the shelves. Each of our cameras has high resolution and a wide field of view, allowing us to install the fewest number of cameras possible. The reduced number of cameras makes the technology cost effective. We run CV algorithms directly on the camera to process data locally to reduce the bandwidth needed to send data to other devices or to the cloud. To provide security, our cameras also incorporate hardware-backed security capabilities and end-to-end encryption of data both locally and while being sent between our services."

Bringing the power of the cloud to the store

"While Amazon Web Services (AWS) helps us to elastically scale our resources to process data, stores can be a long distance from a data center, and there can often be a large amount of data to process. Our initial prototypes and installations in our own store formats started with all of our processing done in the cloud. As we scaled to different locations and larger store formats, we quickly needed to iterate on an architecture to allow us to run our algorithms where it makes the most sense—either in the cloud with elastic compute or in the store where the data is.

"To manage these bandwidth issues, we built an edge computing architecture to process sensor data and compute receipts locally without going back and forth to the cloud. Placing compute close to our data helps us to improve reliability by sending less data over the internet.

Ultimately, all our cameras, sensors, and scanners create a significant amount of data to process. To make the whole system more robust, the data streams are processed as independently as possible, resulting in a highly concurrent and asynchronous architecture."

Nice @DollazAndSense! One can only assume we are or will be working with Amazon (IMO DYOR).


[Screenshot: LinkedIn post by Amit Mate]
 
  • Like
  • Fire
  • Love
Reactions: 39 users

JDelekto

Regular
This just popped up, and may mean something to someone......


View attachment 33755
View attachment 33756
These are the Python packages for the Akida Execution Engine that appear to have been recently updated (sorry for re-stating the obvious). They contain the Akida emulation layer, and if you have the hardware available (such as the PCIe card), they can be used for training and inference.

Pypi.org also has the CNN to SNN converter package, the various Akida models from their model zoo, as well as a library for quantizing models.

The homepage link will take you to BrainChip's Meta TF documentation, to which their main website also links via the "Try Meta TF" button under the "Meta TF" option on the Products menu. I installed version 2.3.2 a couple of weeks ago, so I'll need to take a look and see what has changed.

It's also worth mentioning that BrainChip has its own Github Repositories, with the source code for its PCIe driver (currently Linux only), as well as their samples. Interestingly, their release notes only seem to go up to 2.3.0. Maybe they have yet to push the changes.

A bonus search for Akida on Github will also unearth an excellent repository with @uiux 's projects that were created to take advantage of the Akida hardware.
 
  • Like
  • Fire
  • Love
Reactions: 34 users
D

Deleted member 118

Guest
A forecast of an 8 trillion neuromorphic computing market by 2030:
------
Update. Embarrassing error, 8 billion of course.


1% of that, is a small 80 billion $
 
Last edited by a moderator:
  • Like
  • Haha
  • Love
Reactions: 13 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Sorry forum I just got dreadbotted for the first time 🫨 I'll keep it clearly on topic from now on 😅

Lucky you didn't get dreddb0tted twice @skutza, because your reply was actually off-topic too, truth be told. ❤️ from Bra-b0tt.:ROFLMAO::LOL:

I'm just going to squeeze this in here so that I don't get in twuble too. I've been wondering about the possibility of our integration into Qualcomm's Flex SoC, which is due to enter the market in early 2024. I personally think there's a pretty reasonable prospect of that happening, IMO.

 
  • Like
  • Fire
  • Haha
Reactions: 17 users

Moonshot

Regular
  • Like
  • Love
Reactions: 2 users
Top Bottom