BRN Discussion Ongoing

There was a discussion about Vivotek about a week ago, and perhaps the comment below has gone unnoticed.....


Vivotek is launching its first-ever facial recognition camera, FD9387-FR-v2, which integrates edge computing to help enterprises quickly identify the gender and age of people in the video on the edge, as well as those who are wearing masks.


Could this be a product for sale right now that has AKIDA?






Advertisement for sale:
 
  • Like
  • Fire
  • Thinking
Reactions: 30 users

IloveLamp

Top 20
  • Like
  • Fire
  • Love
Reactions: 26 users

Deadpool

hyper-efficient Ai
I will have my usual gear on, like the below, and will be geeing up the investors before Sean and co walk in to deliver their news. Hopefully all good news. So I shouldn't stand out at all.
Video Games Clap GIF by Call of Duty League


After the AGM I will have a confessional booth set up to the side for BRN management or anyone else who attends to confess anything or get things off their chest. Maybe chat about NDAs, but unfortunately I will not be able to tell anyone. Lol

View attachment 33707
Bless me, Father, for I have sinned: I was meant to buy a 2ltr milk but couldn't resist buying 6 more BRN shares at this ridiculously low price.
 
  • Haha
  • Like
Reactions: 32 users

manny100

Regular
In Dec'20 BRN and Renesas struck a deal.
  • Firstly, the company has signed an intellectual property licensing agreement with major semiconductor manufacturer Renesas Electronics America
  • Under the agreement, Renesas will gain a single-use, royalty-bearing, worldwide design license to use the Akida IP
https://***************.com.au/brai...ments based on the volume of processors sold.

On Dec 2nd 2022, BRN reported that Renesas was taping out a chip using BRN tech.

Renesas had basically 2 years to work with AKIDA: testing, talking to clients, etc.
There is no way Renesas would have produced the chip just for it to gather dust.
It's a certainty they would have done their tech research and worked with clients over the 2-year period.
No company would produce a product unless it was certain of demand and/or advance orders.
IMO the chip was produced for sale simply because that is what Renesas do. No different than Ford making cars to sell.
No dot joining there.
I expect to see revenue start in the 1st quarter. The product sales will appear in Renesas' accounts, the royalty in BRN's accounts.
When and how much depends on the timing of Renesas sales and royalty payments.
As the sales belong to Renesas, BRN cannot make an announcement.
I doubt the ASX will allow an announcement when the 1st royalty $ comes in, simply because the royalty flows from Renesas sales, over which BRN has no control, and therefore BRN would not be able to make a reliable estimate.
Does this make sense or am I missing something??
Happy to be corrected.
 
  • Like
  • Fire
  • Love
Reactions: 35 users

miaeffect

Oat latte lover
 
  • Wow
  • Like
  • Fire
Reactions: 5 users

Steve7777

Regular
There was a discussion about Vivotek about a week ago, and perhaps the comment below has gone unnoticed.....


Vivotek is launching its first-ever facial recognition camera, FD9387-FR-v2, which integrates edge computing to help enterprises quickly identify the gender and age of people in the video on the edge, as well as those who are wearing masks.


Could this be a product for sale right now that has AKIDA?

View attachment 33708




Advertisement for sale:

Good find! This, to me, looks like it could well be our 1st product to market?

A trio of edge AI developers have revealed new technologies that bring biometric storage and processing close to the place of application. They include a new camera from Vivotek, a family of vision-processing chips from Hailo, and a real-time data processing platform from BrainChip.

Vivotek has launched its first facial recognition camera, FD9387-FR-v2, that combines edge computing to identify gender and age from video footage even when people wear masks. The company says it can store up to 10,000 profiles with a 99 percent accuracy rate and is compliant with the U.S. National Defense Authorization Act.

 
  • Like
  • Fire
Reactions: 25 users
Good find! This, to me, looks like it could well be our 1st product to market?

A trio of edge AI developers have revealed new technologies that bring biometric storage and processing close to the place of application. They include a new camera from Vivotek, a family of vision-processing chips from Hailo, and a real-time data processing platform from BrainChip.

Vivotek has launched its first facial recognition camera, FD9387-FR-v2, that combines edge computing to identify gender and age from video footage even when people wear masks. The company says it can store up to 10,000 profiles with a 99 percent accuracy rate and is compliant with the U.S. National Defense Authorization Act.



Sorry, but this has been looked at before.

The preface sentence of the article is about 3 separate products.

The section about BrainChip is talking about the 2nd-gen chip, not a camera.

It’ll happen.
 
  • Like
  • Fire
  • Love
Reactions: 20 users

MDhere

Regular
  • Haha
  • Love
Reactions: 3 users

TechGirl

Founding Member

Awesome, thanks so much :)


Recapping Embedded World 2023: Where the World Learns About What's Next​

So much happened at Embedded World 2023. Let's take a look back!​


Philip Ling
17 hours ago

Everyone is a winner at Embedded World 2023. Companies of all sizes had the chance to meet electronics engineers from across the globe.

The exhibition halls at Nuremberg Messe were buzzing for three full days, as hundreds of companies and thousands of visitors met to discuss the technologies that will shape all our tomorrows.
Not surprisingly, machine learning and artificial intelligence were everywhere. We can now use AI in embedded products through an increasing number of hardware and software products. There seem to be no limits to the ways innovators are implementing the fundamental features of AI at the edge. That may be in a purely software way running on general-purpose microprocessors, like GreenWaves Technologies, or using hardwired IP deeply integrated alongside an MCU from Renesas. It could be dedicated neuromorphic processor technology, such as BrainChip's Akida, or it may be a combination of all these approaches in the form of soft- and hard-IP ported to an SoC, like that of Synaptics. Examples of all these vectors were present at Embedded World.

Other hot topics included Matter. This is the latest – and probably the most comprehensive to date – attempt to simplify and consolidate the smart home industry. The standard was officially released in October 2022 and many major semiconductor vendors, including STMicroelectronics, Silicon Labs, and Nordic Semiconductor, have been actively promoting and supporting it since then. There was clearly a lot of interest from engineers looking to learn more.

What else caught our attention? How about a TFT display that thinks it’s ePaper? As a bistable technology, the TFT display consumes no power when it isn’t being updated, but because it’s made using regular TFT manufacturing processes, the manufacturer can offer a lot more flexibility, and even a backlight. Pretty cool, right?

And if you read our preview of things to look out for at this year’s Embedded World, you may have noticed that Slint, one of our top companies to see, won an Embedded Award for Best Tool!


Also during the show, Infineon announced the winners of our Propel Human-Machine Interactions into the Future challenge, based on the company's CAPSENSE technology.
And as if that wasn't enough, NXP showcased NXP HoverGames3: Land, Sky, Food Supply from their booth.
Next year, the company behind the Embedded World exhibition and conference will launch a North America event in Austin, Texas, as well as another new event in China later this year. We’re all really excited about the expansion of this leading event for the electronics industry.


Philip Ling
I was born an engineer, grew up to be a technical journalist and evolved into a technical marketer. I'm now part of Team Avnet.
 
  • Like
  • Love
  • Fire
Reactions: 24 users
That is real beneficial AI.
No way any Australian government would want this technology; they would find out that most of the roads need repair or replacement.
 
  • Haha
  • Like
  • Fire
Reactions: 18 users

charles2

Regular
Last edited:
  • Thinking
  • Like
Reactions: 4 users

Deleted member 118

Guest
  • Like
Reactions: 2 users
A forecast of an 8 trillion neuromorphic computing market by 2030:
------
Update. Embarrassing error, 8 billion of course.
 
Last edited:
  • Like
  • Fire
  • Haha
Reactions: 24 users

Tothemoon24

Top 20
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm.

How we are pushing the boundaries of integer quantization and making it possible at scale.​

Artificial intelligence (AI) is enhancing our daily lives, whether it’s improving collaboration, providing entertainment, or transforming industries through autonomous vehicles, digital health, smart factories, and more. However, these AI-driven benefits come at a high energy consumption cost. To put things in perspective, it is expected that by 2025 deep neural networks will have reached 100 trillion weight parameters, which is similar to the capacity of the human brain, or at least the number of synapses in the brain.

Why quantization continues to be important for edge AI​

To make our vision of intelligent devices possible, AI inference needs to run efficiently on the device — whether a smartphone, car, robot, drone, or any other machine. That is why Qualcomm AI Research continues to work on holistic model efficiency research, which covers areas such as quantization, neural architecture search, compilation, and conditional compute. Quantization is particularly important because it allows for automated reduction of weights and activations to improve power efficiency and performance while maintaining accuracy. Based on how power is consumed in silicon, we see an up to 16X increase in performance per watt and a 4X decrease in memory bandwidth by going from 32-bit floating point (FP32) to 8-bit integer quantization (INT8), which equates to a lot of power savings and/or increased performance. Our webinar explores the latest findings in quantization.
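
To make the memory arithmetic concrete: an FP32 weight occupies 32 bits and an INT8 weight 8 bits, which is where the 4X reduction in memory bandwidth comes from. Below is a minimal, self-contained NumPy sketch of uniform affine quantization (a generic textbook formulation, not Qualcomm's implementation) that round-trips a weight tensor through INT8 and reports the storage saving and reconstruction error.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Uniform affine (asymmetric) quantization of an FP32 tensor to INT8."""
    qmin, qmax = -128, 127
    scale = (x.max() - x.min()) / (qmax - qmin)    # real-valued step between integer levels
    zero_point = np.round(qmin - x.min() / scale)  # integer code that represents real 0.0
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: float) -> np.ndarray:
    """Map INT8 codes back to approximate FP32 values."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(1024, 1024).astype(np.float32)
q, scale, zp = quantize_int8(weights)

print("FP32 size:", weights.nbytes, "bytes")  # 4 bytes per value
print("INT8 size:", q.nbytes, "bytes")        # 1 byte per value -> 4x less memory traffic
print("mean abs error:", np.abs(weights - dequantize(q, scale, zp)).mean())
```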

AI engineers and developers can opt for post-training quantization (PTQ) and/or quantization-aware training (QAT) to optimize and then deploy their models efficiently on the device. PTQ consists of taking a pre-trained FP32 model and optimizing it directly into a fixed-point network. This is a straightforward process but may yield lower accuracy than QAT. QAT requires training/fine-tuning the network with the simulated quantization operations in place, but yields higher accuracy, especially for lower bit-widths and certain AI models.
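
As a rough illustration of the "simulated quantization operations" that QAT inserts, the sketch below fake-quantizes activations in the forward pass while letting gradients flow through unchanged (a straight-through estimator). It is a simplified stand-in for what quantization toolkits do internally, not AIMET's or Qualcomm's actual code.

```python
import torch
import torch.nn as nn

class FakeQuant(nn.Module):
    """Simulates INT8 quantization in the forward pass; gradients pass straight through."""
    def __init__(self, bits: int = 8):
        super().__init__()
        self.qmin, self.qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scale = (x.max() - x.min()).clamp(min=1e-8) / (self.qmax - self.qmin)
        zp = torch.round(self.qmin - x.min() / scale)
        q = torch.clamp(torch.round(x / scale + zp), self.qmin, self.qmax)
        x_q = (q - zp) * scale           # dequantized value actually used downstream
        return x + (x_q - x).detach()    # straight-through estimator trick

# A tiny model with simulated quantization after each layer.
model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(), FakeQuant(bits=8),
    nn.Linear(32, 4),  FakeQuant(bits=8),
)
x = torch.randn(8, 16)
loss = model(x).sum()
loss.backward()                          # gradients still reach the Linear weights despite the rounding
```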



Integer quantization is the way to do AI inference​

For the same numbers of bits, integers and floating-point formats have the same number of values but different distributions. Many assume that floating point is more accurate than integer, but it really depends on the distribution of the model. For example, FP8 is better at representing distributions with outliers than INT8, but it is less practical as it consumes more power — also note that for distributions without outliers, INT8 provides a better representation. Furthermore, there are multiple FP8 formats in existence, rather than a one-size-fits-all. Finding the best FP8 format and supporting it in hardware comes at a cost. Crucially, with QAT the benefits offered by FP8 in outlier-heavy distributions are mitigated, and the accuracy of FP8 and INT8 becomes similar — refer to our NeurIPS 2022 paper for more details. In the case of INT16 and FP16, PTQ mitigates the accuracy differences, and INT16 outperforms FP16.
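
A quick way to see why outliers matter for integer formats: with simple min/max calibration, a single large outlier stretches the INT8 step size, so the error on all of the ordinary values grows. The toy comparison below is my own illustration of that effect, not an experiment from the paper.

```python
import numpy as np

def int8_roundtrip_error(x: np.ndarray) -> float:
    """Quantize to INT8 with simple min/max scaling and report the mean abs error."""
    qmin, qmax = -128, 127
    scale = (x.max() - x.min()) / (qmax - qmin)
    zp = np.round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale + zp), qmin, qmax)
    return float(np.abs(x - (q - zp) * scale).mean())

rng = np.random.default_rng(0)
well_behaved = rng.normal(0, 1, 10_000).astype(np.float32)
with_outliers = well_behaved.copy()
with_outliers[:10] = 50.0                       # a handful of large outliers

print("error, no outliers:  ", int8_roundtrip_error(well_behaved))
print("error, with outliers:", int8_roundtrip_error(with_outliers))  # noticeably larger
```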



Making 4-bit integer quantization possible​

The latest findings by Qualcomm AI Research go one step further and address another issue harming the model accuracy: oscillating weights when training a model. In our ICML 2022 accepted paper, “Overcoming Oscillations in Quantization-Aware Training,” we show that higher oscillation frequencies during QAT negatively impact accuracy, but it turns out that this issue can be solved with two state-of-the-art (SOTA) methods called oscillation dampening and iterative freezing. SOTA accuracy results are achieved for 4-bit and 3-bit quantization thanks to the two novel methods.

Enabling the ecosystem with open-source quantization tools​

Now, AI engineers and developers can efficiently implement their machine learning models on-device with our open-source tools AI Model Efficiency Toolkit (AIMET) and AIMET Model Zoo. AIMET features and APIs are easy to use and can be seamlessly integrated into the AI model development workflow. What does that mean?

  • SOTA quantization and compression tools
  • Support for both TensorFlow and PyTorch
  • Benchmarks and tests for many models
AIMET enables accurate integer quantization for a wide range of use cases and model types, pushing weight bit-width precision all the way down to 4-bit.



In addition to AIMET, Qualcomm Innovation Center has also made the AIMET Model Zoo available: a broad range of popular pre-trained models optimized for 8-bit inference across a broad range of categories from image classification, object detection, and semantic segmentation to pose estimation and super resolution. AIMET Model Zoo’s quantized models are created using the same optimization techniques as found in AIMET. Together with the models, AIMET Model Zoo also provides the recipe for quantizing popular 32-bit floating point (FP32) models to 8-bit integer (INT8) models with little loss in accuracy. In the future, AIMET Model Zoo will provide more models covering new use cases and architectures while also enabling models with 4-bit weights quantization via further optimizations. AIMET Model Zoo makes it as simple as possible for an AI developer to grab an accurate quantized model for enhanced performance, latency, performance per watt, and more without investing a lot of time.



Model efficiency is key for enabling on-device AI and accelerating the growth of the connected intelligent edge. For this purpose, integer quantization is the optimal solution. Qualcomm AI Research is enabling 4-bit quantization without loss in accuracy, and AIMET makes this technology available at scale.

Tijmen Blankevoort
Director, Engineering, Qualcomm Technologies Netherlands B.V.

Chirag Patel
Engineer, Principal/Mgr., Qualcomm Technologies
 
  • Like
  • Thinking
  • Wow
Reactions: 15 users

Rskiff

Regular
A forecast of an 8 trillion neuromorphic computing market by 2030:
So say if BRN were to capture 1% of the market, then US $800 million. That would be US $0.444, so say a PE of 20 = US $8.888, then convert to AUD = $13.226 per share. Just saying.....
 
  • Like
  • Fire
  • Love
Reactions: 24 users
So say if BRN were to capture 1% of the market, then US $800 million. That would be US $0.444, so say a PE of 20 = US $8.888, then convert to AUD = $13.226 per share. Just saying.....
1% would be 80 billion USD
 
  • Like
  • Fire
Reactions: 10 users

Tothemoon24

Top 20
Why you don't need big data to train ML

When somebody says artificial intelligence (AI), they most often mean machine learning (ML). To create an ML algorithm, most people think you need to collect a labeled dataset, and the dataset must be huge. This is all true if the goal is to describe the process in one sentence. However, if you understand the process a little better, then big data is not as necessary as it first seems.

Why many people think nothing will work without big data

To begin with, let’s discuss what a dataset and training are. A dataset is a collection of objects that are typically labeled by a human so that the algorithm can understand what it should look for. For example, if we want to find cats in photos, we need a set of pictures with cats and, for each picture, the coordinates of the cat, if it exists.

During training, the algorithm is shown the labeled data with the expectation that it will learn how to predict labels for objects, find universal dependencies and be able to solve the problem on data that it has not seen.

One of the most common challenges in training such algorithms is called overfitting. Overfitting occurs when the algorithm remembers the training dataset but doesn’t learn how to work with data it has never seen.

Let’s take the same example. If our data contains only photos of black cats, then the algorithm can learn the relationship: black with a tail = a cat. But the false dependency is not always so obvious. If there is little data and the algorithm is high-capacity, it can simply memorize all of the data, latching onto uninterpretable noise.

The easiest way to combat overfitting is to collect more data because this helps prevent the algorithm from creating false dependencies, such as only recognizing black cats.

The caveat here is that the dataset must be representative (e.g., using only photos from a British shorthair fan forum won’t yield good results, no matter how large the pool is). Because more data is the simplest solution, the opinion persists that a lot of data is needed.
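
A compact way to see overfitting in practice (a generic scikit-learn illustration, not the article's cat example): fit a high-capacity model on a small, noisy dataset and compare its training accuracy with its accuracy on held-out data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A small, noisy dataset: easy to memorize, hard to generalize from.
X, y = make_classification(n_samples=60, n_features=20, n_informative=3,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

model = DecisionTreeClassifier(random_state=0)   # unconstrained depth = high capacity
model.fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))   # typically 1.0 (memorized)
print("test accuracy: ", model.score(X_test, y_test))     # noticeably lower
```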

Ways to launch products without big data

However, let’s take a closer look. Why do we need data? For the algorithm to find a dependency in them. Why do we need a lot of data? So that it finds the correct dependency. How can we reduce the amount of data? By prompting the algorithm with the correct dependencies.

Skinny algorithms

One option is to use lightweight algorithms. Such algorithms cannot find complex dependencies and, accordingly, are less prone to overfitting. The difficulty with such algorithms is that they require the developer to preprocess the data and look for patterns on their own.

For example, assume you want to predict a store’s daily sales, and your data is the address of the store, the date, and a list of all purchases for that date. A feature that will make the task easier is an indicator of whether the day is a day off. If it is a holiday, customers will probably make purchases more often, and revenue will increase.

Manipulating the data in this way is called feature engineering. This approach works well in problems where such features are easy to create based on common sense.
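
As a minimal sketch of that kind of feature engineering (the column names and dates here are made up for illustration), a day-off indicator can be derived directly from the date column before training a lightweight model:

```python
import pandas as pd

# Hypothetical daily sales data: one row per store per day.
sales = pd.DataFrame({
    "store":   ["A", "A", "A", "A"],
    "date":    pd.to_datetime(["2023-04-06", "2023-04-07", "2023-04-08", "2023-04-10"]),
    "revenue": [1200.0, 1850.0, 2100.0, 1150.0],
})

holidays = {pd.Timestamp("2023-04-07"), pd.Timestamp("2023-04-10")}  # e.g. public holidays

# Common-sense features a lightweight model can exploit directly.
sales["is_weekend"] = sales["date"].dt.dayofweek >= 5
sales["is_holiday"] = sales["date"].isin(holidays)
sales["day_off"]    = sales["is_weekend"] | sales["is_holiday"]

print(sales)
```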

However, in some tasks, such as working with images, everything is more difficult. This is where deep learning neural networks come in. Because they are high-capacity algorithms, they can find non-trivial dependencies where a person simply couldn’t work out the nature of the data. Almost all recent advances in computer vision are credited to neural networks. Such algorithms do typically require a lot of data, but they can also be prompted.

Searching the public domain

The first way to do this is by fine-tuning pre-trained models. There are many already-trained neural networks in the public domain. While there may not be one trained for your specific task, there is likely one from a similar area.

These networks have already learned some basic understanding of the world; they just need to be nudged in the right direction. Thus, there is only a need for a small amount of data. Here we can draw an analogy with people: A person who can skateboard will be able to pick up longboarding with much less guidance than someone who has never even stood on a skateboard before.
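
A minimal fine-tuning sketch along those lines, using a publicly available torchvision model (the number of classes and the dummy data are placeholders for your own task): freeze the pretrained backbone and train only a small new head, so far less labeled data is required.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5                                   # placeholder for your task

# Start from a network that already "understands" natural images.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():                  # freeze the pretrained backbone
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, num_classes)   # new, trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy data shaped like ImageNet inputs.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, num_classes, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```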

In some cases, the problem is not the number of objects but the number of labeled ones. Sometimes collecting data is easy, but labeling is very difficult. For example, when the labeling requires scientific expertise, such as when classifying body cells, the few people who are qualified to label the data are expensive to hire.

Even if there is no similar task available in the open-source world, it is still possible to come up with a task for pre-training that does not require labeling. One such example is training an autoencoder, which is a neural network that compresses objects (similar to a .zip archiver) and then decompresses them.

For effective compression, it only needs to find some general patterns in the data, which means we can use this pre-trained network for fine-tuning.
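
A bare-bones version of that idea in PyTorch (sizes are arbitrary): pre-train an autoencoder on unlabeled vectors, then reuse its encoder as the feature extractor for the later, much smaller labeled task.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(128, 32), nn.ReLU())
decoder = nn.Sequential(nn.Linear(32, 128))
autoencoder = nn.Sequential(encoder, decoder)

opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
mse = nn.MSELoss()

unlabeled = torch.randn(256, 128)                 # plenty of unlabeled examples

for _ in range(100):                              # pre-training: learn to reconstruct the input
    opt.zero_grad()
    loss = mse(autoencoder(unlabeled), unlabeled)
    loss.backward()
    opt.step()

# Later: bolt a small classifier onto the pre-trained encoder and fine-tune it
# on the few labeled examples you do have.
classifier = nn.Sequential(encoder, nn.Linear(32, 2))
```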

Active learning

Another approach to improving models when most of the data is unlabeled is called active learning. The essence of this concept is that the neural network itself suggests which examples it needs labeled and which examples are labeled incorrectly. The fact is that, along with its answer, the algorithm often reports its confidence in the result. Accordingly, we can run the intermediate model on the unlabeled data in search of examples where the output is uncertain, give them to people for labeling, and, after labeling, train again.
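
In code, the selection step of that loop can be as simple as ranking the unlabeled pool by the model's confidence in its top prediction and sending the least confident examples to annotators (a generic sketch, not any specific tool's API; `model` and `unlabeled_pool` are whatever you already have):

```python
import torch

def least_confident(model: torch.nn.Module, unlabeled: torch.Tensor, k: int = 10):
    """Return the indices of the k unlabeled samples the model is least sure about."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(unlabeled), dim=1)   # class probabilities
    confidence = probs.max(dim=1).values                 # confidence in the top class
    return torch.topk(-confidence, k).indices            # lowest confidence first

# Usage inside an active-learning loop:
# to_label = least_confident(model, unlabeled_pool, k=50)
# ...send those samples to annotators, add the new labels, retrain, repeat.
```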

It is important to note that this is not an exhaustive list of possible options; these are just a few of the simplest approaches. And remember that each of these approaches is not a panacea. For some tasks, one approach works better; for others, another will yield the best results. The more you try, the better results you will find.
 
  • Like
Reactions: 15 users

Foxdog

Regular
  • Haha
  • Like
  • Love
Reactions: 5 users

charles2

Regular
View attachment 33737
Couldn't find the link for the post. Sometimes it's like those tweets that flash on the screen and then disappear into the nether regions of cyberspace.

If Dr Hebb could see us now.​

Did you ever ask yourself where the concept of SNN originated at the biological level...​


D.O. Hebb, PhD, is often referred to as the father of neuropsychology. His research efforts were directed towards giving a biological explanation to psychological processes such as learning.

The Organization of Behavior: Hebbian Theory

Little did I know I was in the presence of a pioneering genius when I took his course at McGill University in 1966. I just thought it was Physiological Psychology 101.

Canadian Association for Neuroscience

Donald Olding Hebb​

by Sarah Ferguson
Donald Hebb
Donald Hebb (1904-1985) is often considered the “father of neuropsychology” because of the way he was able to merge the psychological world with the world of neuroscience. This achievement was accomplished largely through his work The Organization of Behavior: A Neuropsychological Theory which was published in 1949.

Beginnings

Donald Olding Hebb was born on July 22, 1904 in Chester, Nova Scotia where he lived until his family moved to Dartmouth when he was 16. He was homeschooled by his mother until he was eight years old. He was placed in the seventh grade at the young age of 10, but in high school, Hebb was not impressed with policy or authority and failed Grade 11 the first time. He was able to graduate from high school and went to Dalhousie University with the desire to become a writer, receiving his BA in 1925.
Hebb then became a teacher. He came to McGill in 1928 as a part-time graduate student in psychology while also working as headmaster at a Montreal school. He received his Master’s in 1932.

Departure from – and return to – Montreal

Two years later, Hebb was looking for a change of scenery due to both his frustrations with the limitations imposed by the Quebec curriculum on his job as headmaster and the direction of psychology research of the McGill department at the time – Hebb was more interested in the physiology of psychology. He was accepted to do a PhD with famous behaviourist Karl Lashley at the University of Chicago in 1934, following him to Harvard the following year, and completed his thesis there in 1936. Working with Lashley provided Hebb with the opportunity to study learning and spatial orientation with an emphasis on the neurobiological aspect, which was where his interests lay. It was at Harvard that this interest in the development of neural networks started to flourish.
He then returned to Montreal – armed with a PhD – to work with Wilder Penfield at the Montreal Neurological Institute in 1937. There, he researched the impact of surgeries and head injuries on brain functioning. He developed new tests for brain surgery patients to test their functioning post-operation – the Adult Comprehension Test and the Picture Anomaly Test – that measured specific functions as opposed to the typical overall intelligence tests that were being used.
In 1939 he again left Montreal, this time for eight years to first teach at Queen’s University and then to reunite with Lashley at the Yerkes National Primate Research Center. Hebb returned to McGill as a psychology professor in 1947 where he remained until 1974 – serving as Chancellor of the university from 1970-74.

The Organization of Behavior: Hebbian Theory

Hebb’s major contribution to the fields of both neuroscience and psychology was bringing the two together. Published in 1949, The Organization of Behavior: A Neuropsychological Theory is the book in which Hebb outlined his theory about how learning is accomplished within the brain. Perhaps the most well-known part of this work is what has become known as the Hebbian Theory or cell assembly theory.
The Hebbian theory aims to explain how neural pathways are developed based on experiences. As certain connections are used more frequently, they become stronger and faster. This hypothesis is perhaps best described by the following passage from The Organization of Behavior:
“When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”
So, cell A and cell B are located near each other. As cell A is repeatedly involved in the firing of cell B by exciting it, a change occurs in one of or both the cells. This change improves how effective cell A is at contributing to the firing of cell B. The two cells become associated with each other. This is often described as “cells that fire together, wire together.”
Hebbian theory provides a micro, physiological mechanism for the learning and memory processes. This theory has also been extended to computational machines that model biological processes and in artificial intelligence.
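
In its simplest computational form, Hebb's postulate becomes a weight update proportional to the joint activity of the pre- and postsynaptic cells (Δw = η · x · y). The NumPy sketch below is a textbook-style illustration of that rule, with a crude normalization so the weights stay bounded; it is not a model of any particular neuromorphic chip.

```python
import numpy as np

rng = np.random.default_rng(42)
eta = 0.05                                   # learning rate
w = np.full(8, 0.1)                          # weights from 8 input cells ("A") onto one output cell ("B")

for _ in range(2000):
    # Inputs 0-3 tend to fire together; inputs 4-7 fire independently and rarely.
    group = float(rng.random() < 0.5)
    x = np.concatenate([np.full(4, group), (rng.random(4) < 0.1).astype(float)])
    y = float(w @ x > 0.3)                   # cell B fires when its total drive crosses a threshold
    w += eta * x * y                         # Hebb: inputs active while B fires are strengthened
    w /= max(np.linalg.norm(w), 1e-8)        # crude normalization keeps the weights bounded

print(np.round(w, 3))                        # the correlated group (first four weights) ends up strongest
```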

The Organization of Behavior: Bringing Two Groups Together

First and foremost, the purpose of The Organization of Behavior was to illustrate Hebb’s theory about behaviour. It also played a huge role in merging the fields of psychology and neuroscience. Hebb states this goal in the introduction to the book:
“Another [goal] is to seek a common ground with the anatomist, physiologist, and neurologist, to show them how psychological theory relates to their problems and at the same time to make it more possible for them to contribute to that theory.”
Hebb saw a need for psychology to work with neurology and physiology to be able to explain human behaviour in a more objective manner. In this way, the more abstract “mind” that psychology tended to focus on was merged with the physical, biological brain function. Hebb argued that this approach was necessary if psychology was going to be viewed as a scientific discipline.
The field of neuropsychology perseveres under the umbrella of both neuroscience and psychology, aiming to explore how behaviour correlates with brain function.

Awards and Honours

1966: elected as a Fellow of the Royal Society
1980: The Donald O. Hebb Award was created and named in his honour. Hebb was the first recipient. It is awarded annually to a member or fellow of the Canadian Psychological Association for their contribution to Canadian psychology as a science.
2003: Posthumously inducted into the Canadian Medical Hall of Fame
Other Resources
Hebb, Donald O. (1949), The Organization of Behavior: A Neuropsychological Theory. New York: Wiley & Sons.
Hebb, D. O. (1959) A neuropsychological theory. In S. Koch (Ed), Psychology: A Study of a Science. Vol 1. New York: McGraw-Hill
Hebb, D. O. (1980) Essay on Mind. Hillsdale, NJ: Erlbaum





 
  • Like
Reactions: 3 users