BRN Discussion Ongoing

Xray1

Regular
Nice pick up, Mr Romper.

The full documentation is not up on Espacenet yet (published 20230629).

Inventors:
MCLELLAND DOUGLAS [FR]; CARLSON KRISTOFOR D [US]; JOHNSON KEITH WILLIAM [AU]; JOSHI MILIND [AU]

No PvdM as inventor.

Milind Joshi is our patent attorney in Perth.



US2023206066A1 SPIKING NEURAL NETWORK

Disclosed herein are system, method, and computer program embodiments for an improved spiking neural network (SNN) configured to learn and perform unsupervised, semi-supervised, and supervised extraction of features from an input dataset. An embodiment operates by receiving a modification request to modify a base neural network, having N layers and a plurality of spiking neurons, trained using a primary training dataset. The base neural network is modified to include supplementary spiking neurons in the Nth or (N+1)th layer of the base neural network. The embodiment includes receiving a secondary training dataset and determining membrane potential values of one or more supplementary spiking neurons in the Nth or (N+1)th layer, which learn features based on the secondary training dataset, to select a supplementary/winning spiking neuron. The embodiment performs a learning function for the modified neural network based on the winning spiking neuron.


View attachment 39151



View attachment 39152
Looks like the supplementary spiking neurons are 1002, 1004.

One thing it is designed to do is to adjust the multi-bit weights, so I guess that's why they need the ALU.
Diogenese ........... Interesting point that there is "No PvdM" as inventor, nor Anil.
Could there be a conflict somewhere behind such non-inclusion?
 
  • Haha
  • Thinking
Reactions: 2 users

Diogenese

Top 20
This patent, which was filed in December 2021, addresses supervised learning, semi-supervised learning, and autonomous learning with multi-bit (4-bit) weights and activations.

Autonomous learning is a significant feature.

Akida 1000 has 4-bit weights and activations, so this must be an alternative, improved technique.

The patent provides for alternative ways of recalculating the weights and activations apart from the ALU, but the ALUs are shown at 152 in Figure 1.

View attachment 39177


View attachment 39176

The claims set out the specifics of the invention. In this case the invention is directed to machine learning, and it does this by adding NPUs to the final layer of an NN. The final layer is where the learning takes place. The "secondary training data set" spikes could be the activation event spikes from the sensor (camera/microphone/...), the primary training data set having been provided by the model library data used in initial configuration.

Supplementary spiking neurons (NPUs) are added to the final layer (Fig 10, 1002, 1004) where Akida does its learning, presumably to incorporate newly learned features. The ALUs of Figure 1 would be involved in the step of "performing a learning function ... by performing a synaptic weight value variation ...", bearing in mind that this is for multi-bit weights and activations.


1688352452854.png
 
  • Like
  • Love
  • Fire
Reactions: 18 users

Diogenese

Top 20
The claims set out the specifics of the invention. In this case the invention is directed to machine learning, and it does this by adding NPUs to the final layer of a NN. The final layer is where the learning takes place. The "secondary training data set" spikes could be the activation event spikes from the sensor (camera/microphone/...), the primary training data set having been provided by the model library data used in initial configuration.

Supplementary spiking neurons (NPUs) are added to the final layer (Fig 10, 1002, 1004) where Akida does its learning, presumably to incorporate newly learned features. The ALUs of Figure 1 would be involved in the step of "performing a learning function ... by performing a synaptic weight value variation ...", bearing in mind that this is for multi-bit weights and activations.


View attachment 39178
Aha! The penny's dropped. Remember when 8-bit weights were announced?

This change may be to accommodate 8-bit weights/activations.

The ALUs may be more efficient at handling the multi-bit "spikes" than the original Akida configuration.

I found this Sanskrit engraving on Eric von Dunnycan's tomb:

https://doc.brainchipinc.com/_modules/akida_models/imagenet/model_mobilenet.html
...
weight_quantization (int, optional): sets all weights in the model to have a particular quantization bitwidth except for the weights in the first layer.
Defaults to 0.

* '0' implements floating point 32-bit weights.
* '2' through '8' implements n-bit weights where n is from 2-8 bits.
activ_quantization (int, optional): sets all activations in the model to have a particular activation quantization bitwidth.
Defaults to 0.
...
input_scaling (tuple, optional): scale factor and offset to apply to inputs. Defaults to (128, -1). Note that following Akida convention, the scale factor is an integer used as a divider.
...
© Copyright 2022, BrainChip Holdings Ltd. All Rights Reserved.

If I recall correctly, it is only the weights that are 8-bit, and only for the purpose of compatibility with 3rd party model libraries.

If there are 8-bit weights and 4-bit activations, an 8*4 matrix would be used.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 72 users

Gemmax

Regular
Aha! The penny's dropped. Remember when 8-bit weights were announced?

This change may be to accommodate 8-bit weights/activations.

The ALUs may be more efficient at handling the multi-bit "spikes" than the original Akida configuration.

I found this Sanskrit engraving on Eric von Dunnycan's tomb:

https://doc.brainchipinc.com/_modules/akida_models/imagenet/model_mobilenet.html
...
weight_quantization (int, optional): sets all weights in the model to have a particular quantization bitwidth except for the weights in the first layer.
Defaults to 0.

* '0' implements floating point 32-bit weights.
* '2' through '8' implements n-bit weights where n is from 2-8 bits.
activ_quantization (int, optional): sets all activations in the model to have a particular activation quantization bitwidth.
Defaults to 0.
...
input_scaling (tuple, optional): scale factor and offset to apply to inputs. Defaults to (128, -1). Note that following Akida convention, the scale factor is an integer used as a divider.
...
© Copyright 2022, BrainChip Holdings Ltd. All Rights Reserved.

If I recall correctly, it is only the weights that are 8-bit, and only for the purpose of compatibility with 3rd party model libraries.

If there are 8-bit weights and 4-bit activations, an 8*4 matrix would be used.
I was thinking the same thing, Dodgy. 😆😆 Thank you for your contributions!
 
  • Haha
  • Like
Reactions: 17 users

TECH

Regular
Great watch ... Thanks MrNick (y):cool:(y)

Hey Guys,

Did you bother to check out the car's number plate?

SEQ (Sequence) 1010 (binary) E (Edge)

Brainchips in there somewhere :ROFLMAO::ROFLMAO::ROFLMAO:
 
  • Like
  • Haha
  • Fire
Reactions: 24 users
Aha! The penny's dropped. Remember when 8-bit weights were announced?

This change may be to accommodate 8-bit weights/activations.

The ALUs may be more efficient at handling the multi-bit "spikes" than the original Akida configuration.

I found this Sanskrit engraving on Eric von Dunnycan's tomb:

https://doc.brainchipinc.com/_modules/akida_models/imagenet/model_mobilenet.html
...
weight_quantization (int, optional): sets all weights in the model to have a particular quantization bitwidth except for the weights in the first layer.
Defaults to 0.

* '0' implements floating point 32-bit weights.
* '2' through '8' implements n-bit weights where n is from 2-8 bits.
activ_quantization (int, optional): sets all activations in the model to have a particular activation quantization bitwidth.
Defaults to 0.
...
input_scaling (tuple, optional): scale factor and offset to apply to inputs. Defaults to (128, -1). Note that following Akida convention, the scale factor is an integer used as a divider.
...
© Copyright 2022, BrainChip Holdings Ltd. All Rights Reserved.

If I recall correctly, it is only the weights that are 8-bit, and only for the purpose of compatibility with 3rd party model libraries.

If there are 8-bit weights and 4-bit activations, an 8*4 matrix would be used.
U had me.gif
 
Last edited:
  • Haha
  • Like
Reactions: 14 users

buena suerte :-)

BOB Bank of Brainchip
Hey Guys,

Did you bother to check out the car's number plate?

SEQ (Sequence) 1010 (binary) E (Edge)

Brainchips in there somewhere :ROFLMAO::ROFLMAO::ROFLMAO:
Nice work TECH ... 🔎🔎 🕵️‍♂️ ;)
 
  • Like
  • Haha
Reactions: 8 users
Interesting - familiar phrasing and poster.


IMG_4291.jpeg

IMG_4292.jpeg



IMG_4293.jpeg

IMG_4294.jpeg


 
  • Like
  • Fire
  • Love
Reactions: 51 users

Learning

Learning to the Top 🕵‍♂️
It's just fundamentally essential to have a low-power processor for "the future of mobility & Environment".

Screenshot_20230703_203119_LinkedIn.jpg



Learning 🏖
 
  • Like
  • Fire
  • Love
Reactions: 29 users

TopCat

Regular
  • Like
  • Fire
Reactions: 12 users

IloveLamp

Top 20
  • Like
  • Fire
Reactions: 5 users

Jasonk

Regular
IPro Silicon IP Ltd: has anyone come across useful information? I was curious whether IPro resides in a tech precinct.
After working at ARM, Mauro Diamant started IPro Silicon while simultaneously working for Tiempo, Signature IP, and SiFive, all of which are listed as IP vendor partners.

I feel I am missing something....

Update: I did come across an earnings guide for $5 million. Currency was not listed.

1688387977190.png


1688388023556.png
 
Last edited:
  • Thinking
  • Like
Reactions: 6 users
I know there has been some, shall we say, conjecture recently on Nviso, but this popped up in a Google search and is dated a week or so ago.

Pushing the neuromorphic mobile EVK.

Works for me if it can get some traction :)





1920x768_Insights-Develop-1024x410.jpg


HUMAN BEHAVIOUR AI​

MOBILE PHONES​

NVISO’s Human Behaviour AI SDK allows application developers to build innovative solutions to transform our lives using AI on mobile phones. Understand people and their behavior to make autonomous devices safe, secure, and personalized for humans.

DOWNLOAD TRIAL EVK


AI-ENABLED​

HUMAN MACHINE INTERFACES​

NVISO’s Mobile SDK provides a robust real-time human behaviour AI API, NVISO Neuro Models™ interoperable and optimised for neuromorphic computing, and the ability for flexible sensor integration and placement, while delivering faster development cycles and time-to-value for software developers and integrators. It enables solutions that can sense, comprehend, and act upon human behavior including emotion recognition, gaze detection, distraction detection, drowsiness detection, gesture recognition, 3D face tracking, face analysis, facial recognition, object detection, and human pose estimation. Designed for real-world environments using edge computing, it uniquely targets deep learning for embedded systems.

nviso_mobile_sdk_overview2.png


NVISO delivers real-time perception and observation of people and objects in contextual situations combined with the reasoning and semantics of human behavior based on trusted scientific research. The NVISO Mobile SDK is supported through a long term maintenance agreement for multi-party implementation of tools for AI systems development and can be used with large-scale neuromorphic computing systems. When used with neuromorphic chips, the NVISO Mobile SDK can be used to build gaze detection systems, distraction and drowsiness detection systems, facial emotion recognition software, and a range of other applications of neuromorphic computing where understanding human behaviour in real-time is mission critical.

Screenshot_2023-07-03-21-18-29-12_4641ebc0df1485bf6b47ebd018b5ee76.jpg
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 44 users

Diogenese

Top 20
I know there has been some, shall we say, conjecture recently on Nviso, but this popped up in a Google search and is dated a week or so ago.

Pushing the neuromorphic mobile EVK.

Works for me if it can get some traction :)





1920x768_Insights-Develop-1024x410.jpg


HUMAN BEHAVIOUR AI​

MOBILE PHONES​

NVISO’s Human Behaviour AI SDK allows application developers to build innovative solutions to transform our lives using AI on mobile phones. Understand people and their behavior to make autonomous devices safe, secure, and personalized for humans.

DOWNLOAD TRIAL EVK


AI-ENABLED​

HUMAN MACHINE INTERFACES​

NVISO’s Mobile SDK provides a robust real-time human behaviour AI API, NVISO Neuro Models™ interoperable and optimised for neuromorphic computing, and the ability for flexible sensor integration and placement, while delivering faster development cycles and time-to-value for software developers and integrators. It enables solutions that can sense, comprehend, and act upon human behavior including emotion recognition, gaze detection, distraction detection, drowsiness detection, gesture recognition, 3D face tracking, face analysis, facial recognition, object detection, and human pose estimation. Designed for real-world environments using edge computing, it uniquely targets deep learning for embedded systems.

nviso_mobile_sdk_overview2.png


NVISO delivers real-time perception and observation of people and objects in contextual situations combined with the reasoning and semantics of human behavior based on trusted scientific research. The NVISO Mobile SDK is supported through a long term maintenance agreement for multi-party implementation of tools for AI systems development and can be used with large-scale neuromorphic computing systems. When used with neuromorphic chips, the NVISO Mobile SDK can be used to build gaze detection systems, distraction and drowsiness detection systems, facial emotion recognition software, and a range of other applications of neuromorphic computing where understanding human behaviour in real-time is mission critical.

View attachment 39212

Hi Fmf,

This bit is especially interesting:


MICROCONTROLLER UNIT (MCU)

AI functionality is implemented in low-cost MCUs via inference engines specifically targeting MCU embedding design requirements which are configured for low-power operations for continuous monitoring to discover trigger events in a sound, image, or vibration and more. In addition, the availability of AI-dedicated co-processors is allowing MCU suppliers to accelerate the deployment of machine learning functions.
 
  • Like
  • Fire
  • Thinking
Reactions: 44 users
Hi Fmf,

This bit is especially interesting:


MICROCONTROLLER UNIT (MCU)

AI functionality is implemented in low-cost MCUs via inference engines specifically targeting MCU embedding design requirements which are configured for low-power operations for continuous monitoring to discover trigger events in a sound, image, or vibration and more. In addition, the availability of AI-dedicated co-processors is allowing MCU suppliers to accelerate the deployment of machine learning functions.
Hi D

You thinking a potential tie-in with someone in particular? :unsure:
 
  • Like
  • Love
  • Haha
Reactions: 8 users

Diogenese

Top 20
Hi D

You thinking a potential tie in with someone in particular :unsure:
Do we know where nViso gets their MCUs?

Akida 1500 could work with MCUs. All that is needed is sufficient processing power and memory to enable configuration of Akida and to handle the output from Akida.

We know Renesas has a licence for 2 nodes (8 NPUs, at 4 NPUs per node), but I don't know how many nodes are required for nViso's functions like face recognition.

nViso is a BrainChip partner, so it may not need a licence:


1688395950646.png


BrainChip and NVISO are targeting battery-powered applications in robotics and mobility devices addressing the need for high levels of AI performance in ultra low power environments. Implementing NVISO’s AI solutions with BrainChip’s Akida drives next generation solutions.



This is another signpost to Akida:

NVISO NEURO MODELS™​

ULTRA-EFFICIENT DEEP LEARNING AT THE EDGE​

NVISO Neuro Models™ are purpose-built for a new class of ultra-efficient machine learning processors designed for ultra-low power edge devices. Supporting a wide range of heterogeneous computing platforms ranging from CPU, GPU, DSP, NPU, and neuromorphic computing, they reduce the high barriers-to-entry into the AI space through cost-effective standardized AI Apps which work optimally at the extreme edge (low power, on-device, without requiring an internet connection). NVISO uses low and mixed precision activations and weights data types (1 to 8-bit) combined with state-of-the-art unstructured sparsity to reduce memory bandwidth and power consumption. Proprietary compact network architectures can be fully sequential, suitable for ultra-low power mixed signal inference engines, and fully interoperable with both GPUs and neuromorphic processors.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 47 users
Recent Indian article on MBRDI in India.

Neuromorphic and ChatGPT anyone?

Couple of excerpts.

How Mercedes-Benz is driving a culture of innovation at its research and development centre

By embracing innovation, the company is empowering its workforce to adopt a mindset that encourages thinking beyond conventional boundaries

By Radhika Sharma | HRKatha - June 23, 2023


Read more at: https://www.hrkatha.com/features/how-mercedes-benz-is-driving-a-culture-of-innovation/

Mercedes-Benz Research and Development India (MBRDI) recognises the pivotal role of innovation in promoting a healthy work culture. “One aspect that sets us apart is our work culture and emphasis on employee well-being. As the largest research and development centre of Mercedes-Benz outside of Germany, with a growing workforce, we have established ourselves as a powerhouse for delivering cutting-edge technology and innovation. Our primary focus is to drive the future of sustainable mobility, ensuring high quality, safety and innovation. Our commitment to innovation extends beyond technology and is reflected in our progressive work culture,” points out Mahesh Medhekar, vice president-human resources, Mercedes-Benz Research and Development India.


Innovation learning: The company focuses on innovation learning for its employees. Recently, it hosted a seminar on biomimicry — a field that combines generative AI and neuromorphic computing to emulate the human brain’s behaviour. This cutting-edge science allows MBRDI to enhance the capabilities of its vehicles, enabling them to understand and respond to the driver’s behaviour, ensuring safety and comfort. “Innovation learning encompasses various aspects, including deep learning, software patterns and techniques such as design thinking, all aimed at fostering innovation within our organisation. These practices serve as key pillars in our pursuit of continuous innovation,” enunciates Medhekar.

1688395900133.png


“Innovation learning encompasses various aspects, including deep learning, software patterns and techniques such as design thinking, all aimed at fostering innovation within our organisation. These practices serve as key pillars in our pursuit of continuous innovation.” Mahesh Medhekar, vice president-human resources, Mercedes-Benz Research and Development India
 
  • Like
  • Fire
  • Love
Reactions: 22 users
Do we know where nViso gets their MCUs?

Akida 1500 could work with MCUs. All that is needed is sufficient processing power and memory to enable configuration of Akida and to handle the output from Akida.

We know Renesas has a licence for 2 nodes (8 NPUs, at 4 NPUs per node), but I don't know how many nodes are required for nViso's functions like face recognition.

nViso is a BrainChip partner, so it may not need a licence:


View attachment 39219

BrainChip and NVISO are targeting battery-powered applications in robotics and mobility devices addressing the need for high levels of AI performance in ultra low power environments. Implementing NVISO’s AI solutions with BrainChip’s Akida drives next generation solutions.



This is another signpost to Akida:

NVISO NEURO MODELS™​

ULTRA-EFFICIENT DEEP LEARNING AT THE EDGE​

NVISO Neuro Models™ are purpose-built for a new class of ultra-efficient machine learning processors designed for ultra-low power edge devices. Supporting a wide range of heterogeneous computing platforms ranging from CPU, GPU, DSP, NPU, and neuromorphic computing, they reduce the high barriers-to-entry into the AI space through cost-effective standardized AI Apps which work optimally at the extreme edge (low power, on-device, without requiring an internet connection). NVISO uses low and mixed precision activations and weights data types (1 to 8-bit) combined with state-of-the-art unstructured sparsity to reduce memory bandwidth and power consumption. Proprietary compact network architectures can be fully sequential, suitable for ultra-low power mixed signal inference engines, and fully interoperable with both GPUs and neuromorphic processors.
Wondered if you were fishing in Renesas' pond :)

Not inconceivable.... they taped out for an unnamed 3rd party :unsure:
 
  • Like
  • Fire
  • Thinking
Reactions: 10 users