BRN Discussion Ongoing

Nothing to suggest Akida - the highlighted bits suggest not Akida.

[0255] FIG. 22 is a block diagram of a neuromorphic processor 2200, according to at least one embodiment. In at least one embodiment, neuromorphic processor 2200 may receive one or more inputs from sources external to neuromorphic processor 2200. In at least one embodiment, these inputs may be transmitted to one or more neurons 2202 within neuromorphic processor 2200. In at least one embodiment, neurons 2202 and components thereof may be implemented using circuitry or logic, including one or more arithmetic logic units (ALUs). In at least one embodiment, neuromorphic processor 2200 may include, without limitation, thousands or millions of instances of neurons 2202, but any suitable number of neurons 2202 may be used. In at least one embodiment, each instance of neuron 2202 may include a neuron input 2204 and a neuron output 2206. In at least one embodiment, neurons 2202 may generate outputs that may be transmitted to inputs of other instances of neurons 2202. For example, in at least one embodiment, neuron inputs 2204 and neuron outputs 2206 may be interconnected via synapses 2208.

[0256] In at least one embodiment, neurons 2202 and synapses 2208 may be interconnected such that neuromorphic processor 2200 operates to process or analyze information received by neuromorphic processor 2200. In at least one embodiment, neurons 2202 may transmit an output pulse (or “fire” or “spike”) when inputs received through neuron input 2204 exceed a threshold. In at least one embodiment, neurons 2202 may sum or integrate signals received at neuron inputs 2204. For example, in at least one embodiment, neurons 2202 may be implemented as leaky integrate-and-fire neurons, wherein if a sum (referred to as a “membrane potential”) exceeds a threshold value, neuron 2202 may generate an output (or “fire”) using a transfer function such as a sigmoid or threshold function. In at least one embodiment, a leaky integrate-and-fire neuron may sum signals received at neuron inputs 2204 into a membrane potential and may also apply a decay factor (or leak) to reduce a membrane potential. In at least one embodiment, a leaky integrate-and-fire neuron may fire if multiple input signals are received at neuron inputs 2204 rapidly enough to exceed a threshold value (i.e., before a membrane potential decays too low to fire). In at least one embodiment, neurons 2202 may be implemented using circuits or logic that receive inputs, integrate inputs into a membrane potential, and decay a membrane potential. In at least one embodiment, inputs may be averaged, or any other suitable transfer function may be used. Furthermore, in at least one embodiment, neurons 2202 may include, without limitation, comparator circuits or logic that generate an output spike at neuron output 2206 when result of applying a transfer function to neuron input 2204 exceeds a threshold. In at least one embodiment, once neuron 2202 fires, it may disregard previously received input information by, for example, resetting a membrane potential to 0 or another suitable default value. In at least one embodiment, once membrane potential is reset to 0, neuron 2202 may resume normal operation after a suitable period of time (or refractory period).
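(A quick aside for anyone who wants the leaky integrate-and-fire behaviour in [0256] made concrete: below is a minimal discrete-time sketch in Python. The threshold, leak factor and refractory length are illustrative values I've assumed; the patent doesn't specify any.)

```python
# Minimal leaky integrate-and-fire neuron per [0256]: integrate inputs into a
# membrane potential, apply a decay factor (leak), fire on threshold, reset,
# then hold off for a refractory period. All constants are assumed values.

class LeakyIntegrateAndFireNeuron:
    def __init__(self, threshold=1.0, leak=0.9, refractory_steps=2):
        self.threshold = threshold
        self.leak = leak                  # multiplicative decay per time step
        self.refractory_steps = refractory_steps
        self.potential = 0.0              # membrane potential
        self.refractory = 0               # steps left in refractory period

    def step(self, input_sum):
        """Advance one time step with summed synaptic input; return True on a spike."""
        if self.refractory > 0:           # disregard inputs while refractory
            self.refractory -= 1
            return False
        self.potential = self.potential * self.leak + input_sum
        if self.potential > self.threshold:
            self.potential = 0.0          # reset to a default value after firing
            self.refractory = self.refractory_steps
            return True
        return False

neuron = LeakyIntegrateAndFireNeuron()
# Inputs arriving rapidly enough beat the leak and cross the threshold:
print([neuron.step(x) for x in [0.4, 0.4, 0.4, 0.0, 0.0, 0.4]])
# [False, False, True, False, False, False]
```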

[0342] Tensor cores are configured to perform matrix operations in accordance with at least one embodiment. In at least one embodiment, one or more tensor cores are included in processing cores 3010. In at least one embodiment, tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing. In at least one embodiment, each tensor core operates on a 4×4 matrix and performs a matrix multiply and accumulate operation D = A × B + C, where A, B, C, and D are 4×4 matrices.
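(The operation in [0342] is just a fused multiply-accumulate over 4×4 matrices; the NumPy snippet below is my illustrative equivalent, not NVIDIA's API.)

```python
import numpy as np

# Illustrative equivalent of the tensor-core operation in [0342]:
# D = A × B + C, with A, B, C and D all 4×4 matrices.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
D = A @ B + C   # matrix multiply, then accumulate
assert D.shape == (4, 4)
```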

[0359] In at least one embodiment, training pipeline 3204 (FIG. 32) may include a scenario where facility 3102 is training their own machine learning model, or has an existing machine learning model that needs to be optimized or updated. In at least one embodiment, imaging data 3108 generated by imaging device(s), sequencing devices, and/or other device types may be received. In at least one embodiment, once imaging data 3108 is received, AI-assisted annotation 3110 may be used to aid in generating annotations corresponding to imaging data 3108 to be used as ground truth data for a machine learning model. In at least one embodiment, AI-assisted annotation 3110 may include one or more machine learning models (e.g., convolutional neural networks (CNNs)) that may be trained to generate annotations corresponding to certain types of imaging data 3108 (e.g., from certain devices) and/or certain types of anomalies in imaging data 3108.

This is the Akida NPU:

[Attachment: Akida NPU block diagram]
There is no sigmoid function.

The synapse elements 105, 106, 113 are closely tied to the neuron circuit elements, including via the learning feedback loop.

Just a refresher on Akida - these changes were implemented after customer feedback (remember when the whole team was burning the candle at both ends?):

WO2020092691A1 AN IMPROVED SPIKING NEURAL NETWORK

[0038] But conventional SNNs can suffer from several technological problems. First, conventional SNNs are unable to switch between convolution and fully connected operation. For example, a conventional SNN may be configured at design time to use a fully-connected feedforward architecture to learn features and classify data. Embodiments herein (e.g., the neuromorphic integrated circuit) solve this technological problem by combining the features of a CNN and a SNN into a spiking convolutional neural network (SCNN) that can be configured to switch between a convolution operation or a fully-connected neural network function. The SCNN may also reduce the number of synapse weights for each neuron. This can also allow the SCNN to be deeper (e.g., have more layers) than a conventional SNN with fewer synapse weights for each neuron.

Embodiments herein further improve the convolution operation by using a winner-take-all (WTA) approach for each neuron acting as a filter at a particular position of the input space. This can improve the selectivity and invariance of the network. In other words, this can improve the accuracy of an inference operation.

[0039] Second, conventional SNNs are not reconfigurable. Embodiments herein solve this technological problem by allowing the connections between neurons and synapses of a SNN to be reprogrammed based on a user defined configuration. For example, the connections between layers and neural processors can be reprogrammed using a user defined configuration file.

[0040] Third, conventional SNNs do not provide buffering between different layers of the SNN. But buffering can allow for a time delay for passing output spikes to a next layer. Embodiments herein solve this technological problem by adding input spike buffers and output spike buffers between layers of a SCNN.

[0041] Fourth, conventional SNNs do not support synapse weight sharing. Embodiments herein solve this technological problem by allowing kernels of a SCNN to share synapse weights when performing convolution. This can reduce memory requirements of the SCNN.

[0042] Fifth, conventional SNNs often use 1-bit synapse weights. But the use of 1-bit synapse weights does not provide a way to inhibit connections. Embodiments herein solve this technological problem by using ternary synapse weights. For example, embodiments herein can use two-bit synapse weights. These ternary synapse weights can have positive, zero, or negative values. The use of negative weights can provide a way to inhibit connections which can improve selectivity. In other words, this can improve the accuracy of an inference operation.
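(To see why two bits suffice: a ternary weight only needs the values -1, 0 and +1, and the -1 is what inhibits. The toy example below is my illustration, not BrainChip's actual encoding.)

```python
import numpy as np

# Ternary synapse weights: +1 excites, 0 disconnects, -1 inhibits.
# The encoding and values here are assumed for illustration only.
weights = np.array([+1, 0, -1, +1])   # one two-bit ternary weight per input
spikes  = np.array([ 1, 1,  1, 0])    # binary input spikes this time step
contribution = int(spikes @ weights)  # net change to the membrane potential
print(contribution)                   # 0: the inhibitory synapse cancels an excitatory one
```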

[0043] Sixth, conventional SNNs do not perform pooling. This results in increased memory requirements for conventional SNNs. Embodiments herein solve this technological problem by performing pooling on previous layer outputs. For example, embodiments herein can perform pooling on a potential array outputted by a previous layer. This pooling operation reduces the dimensionality of the potential array while retaining the most important information.
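(A sketch of what pooling over a potential array might look like, assuming 2×2 max pooling; the patent fixes neither the window size nor the pooling type.)

```python
import numpy as np

# 2x2 max pooling over a potential array output by a previous layer.
# Window size and max (vs. average) pooling are my assumptions.
potentials = np.arange(16.0).reshape(4, 4)
pooled = potentials.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)   # [[ 5.  7.] [13. 15.]]: dimensionality down, strongest responses kept
```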

[0044] Seventh, conventional SNNs often store spikes in a bit array. Embodiments herein provide an improved way to represent and process spikes. For example, embodiments herein can use a connection list instead of a bit array. This connection list is optimized such that each input layer neuron has a set of offset indexes that it must update. This enables embodiments herein to only have to consider a single connection list to update all the membrane potential values of connected neurons in the current layer.
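(Here is how I read the connection-list idea in [0044], sketched with assumed names and structure: each input neuron maps straight to the offsets it must update, so one lookup updates every connected membrane potential.)

```python
# Assumed sketch of the connection list in [0044]: each input-layer neuron
# carries the offset indexes (and weights) of the neurons it must update.
connection_list = {
    0: [(2, +1), (5, -1)],   # input neuron 0 -> (target offset, weight)
    1: [(2, +1), (3, +1)],
}

potentials = [0] * 8         # membrane potentials of the current layer

def process_spike(source):
    # One list lookup updates all connected membrane potentials.
    for offset, weight in connection_list.get(source, []):
        potentials[offset] += weight

process_spike(0)
process_spike(1)
print(potentials)            # [0, 0, 2, 1, 0, -1, 0, 0]
```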

[0045] Eighth, conventional SNNs often process spike by spike. In contrast, embodiments herein can process packets of spikes. This can cause the potential array to be updated as soon as a spike is processed. This can allow for greater hardware parallelization.

[0046] Finally, conventional SNNs do not provide a way to import learning (e.g., synapse weights) from an external source. For example, SNNs do not provide a way to import learning performed offline using backpropagation. Embodiments herein solve this technological problem by allowing a user to import learning performed offline into the neuromorphic integrated circuit.

[0047] In some embodiments, a SCNN can include one or more neural processors. Each neural processor can be interconnected through a reprogrammable fabric. Each neural processor can be reconfigurable. Each neural processor can be configured to perform either convolution or classification in fully connected layers.

[0048] Each neural processor can include a plurality of neurons and a plurality of synapses. The neurons can be simplified Integrate and Fire (I&F) neurons. The neurons and synapses can be interconnected through the reprogrammable fabric. Each neuron of the neural processor can be implemented in hardware or software. A neuron implemented in hardware can be referred to as a neuron circuit.

[0049] In some embodiments, each neuron can use an increment or decrement function to set the membrane potential value of the neuron. This can be more efficient than using an addition function of a conventional I&F neuron.

[0050] In some embodiments, a SCNN can use different learning functions. For example, a SCNN can use a STDP learning function. In some other embodiments, the SCNN can implement an improved version of the STDP learning function using synapse weight swapping. This improved STDP learning function can offer built-in homeostasis (e.g., stable learned weights) and improved efficiency.

[0051] In some embodiments, an input to a SCNN is derived from an audio stream. An Analog to Digital (A/D) converter can convert the audio stream to digital data. The A/D converter can output the digital data in the form of Pulse Code Modulation (PCM) data. A data to spike converter can convert the digital data to a series of spatially and temporally distributed spikes representing the spectrum of the audio stream.
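(One plausible reading of [0051], sketched below: take the spectrum of a PCM frame and rate-code each frequency bin into spikes. The FFT front end and the rate coding are my assumptions; the paragraph only says spatially and temporally distributed spikes representing the spectrum.)

```python
import numpy as np

# Assumed data-to-spike converter for PCM audio per [0051]: spectrum via FFT,
# then each bin's magnitude becomes a spike probability per time step.
def pcm_to_spikes(pcm_frame, n_steps=16, seed=0):
    spectrum = np.abs(np.fft.rfft(pcm_frame))
    rates = spectrum / spectrum.max()                 # normalise magnitudes to [0, 1]
    rng = np.random.default_rng(seed)
    return rng.random((n_steps, rates.size)) < rates  # (time, frequency) spike grid

frame = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 64, endpoint=False))  # toy tone
spikes = pcm_to_spikes(frame)
print(spikes.shape, int(spikes.sum()))                # spatially and temporally distributed
```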

[0052] In some embodiments, an input to a SCNN is derived from a video stream. The A/D converter can convert the video stream to digital data. For example, the A/D converter can convert the video stream to pixel information in which the intensity of each pixel is expressed as a digital value. A digital camera can provide such pixel information. For example, the digital camera can provide pixel information in the form of three 8-bit values for red, green and blue pixels. The pixel information can be captured and stored in memory. The data to spike converter can convert the pixel information to spatially and temporally distributed spikes by means of sensory neurons that simulate the actions of the human visual tract.

[0053] In some embodiments, an input to a SCNN is derived from data in the shape of binary values. The data to spike converter can convert the data in the shape of binary values to spikes by means of Gaussian receptive fields. As would be appreciated by a person of ordinary skill in the art, the data to spike converter can convert the data in the shape of binary values to spikes by other means.
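(Gaussian receptive fields, as mentioned in [0053], are a standard way to turn a scalar into a population of spikes: each neuron has a Gaussian tuning curve over the value range, and stronger responses spike earlier. The centres, width and time-coding rule below are my assumptions.)

```python
import numpy as np

# Encode a scalar into per-neuron spike times with Gaussian receptive fields,
# per [0053]. Centres, sigma and strong-response-spikes-earlier are assumed.
def gaussian_receptive_fields(value, n_neurons=8, lo=0.0, hi=1.0, max_t=10):
    centres = np.linspace(lo, hi, n_neurons)    # evenly spaced tuning curves
    sigma = (hi - lo) / (n_neurons - 1)         # shared receptive-field width
    response = np.exp(-0.5 * ((value - centres) / sigma) ** 2)
    return np.round((1.0 - response) * max_t).astype(int)  # strong -> early spike

print(gaussian_receptive_fields(0.3))           # one spike time per neuron
```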

[0054] In some embodiments, a digital vision sensor (e.g., a Dynamic Vision Sensor (DVS) supplied by iniVation AG or another manufacturer) is connected to a spike input interface of a SCNN. The digital vision sensor can transmit pixel event information in the form of spikes. The digital vision sensor can encode the spikes over an Address-event representation (AER) bus. Pixel events can occur when a pixel is increased or decreased in intensity.



Special guest appearance:

I apply the nappy theory to whether it is AKIDA or not.

AKIDA is all grown up and does not leak when it fires neurons.

All the others are leaky and need nappies except for Loihi which still leaks but is now in trainers. 😂🤣😂🤣🤡😂🤣

My opinion only DYOR
FF

AKIDA BALLISTA
 
Just thinking out loud: “I don’t expect a late Xmas announcement.”

If I was sitting on something special I would time it to be announced at CES 2023 which is when you would get the most eyeballs/media on it!

Brainchip has had an excellent year establishing all the partnerships setting the scene for growth in 2023.

Brainchip has achieved a lot already, becoming the first commercially available neuromorphic AI chip/IP. Congratulations to PVDM and all the team! And it will take a hard working team to achieve further greatness so I hope they all enjoy the Xmas break and come back refreshed, happy and raring to go next year! Good things can happen for those who work hard.

The share price will eventually break free of its current sideways trading range. There will be several break-outs as the commercialisation unfolds, and well done to all those who have quietly and patiently held when there were beating drums of doom. I’m content to pick a great company and follow its pathway to success!

Although the SP hasn’t exploded this year the company is in a much stronger position now than it was at the start of the year.



I’m not religious but we actually have a Xmas tree up all year round; every day is Xmas for me!

Every day we should strive to be respectful, have empathy, practice kindness and love. I don’t need a holiday to live those behaviours.

Hoping next year brings everyone their hopes, dreams and good health.

Thanks to @zeeb0t for creating this site.

Thanks to all the valued contributors; without all the shared knowledge and insights I probably wouldn’t be holding the no. of shares I do so it is very much appreciated!

Most of all for my Xmas wish I am hoping Putin gets struck by lightning so Ukraine can have some peace to rebuild their country!

:)
 

Socionext America, Inc.​




Custom SoCs are Critical for the Success of Autonomous Car Makers​



By Rick Fiorenzi on November 10, 2022


Whether automakers and vehicle OEMs will need advanced driver assistance (ADAS) applications to be successful in the future is not a question of "if" but "when." And high-performance custom SoCs will be critical to that success.
In my previous blog post, I talked about how the move toward electric and self-driving vehicles is accelerating and looked at how custom SoCs were becoming pivotal to the success of automakers looking to remain competitive in this shifting landscape.
This time around, I would like to focus on ADAS and self-driving vehicle technology. Next-generation autonomous driving platforms require higher levels of performance to make split-second decisions. A vehicle must quickly comprehend, translate, and accurately perceive its surrounding environment to react safely to changes. Future ADAS and autonomous vehicle implementations demand higher performance, real-time edge computing with AI processing capabilities, and increased bandwidth interfaces to accommodate multiple high-resolution sensors, including radar, LiDAR, and cameras.
Improving the “seeing” or “vision” capabilities of advanced driver assistance systems extends beyond cameras and LiDARs, requiring the integration of smart sensors to handle complex driving scenarios that the auto industry calls Level 4, or “high,” automation. Vehicle OEMs and automakers looking to meet these requirements have two fundamental choices when engineering their solutions: use a custom SoC or plug in an off-the-shelf chip. Read on for an analysis of each option.

Custom SoC Versus Off-the-Shelf Solutions​

Let’s start with some simple definitions:
Custom system-on-chip (SoC) solutions use multi-purpose IP blocks specifically architected and integrated to provide the functions required by their target application and use case. A purpose-designed SoC can help OEMs achieve optimal performance and efficiency levels, reduce size requirements, and lower overall BOM costs.
Standard “off-the-shelf” (OTS) silicon solutions appeal to a broader-based market by meeting general-purpose requirements. As such, OTS silicon devices support functions that are not fully optimized or, in some cases, even utilized in the design, often resulting in a larger footprint, unnecessary power consumption, and performance inefficiency.
When auto OEMs decide whether to go with a custom SoC versus an off-the-shelf product in their design, there are many factors to consider. For example:
  • Are they designing the vehicle for a broad-based market with little differentiation?
  • Which critical IP should they bring in-house versus relying on generic solutions from external vendors?
  • What tradeoffs are there regarding power, performance, size, and cost between custom and general-purpose chips?
Ultimately, automotive vendors must decide what is most suitable based on the available options. The diagram below lists some critical factors to consider when deciding between custom SoC and standard off-the-shelf products.
[Diagram: critical factors to consider when deciding between a custom SoC and standard off-the-shelf products]

Why Custom SoC Solutions are the Optimal Choice when Designing Your Next Automotive Application​

I highlighted several of the compelling benefits provided by SoC solutions above. But custom SoCs also offer particular advantages in two critical areas: competitive differentiation and supply chain security. Let’s briefly touch on each of these.

Custom SoCs and Competitive Differentiation​

Custom SoC solutions provide OEMs and tier-one automakers the opportunity for complete ownership of key differentiating technologies in ADAS and autonomy. Proprietary chips allow companies to develop in-depth knowledge and in-house expertise, enabling greater control of future designs and product implementations.

Custom SoCs and Supply Chain Security​

Supply chain interruptions are a primary concern for auto OEMs today. Unanticipated “black swan” events, such as natural disasters, international border blockades, government sanctions, economic downturns, and geo-political and social unrest, can disrupt the supply flow. Supply of materials is never guaranteed. However, the odds for continued production are more favorable when an OEM or automaker doesn’t have to compete with several other companies all vying for the same off-the-shelf components.
More and more car manufacturers are realizing that general-purpose chips offer features that cater to multiple customers, limiting their product competitiveness and restricting them to the suppliers' timelines and delivery schedules.

A Mini Case Study: Custom SoCs and a Successful Automotive Industry Disruptor​

Now and again, a new company comes along and alters established business models. Netflix did it with the video rental industry. Tesla has done it in the automotive industry.
Tesla is a company that has shattered the traditional automotive business model with its early launch of autonomous driving technology, a direct purchasing program, unconventional automotive designs with large interior displays, and the construction of battery giga-factories. Unlike other automakers, Tesla was also early to recognize the importance of over-the-air (OTA) software updates for adding features to improve safety and performance. Tesla’s success is driving traditional automakers to adapt their playbooks rapidly.
Tesla joined other tech giants like Google, Amazon, Cruise, and many others that have decided to develop their own proprietary autonomous driving platforms. To further that effort, the company started developing proprietary chips in 2016, and custom SoCs have been a fundamental ingredient in Tesla’s success:
  • Tesla was also one of the earliest companies to implement autonomous driving technologies with its 1st generation autopilot launch in 2016.
  • In 2019 at Autonomy Day, Tesla unveiled Hardware 3.0. Elon Musk claimed it was “objectively the best chip in the world.”
  • Earlier in 2022, rumors circulated that Tesla was working with Samsung to develop a new 5nm semiconductor chip that would assist with its autonomous driving software.
To build a self-driving car, automakers must combine hardware, software, and data to train the deep neural networks that allow a vehicle to perceive and move safely through its environment. Deep neural networks, with their algorithms specifically designed to mimic the working of neurons in the human brain, are the artificial intelligence engine that will enable autonomous vehicles. They are the backbone of deep learning. The evolution of Tesla's autopilot and full self-driving features forced carmakers to take a closer look at the use of cameras and ultrasonic sensors.
Tesla acquires a tremendous amount of data from its nearly two million autopilot-enabled vehicles, each equipped with eight camera arrays to generate data to train the neural networks to detect objects, segment images, and measure depth in real time. The car’s onboard supercomputer FSD (full self-driving) chip runs the deep neural networks, analyzing the camera's computer vision inputs in real-time to understand, make decisions, and move the car through the environment.
As AI becomes more critical and costly to deploy, other companies, such as Google and Amazon, are also designing custom chips.
In addition to being a crucial component of full self-driving capabilities, autonomous vehicle OEMs aim to develop proprietary chips to differentiate themselves from their competition.

Choosing the Right Partner​

Creating a proprietary chip requires a complex, highly structured framework with a complete support system for addressing each phase of the development process. Most companies seeking to design custom chips do not have the full capabilities in-house. They require assistance from highly specialized companies with extensive engineering skills, know-how, and experience to support end-to-end system-level SoC design, development, and implementation.
A company such as Socionext offers the right combination of IPs, design expertise, and support to implement large-scale, fully customizable automotive SoC solutions to meet the most demanding and rigorous automotive application performance requirements.

Socionext has an in-house automotive design team to help to facilitate the early development and large-scale production of high-performance SoCs for automotive applications. As a leading “Solution SoC” provider, Socionext is committed to using leading-edge technologies, such as 5nm and 7nm processes, to produce automotive-grade SoCs that ensure functional safety while accelerating software development and system verification.
Contact us today to learn more about how Socionext can assist with your next custom SoC development.

About the Author​


Rick Fiorenzi
Rick Fiorenzi is a Field Application Engineer at Socionext America. Rick's technical experience in the automotive electronics industry spans over two decades ranging from small to large OEM, Tier1 and Tier2 companies. Rick holds a Bachelor of Science in Electrical Engineering (BSEE) from the University of Michigan in Dearborn.


Synonyms of Critical: crucial, decisive, momentous, deciding factor
 
From Xailient

 

Diogenese

Top 20
Tight-fisted shareholders:

After the first 50 minutes, trading has been quite sporadic - even at these give-away prices.
 
@Diogenese

Appreciate your time & education (y)

Guess it's back in the.....

 

cosors

👀
Ok....try a different track haha

Mathworks had their Automotive conference recently in Oct.

Was a few presentations (slides & vids) and obviously revolves around Matlab, Simulink etc.

NXP, MAN, Stellantis amongst others there. Didn't notice NVIDIA on glance through.

I did find MB had a couple of presso's and one looked interesting.

Mention of work being done... e.g. LSTMs.

How nice would it be for MB to have a prototype Akida 2.0 (LSTM) for test & design work?

Though, I might need to keep that box handy if @Diogenese reads this post :LOL:

Links which have the vids and slide packs below.




Couple snips.


[Attachments: four slide snips]
 

jk6199

Regular
Merry Christmas to me: even with Socio's news, these prices are good value in my opinion, and I just bought more.

Hope everyone has a safe and very Merry Christmas & a safe and Happy New Year.

Lets hope the couch surfing groupies of this site can get a great update to their lounges in 2023.
 
Don’t suppose Foxconn and Socionext having a long association and multiple product partnerships would have any implications for Brainchip. 😂🤣😂🤡😂🤣😎😵‍💫😇
Just think how much more power efficient this could be with a little bit of AKIDA by its side:


A little bit of AKIDA in my life
A little bit of AKIDA by my side
A little bit of AKIDA’s all I need
A little bit of AKIDA what I see
A little bit of AKIDA in the sun
A little bit of AKIDA all night long
A little bit of AKIDA, here I am
A little bit of AKIDA makes SOCIONEXT a fan (ah)

My opinion only DYOR
FF

AKIDA BALLISTA
 

HopalongPetrovski

I'm Spartacus!


Not sure I’d want to be involved with Foxconn via Socionext; not sure they’re big enough to worry about:

In 2018, Foxconn achieved US$175 billion in revenue and has received an array of international accolades and recognition. The company was ranked 23rd in the Fortune Global 500 rankings in 2018 and 215th in the Forbes ranking of the World’s Best Employers that year. In 2019, the company was ranked 21st for Sales and was ranked 123rd overall in the Forbes Global 2000.

😂
 

BaconLover

Founding Member


Merry Christmas to you all.

When we get our Christmas present 🎁 don't forget to have a plan. We don't want to be blindsided when the quick run-up comes along.

Enjoy HIS birthday and have a great New Year 🥳🥳🥳🥳
 
This article preceded last years CES event:

In-Car Voice Makes Itself Heard at CES​

BY PYMNTS | JANUARY 4, 2022


Before CES even opened to the industry, in-car voice technology was making itself heard. On Monday (Jan. 3), the first of two media days preceding the opening of the show, connected mobility supplier Cerence received an award for its in-car voice assistant, Mercedes-Benz released details about two voice technologies featured on its new prototype electric car and the Consumer Technology Association (CTA) projected that auto tech will grow 7% in 2022.
Read more: Connected Cars to Strut Their Stuff at CES
These companies are among more than 200 from the transportation and vehicle technology industry — a record number for the event — represented at this year’s edition of the annual tech event. A total of 2,200 companies are taking part in person or in the event’s digital venues.
Proactively Delivering Information to Drivers
On Monday, Cerence received a CES 2022 Innovation Award for Cerence Co-Pilot, its new in-car voice assistant that is powered by artificial intelligence (AI).
The assistant not only responds to voice commands, but also uses data from the car’s sensors to understand situations inside and outside the vehicle and proactively deliver information when it’s needed. For example, as the vehicle nears the driver’s home, Cerence Co-Pilot may ask if they’d like it to initiate a smart home routine. This in-car voice assistant also integrates with cloud services.
“AI is deeply fundamental to the future of mobility, and we see our role as critical, not only in bringing convenient, enjoyable and safe experiences to drivers, but also giving OEMs the ability to maintain control of their brands and data while still giving drivers the secure, seamless and personalized connected experiences they want,” Cerence CEO Stefan Ortmanns said in a press release.
Sounding Impressively Real, Natural and Intuitive
On the same day, Mercedes-Benz previewed two voice technologies that will be displayed on its VISION EQXX, a research prototype car featuring an electric drivetrain and advanced software. The automaker says this prototype demonstrates its transformation into “an all-electric and software-driven company.”
One voice technology featured on the VISION EQXX makes it “fun to talk to,” Mercedes-Benz says. Developed in collaboration with voice synthesis experts Sonantic and with the help of machine learning, this version of the “Hey Mercedes” voice assistant has a distinctive character and personality.
“As well as sounding impressively real, the emotional expression places the conversation between driver and car on a completely new level that is more natural and intuitive,” Mercedes-Benz said in a press release.
The second voice-related technology previewed by Mercedes-Benz features neuromorphic computing, a form of information processing that reduces energy consumption, and AI software and hardware from BrainChip that is five to 10 times more efficient than conventional voice control.
“Although neuromorphic computing is still in its infancy, systems like these will be available on the market in just a few years,” Mercedes-Benz said. “When applied on scale throughout a vehicle, they have the potential to radically reduce the energy needed to run the latest AI technologies.”
Growing Demand for Automotive Tech
Also on Jan. 3, CTA announced that it projects that factory-installed automotive tech will grow 7% this year — from $14.9 billion in shipment revenues in 2021 to $16 billion in 2022 — driven by the beginning of a recovery in chip supplies as well as greater demand.
“Demand for automotive tech is increasing as auto manufacturers produce more and continue to develop advanced driver assistance systems that make vehicles more efficient and safer,” the association said when announcing the release of its twice-yearly “U.S. Consumer Technology One-Year Industry Forecast.”
In other vehicle tech news from CES, VinFast announced that customers in the U.S. and Vietnam will be able to make reservations for its first two electric vehicle models beginning Wednesday (Jan. 5) and that it will use blockchain technologies to certify reservations, payments and eventually vehicle ownership.
 

alwaysgreen

Top 20
Most of all for my Xmas wish I am hoping Putin gets struck by lightning so Ukraine can have some peace to rebuild their country!

:)
Isn't he riddled with cancer?
 

Quatrojos

Regular
I’m hoping for a trading halt before CES. Surely, if there’s sensitive info, this must occur…
 
This is a positive view of where Brainchip is going in automotive. Mercedes Benz sells on average around 3 million passenger vehicles a year, and Jerome Nadel places 70 AKIDA chips in each vehicle, pre-processing sensor inputs before passing them on as metadata. If Blind Freddie's mental arithmetic is correct, that is 210 million AKIDA smart sensors.


"BrainChip Akida


Mercedes-Benz's EQXX concept car, which debuted at CES earlier this year, uses BrainChip's Akida neuromorphic processor for in-vehicle keyword recognition. Billed as "the most efficient car Mercedes has ever made," the car utilizes neuromorphic technology that consumes less power than a deep learning-based keyword spotting system. That's crucial for a car with a range of 620 miles, or 167 miles more than Mercedes' flagship electric car, the EQS.

Mercedes said at the time that BrainChip's solution was five to 10 times more efficient than traditional voice controls at recognizing the wake word "Hey Mercedes."

Application of SNN in Vehicle Field


Mercedes said, “Although neuromorphic computing is still in its infancy, systems like these will be available on the market in just a few years. When applied at scale throughout a vehicle, they have the potential to radically reduce the energy needed to run the latest AI technologies.”


BrainChip's CMO Jerome Nadel said: "Mercedes is focused on big issues like battery management and transmission, but every milliwatt counts, and when you think about energy efficiency, even the most basic inference, like spotting a keyword, is important."

A typical car could have as many as 70 different sensors by 2022, Nadel said. For cockpit applications, these sensors can enable face detection, gaze assessment, emotion classification, and more.

He said: “From a system architecture perspective, we can do a 1:1 approach where there is a sensor that will do some preprocessing and then the data will be forwarded. The AI will do inference near the sensor... instead of the full array of data from the sensors, the inference metadata is passed forward.”

The idea is to minimize the size and complexity of packets sent to AI accelerators, while reducing latency and minimizing power consumption. Each vehicle will likely have 70 Akida chips or sensors with Akida technology, each of which will be a "low-cost part that you won't notice at all," Nadel said. He noted that attention needs to be paid to the BOM of all these sensors.


Application of SNN in Vehicle Field


BrainChip expects to have its neuromorphic processor next to every sensor on the vehicle

Going forward, Nadel said, neuromorphic processing will also be used in ADAS and autonomous driving systems. This has the potential to reduce the need for other types of power-hungry AI accelerators.

"If every sensor could have Akida configured on one or two nodes, it would do adequate inference, and the data passed would be an order of magnitude less, because that would be inference metadata...that would affect the servers you need," he said. power."


BrainChip's Akida chip accelerates SNNs (spiking neural networks) and CNNs (by converting them to SNNs). It's not tailored for any specific use case or sensor, so it can be paired with visual sensing for face recognition or people detection, or with audio applications like speaker ID. BrainChip has also demonstrated Akida with smell and taste sensors, although it's hard to imagine how these could be used in cars (perhaps to detect air pollution or fuel quality through smell and taste).

Akida is set up to handle SNNs or deep learning CNNs that have been converted to SNNs. Unlike native spiking networks, converted CNNs preserve some spike-magnitude information, so they may require 2- or 4-bit computation. However, this approach allows exploiting the properties of CNNs, including their ability to extract features from large datasets. Both types of networks can be updated at the edge using STDP. In the case of Mercedes-Benz, this might mean retraining the network after deployment to discover more or different keywords.

Application of SNN in Vehicle Field


According to Autocar, Mercedes-Benz confirmed that "many innovations" from the EQXX concept car, including "specific components and technologies," will be used in the production model. There's no word yet on whether new Mercedes-Benz models will feature artificial brains."

I do hope you read the whole post and not just the orange text😂🤣😂🤣 - (🐫x1000)

My opinion only DYOR
FF

AKIDA BALLISTA
 

buena suerte :-)

BOB Bank of Brainchip
If my wife reads this she will confirm that I am a camel.

Each vehicle will likely have 70 Akida chips or sensors with Akida technology, each of which will be a "low-cost part that you won't notice at all," Nadel said. He noted that attention needs to be paid to the BOM of all these sensors.

Awesome post FF ..... :ROFLMAO::ROFLMAO::ROFLMAO::ROFLMAO:😝😝 🤪🤪
 

ChipMan

Founding Member
Not sure I’d want to be involved with Foxconn via Socionext; not sure they’re big enough to worry about:

In 2018, Foxconn achieved US$175 billion in revenue and has received an array of international accolades and recognition. The company was ranked 23rd in the Fortune Global 500 rankings in 2018 and 215th in the Forbes ranking of the World’s Best Employers that year. In 2019, the company was ranked 21st for Sales and was ranked 123rd overall in the Forbes Global 2000.

😂
Foxconn getting into chip making


Foxconn and Vedanta to build $19bn India chip factory​

Published 14 September

[Image caption: Vedanta and Foxconn announce $19.5bn chip factory in India]
Foxconn and Vedanta have announced a $19.5bn (£16.9bn) plan to build one of the first chipmaking factories in India.
The Taiwanese firm and the Indian mining giant are tying up as the government pushes to boost chip manufacturing in the country.
Prime Minister Narendra Modi's government announced a $10bn package last year to attract investors.
The facility, which will be built in Mr Modi's home state of Gujarat, has been promised incentives.
Vedanta's chairman Anil Agarwal said they were still on the lookout for a site - about 400 acres of land - close to Gujarat's capital, Ahmedabad.

But both Indian and foreign firms have struggled in the past to acquire large tracts of land for projects. And experts say that despite Mr Modi's signature 'Make in India' policy - designed to attract global manufacturers - challenges remain when it comes to navigating the country's red tape.

Gujarat Chief Minister Bhupendrabhai Patel, however, said the project "will be met with red carpet... instead of any red tapism".
The project is expected to create 100,000 jobs in the state, which is headed for elections in December, where the BJP is facing stiff competition from opposition parties.
According to the Memorandum of Understanding, the facility is expected to start manufacturing chips within two years.
"India's own Silicon Valley is a step closer now," Mr Agarwal said in a tweet.
India has vowed to spend $30bn to overhaul its tech industry. The government said it will also expand incentives beyond the initial $10 billion for chipmakers in order to become less reliant on chip producers in places like Taiwan, the US and China.
"Gujarat has been recognized for its industrial development, green energy, and smart cities. The improving infrastructure and the government's active and strong support increases confidence in setting up a semiconductor factory," according to Brian Ho, a vice president of Foxconn Semiconductor Group.

Foxconn is the technical partner. Vedanta is financing the project as it looks to diversify its investments into the tech sector.
Vedanta is the third company to announce plans to build a chip plant in India. ISMC and Singapore-based IGSS Ventures have also signed deals to build semiconductor plants in the country over the next five years.
 