BRN Discussion Ongoing

manny100

Top 20
Here is a follow-up to my post on Ericsson (see above).

When I recently searched for info on Ericsson’s interest in neuromorphic technology beyond the Dec 2023 paper, in which six Ericsson researchers described how they had built a prototype of an AI-enabled ZeroEnergy-IoT device utilising Akida, I came across not only an Ericsson Senior Researcher for Future Computing Platforms who was very much engaged with Intel’s Loihi (he even gave a presentation at the Intel Neuromorphic Research Community’s Virtual Summer 2023 Workshop), but also an Intel podcast with Ericsson’s VP of Emerging Technologies, Mischa Dohler.

I also spotted the following LinkedIn post by a Greek lady, who had had a successful career at Ericsson spanning more than 23 years before taking the plunge into self-employment two years ago:

View attachment 61042

View attachment 61043





Since Maria Boura concluded her post by sharing that very Intel podcast with Mischa Dohler mentioned earlier, my gut feeling was that those Ericsson 6G researchers she had talked to at MWC (Mobile World Congress) 2024 in Barcelona at the end of February had most likely been collaborating with Intel, but a quick Google search didn’t come up with any results at the time I first saw that post of hers back in March.

Then, last night, while reading an article on Intel’s newly revealed Hala Point (https://www.eejournal.com/industry_...morphic-system-to-enable-more-sustainable-ai/), there it was - the undeniable evidence that those Ericsson researchers had indeed been utilising Loihi 2:

“Advancing on its predecessor, Pohoiki Springs, with numerous improvements, Hala Point now brings neuromorphic performance and efficiency gains to mainstream conventional deep learning models, notably those processing real-time workloads such as video, speech and wireless communications. For example, Ericsson Research is applying Loihi 2 to optimize telecom infrastructure efficiency, as highlighted at this year’s Mobile World Congress.”

The blue link connects to the following article on the Intel website, published yesterday:


Ericsson Research Demonstrates How Intel Labs’ Neuromorphic AI Accelerator Reduces Compute Costs​



Philipp_Stratmann
Employee
04-17-2024
Philipp Stratmann is a research scientist at Intel Labs, where he explores new neural network architectures for Loihi, Intel’s neuromorphic research AI accelerator. Co-author Péter Hága is a master researcher at Ericsson Research, where he leads research activities focusing on the applicability of neuromorphic and AI technologies to telecommunication tasks.

Highlights
  • Using neuromorphic computing technology from Intel Labs, Ericsson Research is developing custom telecommunications AI models to optimize telecom architecture.
  • Ericsson Research developed a radio receiver prototype for Intel’s Loihi 2 neuromorphic AI accelerator based on neuromorphic spiking neural networks, which reduced the data communication by 75 to 99% for energy efficient radio access networks (RANs).
  • As a member of Intel’s Neuromorphic Research Community, Ericsson Research is searching for new AI technologies that provide energy efficiency and low latency inference in telecom systems.

Using neuromorphic computing technology from Intel Labs, Ericsson Research is developing custom telecommunications artificial intelligence (AI) models to optimize telecom architecture. Ericsson currently uses AI-based network performance diagnostics to analyze communications service providers’ radio access networks (RANs) to resolve network issues efficiently and provide specific parameter change recommendations. At Mobile World Congress (MWC) Barcelona 2024, Ericsson Research demoed a radio receiver algorithm prototype targeted for Intel’s Loihi 2 neuromorphic research AI accelerator, demonstrating a significant reduction in computational cost to improve signals across the RAN.

In 2021, Ericsson Research joined the Intel Neuromorphic Research Community (INRC), a collaborative research effort that brings together academic, government, and industry partners to work with Intel to drive advances in real-world commercial usages of neuromorphic computing.

Ericsson Research is actively searching for new AI technologies that provide low latency inference and energy efficiency in telecom systems. Telecom networks face many challenges, including tight latency constraints driven by the need for data to travel quickly over the network, and energy constraints due to mobile system battery limitations. AI will play a central role in future networks by optimizing, controlling, and even replacing key components across the telecom architecture. AI could provide more efficient resource utilization and network management as well as higher capacity.

Neuromorphic computing draws insights from neuroscience to create chips that function more like the biological brain instead of conventional computers. It can deliver orders of magnitude improvements in energy efficiency, speed of computation, and adaptability across a range of applications, including real-time optimization, planning, and decision-making from edge to data center systems. Intel's Loihi 2 comes with Lava, an open-source software framework for developing neuro-inspired applications.

Radio Receiver Algorithm Prototype

Ericsson Research’s working prototype of a radio receiver algorithm was implemented in Lava for Loihi 2. In the demonstration, the neural network performs a common complex task of recognizing the effects of reflections and noise on radio signals as they propagate from the sender (base station) to the receiver (mobile). Then the neural network must reverse these environmental effects so that the information can be correctly decoded.

During training, researchers rewarded the model based on accuracy and the amount of communication between neurons. As a result, the neural communication was reduced, or sparsified, by 75 to 99% depending on the difficulty of the radio environment and the amount of work needed by the AI to correct the environmental effects on the signal.
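
(A minimal NumPy sketch of how such a sparsity reward could be wired into a training objective, purely illustrative: the function names and the weighting factor below are assumptions of mine, not anything published by Ericsson or Intel.)

import numpy as np

def equalisation_loss(estimated_symbols, true_symbols, spike_counts, sparsity_weight=0.01):
    # Accuracy term: how well the receiver undoes the channel effects.
    accuracy_term = np.mean(np.abs(estimated_symbols - true_symbols) ** 2)
    # Communication term: average spikes emitted; fewer spikes -> lower loss.
    communication_term = np.mean(spike_counts)
    return accuracy_term + sparsity_weight * communication_term

# Example: a quieter network (fewer spikes) scores better at equal accuracy.
rng = np.random.default_rng(0)
tx = rng.normal(size=64) + 1j * rng.normal(size=64)                  # transmitted symbols
rx = tx + 0.05 * (rng.normal(size=64) + 1j * rng.normal(size=64))    # equalised estimate
busy_net = rng.poisson(5.0, size=256)                                # dense spiking
quiet_net = rng.poisson(0.5, size=256)                               # sparse spiking
print(equalisation_loss(rx, tx, busy_net))
print(equalisation_loss(rx, tx, quiet_net))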

Loihi 2 is built to leverage such sparse messaging and computation. With its asynchronous spike-based communication, neurons do not need to compute or communicate information when there is no change. Furthermore, Loihi 2 can compute with substantially less power due to its tight compute-memory integration. This reduces the energy and latency involved in moving data between the compute unit and the memory.

Like the human brain’s biological neural circuits that can intelligently process, respond to, and learn from real-world data at microwatt power levels and millisecond response times, neuromorphic computing can unlock orders of magnitude gains in efficiency and performance.

Neuromorphic computing AI solutions could address the computational power needed for future intelligent telecom networks. Complex telecom computation results must be produced within tight deadlines, down to the millisecond range. Instead of using GPUs that draw substantial amounts of power, neuromorphic computing can provide faster processing and improved energy efficiency.

Emerging Technologies and Telecommunications

Learn more about emerging technologies and telecommunications in this episode of InTechnology. Host Camille Morhardt interviews Mischa Dohler, VP of Emerging Technologies at Ericsson, about neuromorphic computing, quantum computing, and more.



While Ericsson being deeply engaged with Intel even in the area of neuromorphic research doesn’t preclude them from also knocking on BrainChip’s door, this new reveal reaffirms my hesitation about adding Ericsson to our list of companies above the waterline, given the lack of any official acknowledgment by either party to date.

So to sum it up: Ericsson collaborating with Intel in neuromorphic research is a verifiable fact, while an NDA with BrainChip is merely speculation so far.
That said, it would be grossly negligent for the Ericsson team to test only the Intel chip (which, according to BRN, is still in research) when there are others available.
I very much doubt they would not be aware of BRN. I would also be very surprised if Ericsson and other telcos have not had high level contact from BRN.
The word slipped that we were tied with Mercedes. After that you can bet that every Auto is testing AKIDA. No one wants to be caught short.
The use for AKIDA in Autos is obvious.
For the 'layman' it's a little harder to identify Telco company uses.
BRN has worked hard to set up an ecosystem, and together with its enormous industry exposure it's hard to imagine any decent research department of a big business would be unaware of BRN. It would just be a matter of how, if at all, they see AKIDA improving their business.
 
  • Like
  • Love
  • Fire
Reactions: 18 users

Esq.111

Fascinatingly Intuitive.
BrainChip selected by U.S. Air Force Research Laboratory to develop AI-based radar

2024-04-19 17:52

BrainChip, the world's first commercial producer of neuromorphic artificial intelligence chips and IP, today announced that Information Systems Laboratories (ISL) is developing an AI-based radar for the U.S. Air Force Research Laboratory (AFRL) based on its Akida™ Neural Network Processor Research solutions.

ISL is a specialist in expert research and complex analysis, software and systems engineering, advanced hardware design and development, and high-quality manufacturing for a variety of clients worldwide.

Military and Space are always leaders of new technologies 😉


Evening BrainShit ,

1 , if one could find a more salubrious forum name , that would be great.
But yes , truly sense & feel one's torment.

2, Not 100% certain , but pretty sure this is a rehashed announcement from some time ago..... with the present day date attached , purely to confuse.

Regards,
Esq.
 
  • Like
Reactions: 15 users

DK6161

Regular
Me every Friday after the market closes.
Sad Season 3 GIF by The Lonely Island

Oh well. Next week perhaps fellow chippers.
 
  • Haha
  • Like
Reactions: 15 users
It means Apple have their own inferior in-house NN.

They have also purchased Xnor, whose tech involves a trade-off between efficiency and accuracy:

US10691975B2 Lookup-based convolutional neural network 20170717
[0019] … Lookup-based convolutional neural network (LCNN) structures are described that encode convolutions by few lookups to a dictionary that is trained to cover the space of weights in convolutional neural networks. For example, training an LCNN may include jointly learning a dictionary and a small set of linear combinations. The size of the dictionary may naturally traces a spectrum of trade-offs between efficiency and accuracy.
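
(As a rough, hypothetical NumPy sketch of that dictionary idea: each filter stores only a few dictionary indices and coefficients instead of its full weights, and the number of lookups is the efficiency/accuracy knob. Sizes and names below are mine, not Apple's or Xnor's.)

import numpy as np

rng = np.random.default_rng(1)

# Shared dictionary of k basis filters (the trained dictionary in the claim).
k, filter_size = 8, 3 * 3
dictionary = rng.normal(size=(k, filter_size))

def lookup_filter(indices, coeffs):
    # Rebuild one filter from a few dictionary lookups plus linear-combination coefficients.
    return coeffs @ dictionary[indices]

# Each filter stores only a couple of indices and coefficients instead of 9 full weights.
indices = np.array([0, 5])
coeffs = np.array([0.7, -0.3])
w = lookup_filter(indices, coeffs)

# Apply the rebuilt 3x3 filter to an image patch as an ordinary dot product.
patch = rng.normal(size=filter_size)
print(float(w @ patch))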


They have also developed an LLM compression technique which can be applied to edge applications:

US11651192B2 Compressed convolutional neural network models 20190212 Rastegari nee Xnor:

[0019] As described in further detail below, the subject technology includes systems and processes for building a compressed CNN model suitable for deployment on different types of computing platforms having different processing, power and memory capabilities.

Thus Apple have in-house technology which, on the face of it, is capable of implementing GenAI at the edge. That presents a substantial obstacle to Akida getting a look in.
Assuming Apple make the decision to pursue their own way and manage to cobble something together which is workable (which is highly likely, given their resources).

What are the chances of them offering this tech, to their competitors, to also use?...

The problems Apple had with their flagship iPhone 15 overheating last year go to show they are maybe not as "hot shit" as they think they are...

Maybe they are literally...

Happy to change my opinion, if they come to their senses 😛
 
Last edited:
  • Like
  • Fire
Reactions: 8 users

Diogenese

Top 20
Evening BrainShit ,

1 , if one could find a more salubrious forum name , that would be great.
But yes , truly sense & feel one's torment.

2, Not 100% certain , but pretty sure this is a rehashed announcement from some time ago..... with the present day date attached , purely to confuse.

Regards,
Esq.
I wonder how long ISL had been playing with Akida before they got the Air Force radar SBIR?

This patent was filed in mid 2021, so 5 months before the announcement.

US11256988B1 Process and method for real-time sensor neuromorphic processing 20210719



[0003] The invention relates to the general field of deep learning neural networks, which has enjoyed a great deal of success in recent years. This is attributed to more advanced neural architectures that more closely resemble the human brain. Neural networks work with functionalities similar to the human brain. The invention includes both a training cycle and a live (online) operation. The training cycle includes five elements and comprises the build portion of the deep learning process. The training cycle requirements ensure adequate convergence and performance. The live (online) operation includes the live operation of a Spiking Neural Network (SNN) designed by the five steps of the training cycle. The invention is part of a new generation of neuromorphic computing architectures, including Integrated Circuits (IC). This new generation of neuromorphic computing architectures includes IC, deep learning and machine learning.



1. A method of providing real-time sensor neuromorphic processing, the method comprising the steps of:
providing a training cycle and a live operation cycle;
wherein the training cycle includes:
(1) the establishment of a build portion or training cycle of a deep learning process with AI;
wherein the build portion or training cycle begins with the process taking performance requirements in the form of generation of a scenario as inputs;
(2) selecting a sensor model application, with associated performance specifications set forth in the generation of a scenario;
(3) providing a Hi-Fi Radio-Frequency (RF) sensor model which is used to augment any real data for training;
(4) providing a computer model surrogate which is used instead of, or in addition to, a non-surrogate computer model;
(5) the sensor model application being one or more of radar, sonar or LIDAR;
(6) specifying an operating environment details wherein the Hi-Fi sensor model generates requisite training data and/or training environment;
(7) the Hi-Fi sensor model generates training data in a quantity to ensure convergence of DNN neuron weights, wherein as an input enters the node, the input gets multiplied by a weight value and the resulting output is either observed, or passed to the next layer in the neural network;
(8) raw sensor training data is preprocessed into a format suitable for presentation to a DNN from a DNN interface, the sensor model application training data is forwarded to the DNN;
(9) the training environment information is then output from the DNN to a DNN-to-SNN operation through a DNN-to-SNN conversion; and
(10) thereafter, the DNN is converted to an SNN and the SNN outputs the SNN information to an neuromorphic integrated circuit (IC), creating a neuromorphic sensor application;
(11) providing and utilizing a statistical method which ensures reliable performance of the neuromorphic integrated circuit, wherein reliability is 99%
.
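
A generic toy illustration of step (10), converting a trained DNN into an SNN via rate coding (this is the textbook idea only, not ISL's actual method): a ReLU activation is approximated by the firing rate of an integrate-and-fire neuron driven by the same weighted input.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def if_neuron_rate(drive, threshold=1.0, steps=200):
    # Integrate-and-fire neuron: accumulate the constant drive each step,
    # spike and reset (by subtraction) when the membrane potential crosses
    # the threshold. Its firing rate approximates relu(drive) on [0, threshold].
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += drive
        if v >= threshold:
            spikes += 1
            v -= threshold
    return spikes / steps

# A "trained DNN" weight vector and an input, both made up for the example.
w = np.array([0.4, -0.2, 0.1])
x = np.array([0.5, 0.3, 0.9])
drive = float(w @ x)

print("DNN (ReLU) output:", relu(drive))
print("SNN (rate) output:", if_neuron_rate(drive))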
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 39 users

Diogenese

Top 20
Assuming Apple make the decision to pursue their own way and manage to cobble something together which is workable (which is highly likely, given their resources).

What are the chances of them offering this tech, to their competitors, to also use?...

The problems Apple had with their flagship iPhone 15 overheating last year go to show they are maybe not as "hot shit" as they think they are...

Maybe they are literally...

Happy to change my opinion, if they come to their senses 😛
If their potential customers have done their DD, they will be aware of Akida.

Using MACs makes the Apple system much less efficient and slower than Akida.

That would make it a different equation for the potential customers in not having the sunk costs of developing a second-rate in-house system.

As you point out, using the iPhone as a handwarmer did use up the battery quickly.
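
A quick back-of-envelope on the MAC point, with made-up layer sizes (assumed numbers, only to show the shape of the argument):

# Hypothetical fully connected layer: 1024 inputs -> 512 outputs.
n_in, n_out = 1024, 512
dense_macs = n_in * n_out                 # a MAC per weight, every inference

# Event-based processing only spends work where an input actually spikes.
activation_sparsity = 0.9                 # assume 90% of inputs are silent
active_inputs = int(n_in * (1 - activation_sparsity))
event_ops = active_inputs * n_out         # one weight accumulate per event per output

print(f"MAC-based ops   : {dense_macs:,}")
print(f"Event-based ops : {event_ops:,}")
print(f"Reduction       : {dense_macs / event_ops:.1f}x")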
 
  • Like
  • Fire
  • Haha
Reactions: 23 users

manny100

Top 20
If their potential customers have done their DD, they will be aware of Akida.

Using MACs makes the Apple system much less efficient and slower than Akida.

That would make it a different equation for the potential customers in not having the sunk costs of developing a second-rate in-house system.

As you point out, using the iPhone as a handwarmer did use up the battery quickly.
Diogenese, just a query concerning the original patent that expires in 2028.
It's been improved over and over, with several more patents added over the years, until we got GEN 2, with GEN 3 next year and possibly another patent application.
How do you see the expiry of the original patent affecting our business?
My view is that it's old tech now, with ongoing further patents improving the original, but it will still enable others to add their own improvements.
I guess it depends mainly on how much market traction we can gain over the next few years with Edge AI starting to take off.
 
  • Like
  • Thinking
Reactions: 7 users
I wonder how long ISL had been playing with Akida before they got the Air Force radar SBIR?

This patent was filed in mid 2021, so 5 months before the announcement.

US11256988B1 Process and method for real-time sensor neuromorphic processing 20210719
...
Maybe they were one of the reasons AKD1000 was delayed because early adopters wanted changes such as CNN2SNN converters. Makes sense to me.

SC
 
  • Like
  • Thinking
Reactions: 5 users

Diogenese

Top 20
Diogenese, just a query concerning the original patent that expires in 2028.
It's been improved over and over, with several more patents added over the years, until we got GEN 2, with GEN 3 next year and possibly another patent application.
How do you see the expiry of the original patent affecting our business?
My view is that it's old tech now, with ongoing further patents improving the original, but it will still enable others to add their own improvements.
I guess it depends mainly on how much market traction we can gain over the next few years with Edge AI starting to take off.
Hi manny,

The 2008 patent is of great value in that the main claim is very broad in scope and is directed to ML:

US2010076916A1 Autonomous Learning Dynamic Artificial Neural Computing Device and Brain Inspired System 20080921

An information processing system intended for use in artificial intelligence, consisting of a plurality of artificial neuron circuits connected in an array, comprising:

a first plurality of dynamic synapse circuits, comprising a means of learning and responding to input signals by producing a compounding strength value simulating a biological Post Synaptic Potential,

a temporal integrator circuit that integrates and combines individually simulated Post Synaptic Potential values over time, and thus constitutes an artificial membrane potential value,

a second plurality of dynamic soma circuits each capable of producing one or more pulses when the integrated membrane potential value has reached or exceeded a stored variable threshold value
,

That said, there are a number of subsequent developments which greatly improve performance; chief of these (until TeNNs) is N-of-M coding, which greatly increased sparsity and reduced power usage and latency.
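
For anyone new to N-of-M coding, a toy NumPy sketch of the gist (my own simplification, not the patented implementation): out of M candidate membrane potentials only the N strongest fire, so most neurons stay silent and cost nothing.

import numpy as np

def n_of_m(potentials, n):
    # Only the n strongest of m membrane potentials become spikes;
    # everyone else stays silent (no computation, no communication).
    spikes = np.zeros_like(potentials)
    winners = np.argsort(potentials)[-n:]    # indices of the n largest values
    spikes[winners] = 1.0
    return spikes

rng = np.random.default_rng(2)
potentials = rng.random(16)                  # M = 16 candidate neurons
print(n_of_m(potentials, n=4))               # only 4 of the 16 fire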

Another landmark patent is ...

US11468299B2 Spiking neural network 20181101

[Patent drawing from US11468299B2]


A neuromorphic integrated circuit, comprising:
a spike converter circuit configured to generate spikes from input data;
a reconfigurable neuron fabric comprising a neural processor comprising a plurality of spiking neuron circuits configured to perform a task based on the spikes and a neural network configuration; and
a memory comprising the neural network configuration, wherein the neural network configuration comprises a potential array and a plurality of synapses, and the neural network configuration defines connections between the plurality of spiking neuron circuits and the plurality of synapses, the potential array comprising membrane potential values for the plurality of spiking neuron circuits, and the plurality of synapses having corresponding synaptic weights,
wherein the neural processor is configured to:
select a spiking neuron circuit in the plurality of spiking neuron circuits based on the selected spiking neuron circuit having a membrane potential value that is a highest value among the membrane potential values for the plurality of spiking neuron circuits;
determine that the membrane potential value of the selected spiking neuron circuit reached a learning threshold value associated with the selected spiking neuron circuit; and
perform a Spike Time Dependent Plasticity (STDP) learning function based on the determination that the membrane potential value of the selected spiking neuron circuit reached the learning threshold value associated with the selected spiking neuron circuit
.


... which showcases my all-time favourite drawing. As you can see, this has a priority of 2018, so it will last well into the 2030s.
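
Reading the claim as pseudocode, the control flow is roughly: pick the neuron with the highest membrane potential, check it against its learning threshold, and only then run the STDP update. A toy NumPy rendering of just that flow (the update rule below is a stand-in of mine, not BrainChip's actual STDP circuit):

import numpy as np

def wta_stdp_step(potentials, weights, input_spikes, learning_threshold=1.0, lr=0.05):
    # 1. Select the spiking neuron circuit with the highest membrane potential.
    winner = int(np.argmax(potentials))
    # 2. Check that it reached its learning threshold.
    if potentials[winner] >= learning_threshold:
        # 3. STDP-style update (toy rule): strengthen synapses that saw an
        #    input spike, weaken the ones that did not.
        weights[winner] += lr * (input_spikes - weights[winner])
    return winner, weights

rng = np.random.default_rng(3)
weights = rng.random((8, 16)) * 0.1          # 8 neurons, 16 synapses each
input_spikes = (rng.random(16) > 0.7).astype(float)
potentials = weights @ input_spikes          # integrate inputs into membrane potentials
winner, weights = wta_stdp_step(potentials, weights, input_spikes, learning_threshold=0.2)
print("winner neuron:", winner)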


CNN2SNN is certainly a vital commercial development, but, for me, not as exciting as N-of-M or TeNNS.

This patent introduced a number of "minor" improvements:

WO2020092691A1 AN IMPROVED SPIKING NEURAL NETWORK 20181101

[0038] But conventional SNNs can suffer from several technological problems. First, conventional SNNs are unable to switch between convolution and fully connected operation. For example, a conventional SNN may be configured at design time to use a fully-connected feedforward architecture to learn features and classify data. Embodiments herein (e.g., the neuromorphic integrated circuit) solve this technological problem by combining the features of a CNN and a SNN into a spiking convolutional neural network (SCNN) that can be configured to switch between a convolution operation or a fully- connected neural network function. The SCNN may also reduce the number of synapse weights for each neuron. This can also allow the SCNN to be deeper (e.g., have more layers) than a conventional SNN with fewer synapse weights for each neuron.
Embodiments herein further improve the convolution operation by using a winner-take-all (WTA) approach for each neuron acting as a filter at particular position of the input space. This can improve the selectivity and invariance of the network. In other words, this can improve the accuracy of an inference operation.

[0039] Second, conventional SNNs are not reconfigurable. Embodiments herein solve this technological problem by allowing the connections between neurons and synapses of a SNN to be reprogrammed based on a user defined configuration. For example, the connections between layers and neural processors can be reprogrammed using a user defined configuration file.

[0040]
Third, conventional SNNs do not provide buffering between different layers of the SNN. But buffering can allow for a time delay for passing output spikes to a next layer. Embodiments herein solve this technological problem by adding input spike buffers and output spike buffers between layers of a SCNN.

[0041] Fourth, conventional SNNs do not support synapse weight sharing. Embodiments herein solve this technological problem by allowing kernels of a SCNN to share synapse weights when performing convolution. This can reduce memory requirements of the SCNN.

[0042] Fifth, conventional SNNs often use l-bit synapse weights. But the use of l-bit synapse weights does not provide a way to inhibit connections. Embodiments herein solve this technological problem by using ternary synapse weights. For example, embodiments herein can use two-bit synapse weights. These ternary synapse weights can have positive, zero, or negative values. The use of negative weights can provide a way to inhibit connections which can improve selectivity. In other words, this can improve the accuracy of an inference operation.

[0043] Sixth, conventional SNNs do not perform pooling. This results in increased memory requirements for conventional SNNs. Embodiments herein solve this technological problem by performing pooling on previous layer outputs. For example, embodiments herein can perform pooling on a potential array outputted by a previous layer. This pooling operation reduces the dimensionality of the potential array while retaining the most important information.

[0044] Seventh, conventional SNN often store spikes in a bit array. Embodiments herein provide an improved way to represent and process spikes. For example, embodiments herein can use a connection list instead of bit array. This connection list is optimized such that each input layer neuron has a set of offset indexes that it must update. This enables embodiments herein to only have to consider a single connection list to update all the membrane potential values of connected neurons in the current layer.

[0045]
Eighth, conventional SNNs often process spike by spike. In contrast, embodiments herein can process packets of spikes. This can cause the potential array to be updated as soon as a spike is processed. This can allow for greater hardware parallelization.

[0046] Finally, conventional SNNs do not provide a way to import learning (e.g., synapse weights) from an external source. For example, SNNs do not provide a way to import learning performed offline using backpropagation. Embodiments herein solve this technological problem by allowing a user to import learning performed offline into the neuromorphic integrated circuit
.
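
Of those points, the ternary-weight one ([0042]) is the easiest to picture in code. A minimal sketch of what two-bit positive/zero/negative weights look like (the threshold value is my own, purely illustrative):

import numpy as np

def ternarise(weights, threshold=0.3):
    # Quantise floating-point weights to {-1, 0, +1}, storable in two bits.
    # Negative values give inhibitory connections; zeros cost nothing.
    t = np.zeros_like(weights)
    t[weights > threshold] = 1.0
    t[weights < -threshold] = -1.0
    return t

rng = np.random.default_rng(4)
w = rng.normal(scale=0.5, size=(4, 8))
tw = ternarise(w)
print(tw)
print("non-zero synapses:", int(np.count_nonzero(tw)), "of", tw.size)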


The TeNNs patents:


WO2023250092A1 METHOD AND SYSTEM FOR PROCESSING EVENT-BASED DATA IN EVENT-BASED SPATIOTEMPORAL NEURAL NETWORKS 20220622



WO2023250093A1 METHOD AND SYSTEM FOR IMPLEMENTING TEMPORAL CONVOLUTION IN SPATIOTEMPORAL NEURAL NETWORKS 20220622




Disclosed is a neural network system generally relates to the field of neural networks (NNs). In particular, the present disclosure relates to event-based convolutional neural networks (NNs) that are trained to process spatial and temporal data using kernels represented by polynomial expansion. The event-based convolutional neural networks (NNs) are spatiotemporal neural networks. According to an embodiment, an explicit temporal convolution capability is added through Temporal Event-based Neural Networks (TENN) models, or TENNs in the spatiotemporal neural networks. The TENNs includes a plurality of temporal and spatial convolution layers that combine spatial and temporal features of data for low-level and high-level features. The TENNs as disclosed herein are configured to perform in a buffer mode and recurrent mode that effectively learns both spatial and temporal correlations from the input data.


The FIFO (first-in - first-out) buffer acts like a data conveyor belt which facilitates comparison of incoming data with previously temporarily stored data.
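
A toy picture of that conveyor belt in buffer mode, with made-up sizes: keep the last T frames in a FIFO and take a temporal inner product against a kernel each time a new frame arrives.

from collections import deque
import numpy as np

T = 4                                        # temporal kernel length (assumed)
kernel = np.array([0.1, 0.2, 0.3, 0.4])      # toy temporal weights, oldest -> newest
fifo = deque(maxlen=T)                       # the "conveyor belt" of recent frames

rng = np.random.default_rng(5)
for step in range(8):
    frame = rng.random(3)                    # e.g. three pixel values per time step
    fifo.append(frame)                       # the oldest frame falls off automatically
    if len(fifo) == T:
        window = np.stack(list(fifo))        # shape (T, 3): buffered history
        output = kernel @ window             # temporal convolution over the window
        print(step, output.round(3))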
 
  • Like
  • Love
  • Fire
Reactions: 45 users

IloveLamp

Top 20
  • Like
  • Fire
Reactions: 8 users

IloveLamp

Top 20
 
  • Like
  • Fire
  • Love
Reactions: 67 users
  • Like
  • Thinking
  • Love
Reactions: 16 users
(LinkedIn screenshot attached)
 
  • Like
  • Love
  • Fire
Reactions: 64 users

Boab

I wish I could paint like Vincent
New on the website
 
  • Like
  • Fire
  • Love
Reactions: 45 users

manny100

Top 20
Hi manny,

The 2008 patent is of great value in that the main claim is very broad in scope and is directed to ML:

US2010076916A1 Autonomous Learning Dynamic Artificial Neural Computing Device and Brain Inspired System 20080921
...
Thanks Diogenese, it's important for holders to understand the playing field will still not be even when our 1st patent expires in 2028.
It's been 'playing' on my mind given 'it's a matter of when not if BRN takes off'. Time is marching on.
Subsequent improvements will maintain our lead.
By 2028 we should be on Gen 4 or 5.
 
  • Like
Reactions: 9 users

BrainShit

Regular
Evening BrainShit ,

1 , if one could find a more salubrious forum name , that would be great.
But yes , truly sense & feel one's torment.

2, Not 100% certain , but pretty sure this is a rehashed announcement from some time ago..... with the present day date attached , purely to confuse.

Regards,
Esq.
Fully agreed (the name as well), but consider... it could be:

1.1.) a shit
1.2) the shit

... but in both ways it's correct, which Wolowitz would confirm.

2.) right, the first announcement was in 2022.... the second: today, but the message is still valid.
If I'm wrong please throw a stone. (In memory of "Life of Brain") .... what a name .... Brain *giggles*



For further introduction into shit:

Pls.:
It's about shit ;-)
 
Last edited:
  • Love
  • Haha
Reactions: 2 users
Not sure if this article has been posted but here is another one about Intel's super neuromorphic computer:


"Powered by 1,152 of Intel's new Loihi 2 processors — a neuromorphic research chip — this large-scale system comprises 1.15 billion artificial neurons and 128 billion artificial synapses distributed over 140,544 processing cores."

"Neuromorphic computing is still a developing field, with few other machines like Hala Point in deployment, if any. Researchers with the International Centre for Neuromorphic Systems (ICNS) at Western Sydney University in Australia, however, announced plans to deploy a similar machine in December 2023.

Their computer, called "DeepSouth," emulates large networks of spiking neurons at 228 trillion synaptic operations per second, the ICNS researchers said in the statement, which they said was equivalent to the rate of operations of the human brain.

Hala Point meanwhile is a "starting point," a research prototype that will eventually feed into future systems that could be deployed commercially, according to Intel representatives.

These future neuromorphic computers might even lead to large language models (LLMs) like ChatGPT learning continuously from new data, which would reduce the massive training burden inherent in current AI deployments."
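
The per-chip arithmetic behind those headline numbers is easy to check (figures from the quote above; the divisions are mine):

chips = 1_152
neurons = 1_150_000_000
synapses = 128_000_000_000
cores = 140_544

print("cores per Loihi 2 chip    :", cores // chips)             # 122
print(f"neurons per chip (approx) : {neurons / chips:,.0f}")      # ~1.0 million
print(f"synapses per chip (approx): {synapses / chips:,.0f}")     # ~111 million
print(f"synapses per neuron       : {synapses / neurons:.0f}")    # ~111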
 
  • Like
  • Thinking
Reactions: 5 users

7für7

Top 20
BrainChip selected by U.S. Air Force Research Laboratory to develop AI-based radar

2024-04-19 17:52

BrainChip, the world's first commercial producer of neuromorphic artificial intelligence chips and IP, today announced that Information Systems Laboratories (ISL) is developing an AI-based radar for the U.S. Air Force Research Laboratory (AFRL) based on its Akida™ Neural Network Processor Research solutions.

ISL is a specialist in expert research and complex analysis, software and systems engineering, advanced hardware design and development, and high-quality manufacturing for a variety of clients worldwide.

Military and Space are always leaders of new technologies 😉


I posted the same thing here from the German forum and deleted it because it's old news with a new date 🤷🏻‍♂️
 
  • Like
Reactions: 2 users

TECH

Regular

Excellent post...thank you.

Many would have noticed how our share price has started its usual, slow, possibly deliberate slide south again.
A never-ending cycle; why, many would be asking?

  • Q1 4C results due within the next 6 business days.
  • LDA Capital off-loading their recent allocated shares.
  • Geo-Political mounting issues.
  • Worldwide Economic headwinds.
  • Robotic manipulation attempting to trigger "stop-loss" positions...i.e. 0.35....0.30....0.25
I personally feel that all within the company would agree that the journey to revenue has been a slower process than hoped. Some could go as far as saying our 'real' business journey didn't really commence until Sean joined the company, meaning that, considering everything that has been put in place over the last 2.5 years, it's taken 5.

Remember, I have been banging on about Digimarc Corporation for at least 4 years now; just check out the latest X feed posted on the company website. It possibly sends a subtle little message, in my opinion.

Cheers....Tech ;)


https://twitter.com/BrainChip_inc
 
  • Like
  • Thinking
  • Fire
Reactions: 22 users
Top Bottom