BRN Discussion Ongoing

Tothemoon24

Top 20

Ultra-Low Power AI​

This concept addresses two major areas of interest - AI and Energy Efficiency - topics that might seem mutually exclusive! There is rapid growth in both interest in and usage of AI in mobile network operations, solutions and applications. There is a risk that the massive amounts of data processing demanded in many AI scenarios will lead to correspondingly high power consumption. With increased usage of AI expected in the networks, it is important to have solutions to address this.

This live prototype for ultra-low power AI used a novel neuromorphic-AI-based approach for radio channel estimation and showcased the feasibility of low-compute, low-energy AI using AI-based radio receiver use cases. The trick here is that in a neuromorphic neural network (like our human brain) only the neurons detecting a change are active, whereas no computations are needed for neurons in a "remember" state. The fraction of inactive neurons translates directly into an energy efficiency gain compared to a traditional deep neural network, where computations are always needed for all neurons. The live demo showed how neural activity in the channel estimation computations varied with changes in the radio channel, and how energy consumption could be reduced when fewer or no computations were ongoing. Radio channel estimation is only one of many areas where this exciting AI technology can be used. Keep an eye out for upcoming blogs on this topic from Ericsson soon!
 
  • Like
  • Love
  • Fire
Reactions: 19 users
4:50 Mark: "You've got foundries. You've got IBM (?) ... "

Interesting that Sean referred to TeNNs as an algorithm.

Algorithms can be implemented in software at some cost to speed and energy consumption.

We know that Akida 2 was taped out but did not proceed to silicon.

The TeNNs patents were filed mid-2022, so we could have been discussing TeNNs with EAPs from then on, ~ 2 years.

Because TeNNs is such a great improvement on transformers, it could be implemented in software and still outperform transformer NNs by a mile. In fact, Akida NNs can be implemented in software. As I may have said before, there is a significant advantage in implementing new tech which is in a state of flux in software, and waiting until the tech is more settled before committing to silicon.

We know Valeo uses software image interpretation in SCALA 3, and we have been working with them in a Joint Development partnership for a few years. Clearly they would have been among the first to hear about TeNNs.

Although they have switched to radio silence on Akida, Mercedes makes a great deal of noise about its software-defined vehicle. Mercedes would also have been among the first to hear about TeNNs, but they have swapped to Luminar for most of their lidar, although I forget when that takes effect. However, Luminar also uses software image interpretation. There is thus a possibility that Mercedes could use TeNNs, with or without Akida 2, in conjunction with any "legacy" SCALA 3 or the new Luminar lidar.

In any event, we know that Valeo has $1B+ forward orders with Stellantis and Toyota, so it is possible that TeNNs/Akida 2 software will be used in these applications.

The best image temporal classification technology, TeNNs, has not yet been reduced to silicon, but it is available in software form*. BRN are busy building the NN models for TeNNs for various applications. The beauty of a software implementation during this development phase is that it can be readily updated.

Similar considerations also apply to Prophesee. In fact, all EAPs would have been consulted about TeNNs - is TeNNs the reason that Akida 1 was labeled too narrow?

*I wonder if Tony's AGM demo of TeNNs sentence building v GPT2 was software or FPGA?

4:50 Mark: "You've got foundries. You've got IBM (?) ... "

You’ve got IDMs (integrated device manufacturers).
 
  • Like
Reactions: 8 users

Tothemoon24

Top 20

🚀 Having had discussions on 6G at #ConnectX last week and seeing presentation summaries of 6G at #NetworkX this week, it felt like a good time to think about the question: Is it too early to talk 6G?

As 5G adoption and monetization discussions are still ongoing, it might seem premature to discuss 6G. However, the groundwork for 6G must start now to ensure readiness for the first deployments expected in 2030. Early exploration and preparation are crucial for defining 6G standards, building a robust ecosystem, and ensuring a smooth transition to 6G.

Current Status of 5G Growth:
📈 With 302 live 5G networks globally, 5G still has years of evolution ahead.

Need for Preparation:
🗓️ Start defining 6G standards in mid-2024 and prepare the ecosystem for 2030 deployments.

Most common questions:
🤔 What new concepts should 6G include?
🔄 What can we reuse from 5G?
🌐 How will networks evolve to support future needs?

What new advancements do we expect in 6G?
Ultra-Low Power AI:
🤖 High demand for data processing in AI scenarios can lead to increased power consumption, especially with growing AI usage in networks. However, AI and energy efficiency improvements can be done in parallel.
🧠 A neuromorphic AI approach to radio channel estimation can deliver significant energy-efficiency gains.

Ultra-lean design for enhanced energy efficiency:
🍃 In 5G, the concept of lean design, which removes unnecessary signals for data transmission, significantly improved energy efficiency. For 6G, an enhanced version of this concept aims to further decrease mandatory transmissions, potentially improving energy efficiency up to a factor of four.

Dynamic Compute Offload:
💻 Offloading computational tasks to the network for enhanced device performance.

What 6G use Cases did Ericsson showcase at MWC?
Precise Indoor Positioning:
📍 A network-centric solution using the Ericsson Radio Dot System and Network Location platform to achieve sub-meter accuracy in real-time.

Spectrum Utilization:
📡 Showcase of new centimeter wave (cmWave) spectrum to improve coverage and performance.

Dynamic Compute Offload for MR (Mixed Reality):
🕶️ Demonstrated the offloading of computationally heavy tasks from lightweight headsets to network compute sites, enhancing mixed reality experiences.

6G will build on 5G Standalone and 5G-Advanced, incorporating a mix of new and evolved concepts and use cases to meet the needs of 2030 and beyond.
 
  • Like
  • Fire
  • Love
Reactions: 19 users
Finished with my favourite colour green
 
  • Like
  • Haha
  • Fire
Reactions: 6 users

7für7

Top 20
Whoever made this video could at least have advised Sean which camera to look at; unprofessional to say the least.
Actually, it's common nowadays to shoot interviews this way… no one looks directly into the cam… looking straight at it would look stupid, to be honest… from my point of view it's fresh and up to date!

Edit: it should give you the impression that he's talking to someone
I disagree mate, I've been in TV my whole life; that's not how you do interviews
As you can see, it was a perfectly normal response at the beginning… I had no problem with you at all… in a forum everyone can respond to a topic. Btw, in most of the interviews I've seen and made, no one looks directly into the cam. Again… I have no problem… it's just my point of view
 
  • Like
Reactions: 3 users
As you can see, it was a perfectly normal response at the beginning… I had no problem with you at all… in a forum everyone can respond to a topic. Btw, in most of the interviews I've seen and made, no one looks directly into the cam. Again… I have no problem… it's just my point of view
I simply made a comment on the camera and you feel the need to start this. I mean, listen to your carry-on below; just an attack or a rant, whatever you want to call it. Moving on; that's my opinion.


7für7 said:
Mate… I know people whose grandparents also ran a business in which they specialized. That doesn't mean they (the grandchildren) automatically know everything. The media landscape is constantly changing. Camera work, which is meant to evoke a certain drama, is also evolving. Imagine if films were still shot the same way they were in your grandparents' time… I come from this field as well, so relax… my great-great-great-grandparents were already doing theater, and my ancestors invented drama and comedy… so!? 🤷🏻‍♂️🤦🏻‍♂️
 
  • Like
Reactions: 4 users

stockduck

Regular
Tony Dawe just advised that the 2024 AGM webcast is up on the site.
Hit RESOURCES tab.
Then EVENTS.
past events....BRAINCHIP AGM.
Hit AGM WEBCAST.

It's all there. 🤣


AGM Webcast
Wow, that is wonderful, thank you for sharing and thanks for putting it up on the website.

Good job from management at the 2024 AGM. I believe in what was said; our CEO has the right mental spirit.
 
  • Like
Reactions: 10 users

Diogenese

Top 20
I simply made a comment on the camera and you feel the need to start this. I mean, listen to your carry-on below; just an attack or a rant, whatever you want to call it. Moving on; that's my opinion.


7für7 said:
Mate… I know people whose grandparents also ran a business in which they specialized. That doesn't mean they (the grandchildren) automatically know everything. The media landscape is constantly changing. Camera work, which is meant to evoke a certain drama, is also evolving. Imagine if films were still shot the same way they were in your grandparents' time… I come from this field as well, so relax… my great-great-great-grandparents were already doing theater, and my ancestors invented drama and comedy… so!? 🤷🏻‍♂️🤦🏻‍♂️
Is that you Janus?
 
  • Haha
  • Like
Reactions: 9 users

HopalongPetrovski

I'm Spartacus!
  • Haha
Reactions: 4 users

cosors

👀
The principles again:

"New Approach: How Spiking Neural Networks Function and Their Emerging Applications​

Ali Oraji
17 min read
6 days ago

Spiking Neural Networks (SNNs) offer a compelling approach to neural computation, mimicking the biological processes of neurons more closely than traditional Artificial Neural Networks (ANNs). This report delves into the architecture, mechanisms, and applications of SNNs, highlighting their potential for energy efficiency and real-time processing. We explore their operational principles, compare them with ANNs, and discuss current challenges and future research directions in the field.

First things first, you can find the link to the notebooks here. I believe these will be useful for you to see the outputs and better understand the spikes themselves.

SNN Notebook1 & SNN Notebook2

Introduction​

Spiking Neural Networks (SNNs) are a type of artificial neural network that simulates the way biological neurons process information through discrete events, called 'spikes'. Unlike ANNs, which use continuous values for activations and computations, SNNs operate using binary spikes, where the presence of a spike can be thought of as a neuron firing and the absence as it resting. This architecture makes SNNs potentially more similar to biological neural processing and possibly more efficient in terms of power consumption for certain tasks.

Understanding Spike Neural Networks​

Fundamental Operations of Spiking Neural Networks:

Spiking Neural Networks (SNNs) represent an advanced approach in neural network technology, aiming to more closely emulate the operational dynamics of biological neural networks than traditional Artificial Neural Networks (ANNs). Unlike ANNs, which manipulate continuously variable signals, SNNs operate using discrete events known as spikes that occur at specific moments. These spikes, often organized into spike trains, serve as both the input and output of the system. The following points outline the operational mechanisms of SNNs:

1. Neuronal Potential: At any given time, each neuron in an SNN holds a potential analogous to the electrical potential observed in biological neurons.

2. Potential Modulation: The potential within a neuron can increase or decrease in response to spikes received from upstream neurons, based on a predefined mathematical model.

3. Threshold and Impulse Generation: If a neuron’s potential surpasses a certain threshold, it generates an impulse transmitted to each connected downstream neuron. Following impulse transmission, the neuron’s potential rapidly decreases, simulating the refractory period seen in biological neurons. Over time, the potential gradually returns to its baseline level.
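In code, these three mechanisms reduce to a short loop. Here is a minimal Python sketch (the weights, threshold and decay constant are invented for illustration, not taken from the article):

```python
import numpy as np

def simulate_neuron(input_spikes, weights, threshold=1.0, decay=0.9, v_rest=0.0):
    """Event-driven neuron: integrate weighted input spikes,
    fire when the potential crosses the threshold, then reset."""
    v = v_rest                                 # 1. neuronal potential
    output_spikes = []
    for spikes_t in input_spikes:              # one row per time step
        v = v_rest + decay * (v - v_rest)      # potential leaks toward rest
        v += np.dot(weights, spikes_t)         # 2. potential modulation
        if v >= threshold:                     # 3. threshold crossed:
            output_spikes.append(1)            #    emit an impulse and
            v = v_rest                         #    reset (refractory period)
        else:
            output_spikes.append(0)
    return output_spikes

# three upstream neurons, five time steps of binary spikes
train = np.array([[1, 0, 1], [0, 0, 0], [1, 1, 0], [0, 1, 1], [0, 0, 0]])
print(simulate_neuron(train, weights=np.array([0.5, 0.4, 0.6])))
```

Note that on the silent time steps nothing meaningful happens; this sparsity is exactly where the energy savings of neuromorphic hardware come from.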

[Animated figure: the basic concept of an action potential in a neuron, critical to understanding SNN functioning.]

1. Resting Potential: Illustrated as a horizontal line, this parameter represents the voltage across the neuronal membrane when the neuron is not actively transmitting signals. The typical value is around -70 millivolts (mV). This state of polarization, where the inside of the neuron is negatively charged relative to the outside, sets the stage for the activation of the neuron.

2. Threshold Level: Shown as a dashed line, this marks the critical level of membrane depolarization necessary to trigger an action potential. The threshold is generally set between -55 mV and -50 mV. When the neuron’s membrane potential reaches this level, it is sufficiently depolarized to activate voltage-gated ion channels, leading to the rapid onset of the action potential.

3. Action Potential (Spike): This is represented on the graph by a dramatic, swift ascent and subsequent descent in voltage. The action potential begins when the depolarization of the neuron reaches the threshold level, prompting sodium channels to open. This opening allows positively charged sodium ions to flood into the neuron, rapidly increasing the membrane potential, sometimes surging to +30 mV or higher. This phase is known as depolarization.

4. Repolarization: Following the peak of the action potential, the graph illustrates a decline in the membrane potential. This phase is characterized by the closing of sodium channels and the opening of potassium channels. As potassium ions flow out of the neuron, the internal charge becomes more negative, leading to a decrease in the membrane potential back towards the resting potential.

5. Hyperpolarization: After the action potential, the potential often overshoots the resting level, temporarily making the neuron even more negative than its usual resting state. This is depicted as a downward dip below the resting potential line on the graph. This hyperpolarization helps to reset the neuronal environment, ensuring that subsequent action potentials are not triggered too easily and that the signal transmission remains unidirectional.

6. Time Scale: The entire action potential process is remarkably rapid, typically occurring within just a few milliseconds. This time scale, often shown at the bottom of the graph, highlights the efficiency and speed at which neurons can react to stimuli and communicate with each other through electrical impulses.



In an SNN, each neuron calculates its action potential based on the inputs received, and the firing of these potentials facilitates information propagation through the network. The timing and pattern of spikes are vital for encoding and transmitting information, thereby mirroring the functional attributes of biological systems in artificial environments.

Structure and Working Mechanism of ANN and SNN​

To fully understand the structure and working mechanisms of both Artificial Neural Networks (ANNs) and Spiking Neural Networks (SNNs), as well as their differences, it’s essential to delve into the fundamental aspects of each network type.

Structure

Artificial Neural Networks (ANNs):

  • Layers: ANNs consist of multiple layers including input, hidden, and output layers. Each layer contains a number of neurons, and each neuron in one layer is typically connected to all neurons in the next layer (in fully connected networks).
  • Neurons: The basic computation units in ANNs are modeled simplistically compared to biological neurons. Each neuron receives input from its previous layer, processes it, and passes its output to the next layer.
  • Weights and Biases: Connections between neurons are defined by weights, which determine the strength and sign of the input, and biases, which are added to the input to affect the neuron’s output threshold.
Spiking Neural Networks (SNNs):

  • Layer Structure: Like ANNs, SNNs can have multiple layers, including input, hidden, and output layers. However, the key difference lies in the nature of neuron communication and processing.
  • Spiking Neurons: Neurons in SNNs communicate via discrete events known as spikes. This mimics the actual firing of biological neurons more closely than the continuous-valued outputs in traditional ANNs.
  • Dynamics and State: Each neuron in an SNN has an associated membrane potential that changes over time based on the input spikes it receives and its own leaky integrative properties.

Working Mechanism

ANNs:

  • Input Signal Processing: Inputs (e.g., pixel values from an image) are fed directly into the network.
  • Weighted Sum and Activation: Each neuron computes a weighted sum of its inputs, which is then passed through a nonlinear activation function like Sigmoid or ReLU to introduce non-linearity.
  • Backpropagation: Learning involves adjusting the weights and biases based on the error between the actual output and the desired output. This is typically done using gradient descent and backpropagation, where gradients are calculated for each weight in the network.
SNNs:

  • Input Conversion to Spikes: Inputs are first converted into spikes using different encoding methods such as rate or temporal encoding. This step is crucial as it transforms analog signals into a format suitable for spiking neurons.
  • Spiking Neurons: Neurons accumulate input spikes, which affect their membrane potential. If the membrane potential reaches a certain threshold, the neuron fires (emits a spike), which travels to other neurons and affects their potentials.
  • Temporal Dynamics: The timing and pattern of spikes carry information. SNNs can process this information directly, leveraging the precise timing of spikes to encode and decode data efficiently (a toy comparison follows this list).
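To make the contrast concrete, here is a toy side-by-side sketch in Python (the weights, inputs and time window are invented for illustration): an ANN neuron maps real-valued inputs to a real-valued output in one shot, while an SNN neuron integrates rate-encoded binary spikes over time and only occasionally fires.

```python
import numpy as np

x = np.array([0.2, 0.9, 0.4])            # analog inputs in [0, 1]
w = np.array([0.7, 0.3, 0.5])            # shared synaptic weights

# ANN neuron: weighted sum + ReLU, recomputed on every forward pass
ann_out = max(0.0, float(np.dot(w, x)))

# SNN neuron: rate-encode the inputs, then leaky integrate-and-fire
rng = np.random.default_rng(0)
T, v, theta, n_spikes = 20, 0.0, 1.0, 0
for _ in range(T):
    in_spikes = (rng.random(3) < x).astype(float)  # P(spike) grows with input
    v = 0.9 * v + np.dot(w, in_spikes)             # leaky integration
    if v >= theta:                                 # threshold reached
        n_spikes += 1
        v = 0.0                                    # reset after firing
print(ann_out, n_spikes / T)    # continuous activation vs. output firing rate
```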


Differences Between ANNs and SNNs

  • Nature of Processing: Artificial neural networks (ANNs) process information continuously, while spiking neural networks (SNNs) do so discretely through spikes.
  • Computation Efficiency: SNNs are potentially more energy-efficient because they process information only when neurons fire, unlike ANNs, which continuously process through matrix multiplications.
  • Temporal Data Handling: SNNs inherently handle temporal dynamics better due to the timing of spikes. ANNs require special architectures, like recurrent neural networks, to manage temporal data.
  • Biological Plausibility: SNNs are considered more biologically plausible because they closely mimic actual neuronal firing and dynamics.
  • Development and Usage: ANNs are more mature in terms of development tools, frameworks, and applications. SNNs, being relatively newer, often require more specialized knowledge and tools.
This comparative analysis highlights that while both network types excel in their respective domains, they are suited for different kinds of problems based on their inherent characteristics and computational models. Understanding these differences is crucial for deploying the right type of neural network for specific tasks, especially those involving complex temporal patterns or where power efficiency is critical.

Pros of ANNs:

  • Flexibility and Versatility: ANNs can model complex nonlinear relationships and have been successfully applied to a wide range of tasks from image and speech recognition to playing complex games like Go and Chess.
  • Scalability: With the support of modern computing resources, ANNs can be scaled to handle large datasets and complex architectures, making them effective for big data applications.
  • Well-established Frameworks and Community: There is a vast array of tools, libraries (like TensorFlow and PyTorch), and community support available that make designing, training, and deploying ANNs relatively straightforward.
  • Continuous Output: ANNs are well-suited for applications requiring continuous output values and not just classifications, making them versatile in different types of prediction tasks.
Cons of ANNs:

  • Computational Intensity: Training ANNs, especially deep networks, requires significant computational resources, often necessitating powerful GPUs and large datasets for optimal performance.
  • Lack of Temporal Dynamics: Standard ANNs lack the ability to efficiently process time-based data, requiring additional modifications like recurrent layers, which can complicate the architecture and increase computational burden.
  • Opaque Decision-Making: ANNs are often criticized as being “black boxes,” as it can be challenging to interpret how decisions are being made within the network, complicating debugging and trust in sensitive applications.
  • Overfitting Risk: Without careful tuning and regularization, ANNs can easily overfit to training data, leading to poor generalization to new, unseen data.
Pros of SNNs:

  • Biologically Plausible: SNNs closely mimic the way real neurons operate in the brain, potentially leading to more robust and fundamentally different computing paradigms than traditional ANNs.
  • Energy Efficiency: Due to their event-driven nature, where neurons only process information when necessary (i.e., upon receiving a spike), SNNs can be significantly more energy-efficient, particularly on neuromorphic hardware.
  • Handling Temporal Information: SNNs naturally process temporal information through the dynamics of spikes, making them inherently suitable for time-series data and dynamic environments without needing specialized structures.
  • Potential for Real-Time Processing: The structure and operation of SNNs lend themselves well to real-time applications, as processing can be done in an online, incremental manner.
Cons of SNNs:

  • Less Mature Technology: SNNs are not as developed as ANNs in terms of available frameworks, tools, and established best practices. This can make them more challenging to implement and optimize.
  • Complex Implementation: Designing and training SNNs, particularly with respect to their learning mechanisms and the translation of inputs into spike forms, can be complex and computationally intensive.
  • Limited Research and Applications: There are fewer examples of successful applications of SNNs compared to ANNs, partly due to their relative novelty and the challenges associated with their unique dynamics.
  • Dependency on Hardware: To fully realize their energy efficiency benefits, SNNs often rely on specialized hardware like neuromorphic chips, which are not as widely available or developed as traditional CPUs and GPUs.

Key Characteristics of Spiking Neural Networks:

SNNs exhibit distinctive characteristics that differentiate them from conventional ANNs. These traits primarily stem from their design, which aims to emulate the behavior of biological neural networks more closely, resulting in unique computational capabilities and applications. The following outlines the principal characteristics of SNNs:

  1. Spiking Neurons
  • Binary Communication: Neurons in SNNs communicate using spikes — discrete events that occur at precise moments. This binary communication method is similar to the all-or-nothing action potentials seen in biological neurons.
  • Energy Efficiency: Spikes are sparse and event-driven, generally leading to reduced energy consumption compared to the continuous operations of ANNs. This attribute makes SNNs particularly advantageous for energy-sensitive environments like mobile devices or embedded systems.
2. Temporal Dynamics

  • Time-Dependent Processing: In contrast to ANNs, where outputs at any layer are determined by the current input and learned weights, the output of SNN neurons also depends on the timing of input spikes. This temporal element enables SNNs to more naturally process time-series data.
  • Memory and Adaptation: SNNs possess inherent memory and adaptability through their dynamic state, influenced by the timing and interaction of spikes. This capability allows them to efficiently handle tasks involving sequences and timing without needing additional mechanisms such as recurrent connections.
3. Biologically Plausible Learning

  • Spike-Timing-Dependent Plasticity (STDP): SNNs often employ learning rules inspired by biological processes, particularly STDP, where synaptic strength is adjusted based on the relative timing of spikes between pre- and post-synaptic neurons. This mechanism enables the network to learn directly from the temporal structure of spike data (a minimal sketch follows this list).
  • Local Learning Rules: Unlike traditional ANNs that use backpropagation, SNNs can implement local learning rules that rely solely on the activity of adjacent neurons. This approach is more aligned with biological processes and can potentially foster more robust and scalable learning architectures.
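A minimal sketch of the pair-based STDP rule described above, assuming the commonly used exponential window (the amplitudes and time constants are illustrative, not prescribed by the article):

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012    # potentiation / depression amplitudes
TAU_PLUS = TAU_MINUS = 20.0      # widths of the exponential windows (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair:
    pre before post -> potentiation; post before pre -> depression."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)   # causal pair: weight grows
w += stdp_dw(t_pre=30.0, t_post=22.0)   # anti-causal pair: weight shrinks
print(round(w, 4))
```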
4. Neuromorphic Engineering

  • Hardware Compatibility: SNNs are ideally suited for implementation on neuromorphic hardware designed to mimic the neural structure and dynamics of the brain. This type of hardware often uses physical analogs to neurons and synapses to achieve high-speed and low-power computation.
  • Real-Time Interaction: The design and operational mode of SNNs make them excellent for real-time applications, such as robotic control systems or interactive simulation environments.
5. Efficiency in Sparse Data Environments

  • Sparse Data Handling: The event-driven nature of SNNs allows them to operate efficiently in environments where data are inherently sparse and only significant at specific time intervals. This characteristic makes them ideal for processing sensory data, such as touch or auditory signals.
6. Scalability and Flexibility

  • Scalable Network Architecture: The local and temporal nature of processing in SNNs enables them to effectively scale to large networks. Biological inspiration also promotes modular and flexible network designs that can be adapted for various tasks.
  • Fault Tolerance: The distributed and redundant nature of computation in SNNs can result in higher fault tolerance, akin to that of biological neural networks.
Encoding Models in SNNs

In SNNs, the conversion of analog or continuous input signals into spikes (discrete events) is a crucial step for processing information. This process is known as encoding. There are several methods for encoding data into spikes, each with its own mathematical representation and application domain. Below are some of the most commonly used encoding methods:
 
  • Like
  • Fire
  • Love
Reactions: 25 users

cosors

👀

1. Rate Coding​

Description: Rate coding is one of the simplest and most widely used encoding methods in SNNs. In this method, the frequency of the spikes represents the magnitude of the input signal. Higher rates (more spikes per unit time) correspond to higher values of the input signal, and lower rates correspond to lower values.

Mathematical Representation: If x is the input signal magnitude, the firing rate r (number of spikes per second) could be represented as: r = k ⋅ x where k is a constant that scales the input signal to an appropriate firing rate.
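A quick sketch of rate coding as Bernoulli (Poisson-style) sampling, with k and the time window invented for illustration:

```python
import numpy as np

def rate_encode(x, k=100.0, duration=1.0, dt=0.001, seed=0):
    """Encode magnitude x as a binary spike train with mean rate r = k * x."""
    rng = np.random.default_rng(seed)
    rate = k * x                                   # spikes per second
    n_steps = int(duration / dt)
    return (rng.random(n_steps) < rate * dt).astype(int)

weak, strong = rate_encode(0.1), rate_encode(0.9)
print(weak.sum(), strong.sum())    # roughly 10 vs. 90 spikes in one second
```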

2. Temporal Coding​

Description: Temporal coding uses the timing of spikes to convey information, rather than the rate of firing. It is believed to be a more efficient and precise encoding method than rate coding, as it can represent information in the precise timing differences between spikes.

Mathematical Representation: If ti represents the timing of the i-th spike and x is the input signal magnitude, the relationship might be modeled as: ti=f(x) where f is a function that converts the magnitude of the input signal into a spike time. The specific form of f depends on the implementation and the nature of the input signal.
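One common choice for f is time-to-first-spike (latency) coding, where stronger inputs fire earlier. A hypothetical sketch:

```python
import numpy as np

def latency_encode(x, t_max=100.0, eps=1e-6):
    """t_i = f(x): map each magnitude to a spike time; larger x fires earlier."""
    x = np.clip(np.asarray(x, dtype=float), eps, 1.0)
    return t_max * (1.0 - x)       # x = 1 fires at t = 0; x near 0 fires late

print(latency_encode([0.95, 0.5, 0.1]))   # [ 5. 50. 90.] (ms)
```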

3. Population Coding​

Description: Population coding involves using a group of neurons to represent a single input signal. Each neuron in the population responds to a different range of the input signal, allowing for a more detailed and robust representation. This method can capture more complex patterns and relationships in the input data.

Mathematical Representation: For a population of N neurons, each neuron ni fires at a rate ri based on its tuning curve fi (x) relative to the input signal x:

ri = fi (x)

where fi is the tuning curve of the i-th neuron, designed to respond maximally to a specific range of input values and less so to others.
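A sketch of population coding with Gaussian tuning curves fi(x), each neuron preferring a different part of the input range (the neuron count, peak rate and width are invented for illustration):

```python
import numpy as np

def population_rates(x, n_neurons=8, r_max=100.0, sigma=0.1):
    """r_i = f_i(x): Gaussian tuning curves tiled evenly over [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_neurons)   # each neuron's preferred input
    return r_max * np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))

print(population_rates(0.3).round(1))   # neurons tuned near x = 0.3 fire fastest
```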

4. Phase Coding​

Description: Phase coding encodes information in the phase of spikes relative to an underlying oscillatory signal. This method can encode information in both the frequency of spikes and their phase, potentially allowing for the representation of more complex information.

Mathematical Representation: Assuming an oscillatory baseline with frequency ω, the phase ϕ of a spike emitted in response to an input signal x might be represented as: ϕ = g(x) where g is a function mapping the input signal to a phase angle in the oscillatory cycle.

5. Burst Coding​

Description: Burst coding involves encoding information in the patterns of bursts of spikes, where a burst is a rapid series of spikes followed by silence. The number of spikes in a burst, the duration of the burst, and the interval between bursts can all convey information.

Mathematical Representation: If b represents the number of spikes in a burst and x is the input signal magnitude, one might define: b = h(x) where h is a function that determines the burst pattern based on the input signal. The exact formulation of h would depend on the specific characteristics of the input signal and the desired resolution of the encoding.

Each of these encoding methods has its advantages and applications, depending on the type of data being processed and the requirements of the task. Rate coding is straightforward and widely applicable, while temporal and phase coding can provide higher precision and efficiency for certain types of temporal or spatial information. Population coding enhances the robustness and dimensionality of the representation, and burst coding offers a unique way to encode multiple signal features within complex firing patterns.

Temporal Dynamics and Computation

Temporal dynamics in SNNs are crucial as they deal with how information is processed over time, which is fundamentally different from traditional ANNs.

  • Dynamic State: Neurons in an SNN have a dynamic state influenced by incoming spikes, which affects their potential and firing patterns over time.
  • Memory and Decay: The state of each neuron includes a memory effect, where past inputs influence future activity. This is often implemented via mechanisms like the leaky integrate-and-fire model.
  • Spike Timing-Dependent Plasticity (STDP): A significant aspect of learning in SNNs, where the synaptic weights are adjusted based on the timing of spikes between pre- and post-synaptic neurons. This allows the network to learn temporal patterns in a biologically plausible way.
Neuron Models in SNNs

Neuron models in SNNs are designed to capture the complex dynamics of biological neurons. Here are detailed descriptions of the major models:

  1. Leaky Integrate-and-Fire (LIF) Model
Description: In the LIF model, the neuron’s membrane potential increases with incoming spikes and naturally decays over time. If the potential reaches a certain threshold, the neuron fires (emits a spike), and then the potential is reset.

Formula: The membrane potential V(t) is governed by:

τ · dV(t)/dt = −(V(t) − Vrest) + R · I(t)

where τ is the membrane time constant, Vrest is the resting potential, R is the membrane resistance, and I is the input current.
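Discretizing this equation with Euler's method gives a compact simulation (the constants below are typical textbook values, chosen for illustration):

```python
def lif_step(v, i_in, tau=20.0, v_rest=-70.0, r_m=10.0,
             v_th=-55.0, v_reset=-70.0, dt=1.0):
    """One Euler step of tau * dV/dt = -(V - V_rest) + R * I."""
    v = v + (dt / tau) * (-(v - v_rest) + r_m * i_in)
    if v >= v_th:              # threshold reached: emit a spike
        return v_reset, True   # and reset the membrane potential
    return v, False

v, n_spikes = -70.0, 0
for _ in range(100):           # 100 ms of constant 2.0 input current
    v, fired = lif_step(v, i_in=2.0)
    n_spikes += fired
print(n_spikes, "spikes in 100 ms")
```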

2. Izhikevich Model

Description: This model is computationally efficient and biologically plausible, capable of reproducing the firing patterns of different types of neurons.

Formula: Governed by:

dv/dt = 0.04v² + 5v + 140 − u + I
du/dt = a(bv − u)

After a spike, if v ≥ 30 mV:
v ← c
u ← u + d
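In code, with Euler integration and the classic regular-spiking parameter set (a = 0.02, b = 0.2, c = -65, d = 8; the input current is illustrative):

```python
def izhikevich(i_in, a=0.02, b=0.2, c=-65.0, d=8.0, steps=1000, dt=0.5):
    """Euler integration of the Izhikevich model (regular-spiking neuron)."""
    v, u, spike_times = c, b * c, []
    for step in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_in)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # spike: apply the reset rules above
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times

print(izhikevich(i_in=10.0)[:5])         # first few spike times (ms)
```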

3. Hodgkin-Huxley Model

Description: A detailed model that describes how action potentials in neurons are initiated and propagated. It accounts for the ionic currents through the membrane, providing a highly detailed understanding of neuronal dynamics.

Formula: The Hodgkin-Huxley model involves multiple differential equations that govern the dynamics of ion channels and potentials. Here are the key equations:

C · dV/dt = I_ext − g̅K · n⁴ · (V − EK) − g̅Na · m³h · (V − ENa) − g̅L · (V − EL)
dn/dt = αn(V) · (1 − n) − βn(V) · n
dm/dt = αm(V) · (1 − m) − βm(V) · m
dh/dt = αh(V) · (1 − h) − βh(V) · h

where C is the membrane capacitance, V is the membrane potential, I_ext is the external current, n, m, h are gating variables for the potassium and sodium channels, g̅K, g̅Na, g̅L are the maximum conductances for the potassium, sodium, and leak channels, and EK, ENa, EL are the equilibrium potentials for these ions. The α and β functions are voltage-dependent rates that govern the opening and closing of the ion channels.

Applications of SNNs

  • Robotics: SNNs are utilized in robotics for real-time sensory processing and motor control, enabling robots to interact dynamically with their environments.
  • Edge Computing: Ideal for deployment in edge devices due to their low power consumption, SNNs help in processing data locally, reducing the need for constant communication with central servers.
  • Pattern Recognition: Useful in dynamic pattern recognition tasks, such as speech and gesture recognition, where temporal patterns are crucial. SNNs excel in processing the timing-related aspects of these patterns.
  • Medical Diagnostics: In the medical field, SNNs can be employed to analyze complex physiological data. Their ability to handle temporal sequences makes them particularly effective for monitoring and predicting cardiac and neurological events based on real-time patient data.
  • Neurological Prosthetics: SNNs contribute significantly to the development of neurological prosthetics, such as cochlear implants and artificial limbs. By mimicking the timing of biological neuron activity, they can provide more natural responses and interactions for the users.
  • Drug Discovery: In the scientific realm, SNNs are valuable for simulating biological processes at the cellular level, which can be crucial in understanding disease mechanisms and testing potential drug effects. Their biologically inspired processing capabilities allow for more accurate modeling of neuronal behavior, which is essential in neuropharmacology.
  • Environmental Monitoring: SNNs can be used in scientific research for real-time analysis of environmental data. Their energy-efficient nature allows deployment in remote locations to monitor climatic and ecological changes over extended periods without requiring frequent maintenance.
Each of these applications leverages the unique capabilities of SNNs, such as their energy efficiency, real-time processing, and ability to handle sparse and temporal data, making them suitable for a wide range of tasks across various disciplines.

Simulation Tools for SNNs

Several tools and libraries are available for simulating and working with SNNs, such as:
  • NEST: A simulator for large scale networks of various neuron models.
  • Brian: A Python-based simulator that is flexible and easy to use.
  • SpiNNaker and Loihi: Neuromorphic hardware platforms designed to run SNNs efficiently.
  • SNNTorch: designed to be used intuitively with PyTorch, as though each spiking neuron were simply another activation in a sequence of layers. It is therefore agnostic to fully-connected layers, convolutional layers, residual connections, etc. (my favorite one)
Among the packages I experimented with for SNNs, I found SNNTorch to be the easiest and cleanest package available for building your SNN :)
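For instance, a minimal snnTorch forward pass might look like the sketch below. This follows snnTorch's documented Leaky API as I understand it; the layer sizes, beta and time window are arbitrary, so treat the details as an assumption rather than a recipe:

```python
import torch
import torch.nn as nn
import snntorch as snn

fc = nn.Linear(784, 10)             # ordinary PyTorch layer
lif = snn.Leaky(beta=0.9)           # leaky integrate-and-fire "activation"

mem = lif.init_leaky()              # initialise the membrane potential
x = torch.rand(25, 784)             # 25 time steps of input
spk_rec = []
for t in range(x.size(0)):          # iterate over time steps
    cur = fc(x[t].unsqueeze(0))     # weighted input current at step t
    spk, mem = lif(cur, mem)        # returns (spikes, updated membrane)
    spk_rec.append(spk)
print(torch.stack(spk_rec).sum())   # total output spikes over the window
```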
Perhaps he could have a look around Brainchip?

Challenges and Future Directions​

  • Learning Rules: Developing learning rules for Spiking Neural Networks (SNNs) that are computationally efficient and capable of solving complex tasks remains a major area of ongoing research.
  • Hardware Implementation: Although neuromorphic chips specifically designed for SNNs exist, broader adoption requires further advances in hardware technology.
  • Challenges in SNN Utilization: There are notable challenges associated with working with SNNs, such as the complexity of training them using traditional methods like backpropagation, and the absence of a standardized framework or toolkit as mature as those available for traditional Artificial Neural Networks (ANNs).
  • Research Efforts: Ongoing research efforts are focused on overcoming these challenges by developing new learning rules tailored for SNNs and exploring their applications in fields like robotics, where real-time processing and energy efficiency are essential.

Potential and Insights

  • Biological Realism and Computational Efficiency: SNNs offer promising avenues for advancing the capabilities of artificial neural systems, especially in areas where biological realism and computational efficiency are paramount. The models and mechanisms of SNNs enable them to process information in ways that closely resemble biological systems, potentially allowing them to handle tasks involving complex temporal dynamics and sensory processing efficiently.
  • Contribution to Neuroscience and AI: This bio-inspired approach also provides valuable insights into brain functions, making SNNs a significant tool for advancing AI and understanding neural processes. The development of SNNs continues to be an active and evolving field of study in both academia and industry.
In conclusion, we explored the multifaceted landscape of Spiking Neural Networks (SNNs), detailing their architecture, operational principles, and practical applications. By juxtaposing SNNs with conventional Artificial Neural Networks (ANNs), we have illuminated the unique advantages that SNNs offer, particularly in terms of energy efficiency and their capability to process real-time data. Despite the challenges in training and implementation that currently hinder their widespread adoption, SNNs promise significant advancements in both computational neuroscience and real-world applications.

The future of SNNs holds the potential for transformative breakthroughs in how we approach the design and implementation of neural systems, paving the way for more sustainable, efficient, and biologically realistic computing models. As we continue to refine the technologies and methodologies underpinning SNNs, their integration into everyday technology seems not just feasible but inevitable, heralding a new era of neural computation."

All clear and understood?
 
Last edited:
  • Like
  • Haha
  • Fire
Reactions: 20 users

The Pope

Regular
Our 2024 Embedded Vision Summit’s booth is much better-looking than that tiny space at the 2024 embedded world (which was part of the tinyML Pavilion)!

Looks like one of the VVDN Edge AI Boxes on the left?

I can also spot the audio denoising demo that was presented at the AGM (orange screen on the right).

Maybe it is just me, but I really like Qualcomm being right next door to the BRN displays. Purely my point of view… hmmm, maybe it's not just me.
 
  • Like
  • Love
  • Wow
Reactions: 18 users

stockduck

Regular
100% correct Dio.

Mouna worked with us for about 6 months on a project that resulted in a shared Patent Application in 2017.

Named inventors are Peter, Mouna and Nic Oros.

US10157629B2 Low power neuromorphic voice activation system and method

ALSO....something very new, this AUSTRALIAN FILING is the first that I believe names us in the Agriculture sector !!
It was only filed 6 days ago !!

AU2023255049A1 System and Method for growing fungal mycelium

Love Tech and Akida 💘


Thank you @TECH for sharing!

If BrainChip is trying to solve agricultural problems with neuromorphic technology, management should look into the latest findings on syntropic agriculture and biocyclic humus soil. In my opinion, both are great pillars of hope for sufficient, affordable and healthy nutrition for the world's population in times of man-made, accelerating climate change. (google translator)



 
  • Like
  • Wow
Reactions: 4 users
So as far as I understand, you never worked in this field. Is that correct? Because normally you would respond with facts. I gave you some points on why the interview is made the way it is. All you can do is tell me your blood is boiling and insult me. Bro, you are nothing, that's all. Igno
If you want some facts and points of view on TV production, sure, let's talk off here, as this is a BRN forum, not your personal insult forum. I would be glad to rub your nose in it. Let me know how to get in contact with you
 
  • Like
  • Fire
Reactions: 3 users

The Pope

Regular
How do we know Sean was going to South Korea on the way home? Did he say that?
Yes he did. I had a chat to him. All good.
 
  • Like
  • Love
  • Fire
Reactions: 11 users
Just to close the topic (because you're so professional)… I found this for you in English:

In documentaries and other non-fiction films, interviewees typically do not look directly at the camera during interviews for a few reasons:

  1. Maintaining Authenticity: Looking directly at the camera can break the illusion of a natural conversation between the interviewer and the interviewee. By looking at the interviewer or off to the side, the interviewee appears to be speaking directly to the interviewer rather than to the audience.
  2. Avoiding Distraction: When an interviewee looks directly at the camera, it can be distracting for the viewer. It can feel like the interviewee is addressing the audience rather than having a conversation with the interviewer.
  3. Focus on the Interviewer: By looking at the interviewer, the interviewee can maintain focus on the conversation and the questions being asked. This can help them provide more thoughtful and genuine responses.
  4. Conventional Style: Not looking at the camera during interviews has become a convention in documentary filmmaking. This style helps maintain consistency across different interviews and documentaries.
Overall, not looking at the camera during interviews in documentaries is a stylistic choice that helps create a more natural and engaging viewing experience for the audience.
In documentaries and other non-fiction films, interviewees typically do not look directly at the camera during interviews for a few reasons:

There is a difference: in the presentation style of two talking heads, as they call it, there is no third party as you put it. The third party is the camera, the people who are watching the TV. Talking to us, looking us in the eye, showing us his intentions, his mannerisms, etc.
As I said to you mate, if you wish to continue to attack me, let's move it from here.
 
Last edited:
  • Like
Reactions: 2 users

Hrdwk

Regular


Some interesting capabilities happening on-device.
 
  • Like
  • Wow
  • Love
Reactions: 6 users

CHIPS

Regular
  • Like
Reactions: 3 users
In documentaries and other non-fiction films, interviewees typically do not look directly at the camera during interviews for a few reasons:

There is difference when in the presentation style of two talking heads as they call it there is no third party as you put it . The third party is the camera the People who are watching the TV . Talking to us , looking us in the eye, showing us his intentions his mannerism ex.
As I said to you mate if you wish to continue to attack me let’s move it from here
OK, you both win, now....
 
  • Like
  • Haha
  • Fire
Reactions: 36 users