BRN Discussion Ongoing

IloveLamp

Top 20

1000018026.jpg
1000018028.jpg
1000018022.jpg
 
  • Like
  • Fire
  • Love
Reactions: 60 users
We're all a little frustrated. All good. Just pointing out that it is hard to understand what someone means in the written word on a forum. Unless of course someone fires both barrels at someone and leaves no doubt. 🤣

SC
I think it's just a cultural thing SC..
I tried Googling "German comedians" and this is all that came up 🤔..


20240830_205157.jpg

20240830_205221.jpg
 
  • Haha
  • Like
Reactions: 14 users

Rach2512

Regular
  • Like
  • Love
  • Fire
Reactions: 45 users

7für7

Top 20
It's pretty simple: your post insinuated that if I didn't put people on ignore, then I might know what had been posted.
I wasn't having a go at you; my comment was nothing more than that. You are the only person I have thought about ignoring.
Conversation over.
It's not my fault that you took my first post so seriously. I even added smileys, but it seems like you have a problem with me. Just ignore me if you want peace of mind, like three quarters of the forum does. So, let's consider the topic closed now.
 
  • Like
  • Sad
Reactions: 2 users
So glad I only use pictures

1725023813937.gif
 
  • Haha
Reactions: 7 users
Be nice if they were speaking to us... might help them.

At least they're acknowledging the potential power of SNNs, as they put it, and are exploring... it's a start.



Microsoft Research Blog

Innovations in AI: Brain-inspired design for more capable and sustainable technology

Published August 29, 2024
By Dongsheng Li (Principal Research Manager), Dongqi Han (Researcher), and Yansen Wang (Researcher)


As AI research and technology development continue to advance, there is also a need to account for the energy and infrastructure resources required to manage large datasets and execute difficult computations. When we look to nature for models of efficiency, the human brain stands out, resourcefully handling complex tasks. Inspired by this, researchers at Microsoft are seeking to understand the brain’s efficient processes and replicate them in AI.

At Microsoft Research Asia, in collaboration with Fudan University, Shanghai Jiao Tong University, and the Okinawa Institute of Technology, three notable projects are underway. One introduces a neural network that simulates the way the brain learns and computes information; another enhances the accuracy and efficiency of predictive models for future events; and a third improves AI’s proficiency in language processing and pattern prediction. These projects, highlighted in this blog post, aim not only to boost performance but also significantly reduce power consumption, paving the way for more sustainable AI technologies.

CircuitNet simulates brain-like neural patterns

Many AI applications rely on artificial neural networks, designed to mimic the brain’s complex neural patterns. These networks typically replicate only one or two types of connectivity patterns. In contrast, the brain propagates information using a variety of neural connection patterns, including feedforward excitation and inhibition, mutual inhibition, lateral inhibition, and feedback inhibition (Figure 1). These networks contain densely interconnected local areas with fewer connections between distant regions. Each neuron forms thousands of synapses to carry out specific tasks within its region, while some synapses link different functional clusters—groups of interconnected neurons that work together to perform specific functions.

Diagram illustrating four common neural connectivity patterns in the biological neural networks: Feedforward, Mutual, Lateral, and Feedback. Each pattern consists of circles representing neurons and arrows representing synapses. 
Figure 1: The four neural connectivity patterns in the brain. Each circle represents a neuron, and each arrow represents a synapse.
Inspired by this biological architecture, researchers have developed CircuitNet, a neural network that replicates multiple types of connectivity patterns. CircuitNet’s design features a combination of densely connected local nodes and fewer connections between distant regions, enabling enhanced signal transmission through circuit motif units (CMUs)—small, recurring patterns of connections that help to process information. This structure, shown in Figure 2, supports multiple rounds of signal processing, potentially advancing how AI systems handle complex information.
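The locally dense, globally sparse wiring described above can be sketched as a connectivity mask. This is a rough illustration only; the CMU count, neuron count, and global-connection probability are illustrative assumptions, not CircuitNet's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def circuit_adjacency(n_cmus=4, neurons_per_cmu=8, p_global=0.05):
    """Connectivity mask in the spirit of CircuitNet: dense links inside each
    CMU, sparse links between CMUs (all sizes here are hypothetical)."""
    n = n_cmus * neurons_per_cmu
    adj = rng.random((n, n)) < p_global          # sparse long-distance connections
    for c in range(n_cmus):
        lo, hi = c * neurons_per_cmu, (c + 1) * neurons_per_cmu
        adj[lo:hi, lo:hi] = True                 # dense connections within the CMU
    return adj

adj = circuit_adjacency()
local = adj[:8, :8].mean()      # within-CMU density (here: fully connected)
print(local)                    # 1.0
```

Most synapses stay inside a block on the diagonal (a CMU), with only a few off-diagonal entries linking distant regions, mirroring the efficiency argument in the text.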

Diagram illustrating CircuitNet's architecture. On the left, diagrams labeled “Model Inputs” and “Model Outputs” show that CircuitNet can handle various input forms and produce corresponding outputs. The middle section, labeled “CircuitNet”, depicts several interconnected blocks called Circuit Motif Units (CMUs for short), which maintain locally dense communications through direct connections and globally sparse communications through their input and output ports. On the right, a detailed view of a single CMU reveals densely interconnected neurons, demonstrating how each CMU models a universal circuit motif.
Figure 2. CircuitNet’s architecture: A generic neural network performs various tasks, accepts different inputs, and generates corresponding outputs (left). CMUs keep most connections local with few long-distance connections, promoting efficiency (middle). Each CMU has densely interconnected neurons to model universal circuit patterns (right).
Evaluation results are promising. CircuitNet outperformed several popular neural network architectures in function approximation, reinforcement learning, image classification, and time-series prediction. It also achieved comparable or better performance than other neural networks, often with fewer parameters, demonstrating its effectiveness and strong generalization capabilities across various machine learning tasks. Our next step is to test CircuitNet’s performance on large-scale models with billions of parameters.

Spiking neural networks: A new framework for time-series prediction

Spiking neural networks (SNNs) are emerging as a powerful type of artificial neural network, noted for their energy efficiency and potential applications in fields like robotics, edge computing, and real-time processing. Unlike traditional neural networks, which process signals continuously, SNNs activate neurons only when a specific threshold is reached, generating spikes. This approach simulates the way the brain processes information and conserves energy. However, SNNs are weak at predicting future events from historical data, a key function in sectors like transportation and energy.
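The threshold behavior described above can be shown with a minimal leaky integrate-and-fire neuron; the leak factor and threshold here are arbitrary illustrative choices, not values from the Microsoft work:

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: accumulate input, spike on threshold, reset."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x          # leaky accumulation of input current
        if v >= threshold:
            spikes.append(1)      # emit a spike only when the threshold is crossed
            v = 0.0               # reset membrane potential after spiking
        else:
            spikes.append(0)      # otherwise stay silent (no event, no energy spent)
    return spikes

print(lif_spikes([0.6, 0.6, 0.1, 0.9, 0.2]))   # [0, 1, 0, 0, 1]
```

The energy argument falls out of the sparsity: downstream computation happens only at the few time steps that carry a spike.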

To improve SNN’s predictive capabilities, researchers have proposed an SNN framework designed to predict trends over time, such as electricity consumption or traffic patterns. This approach utilizes the efficiency of spiking neurons in processing temporal information and synchronizes time-series data—collected at regular intervals—and SNNs. Two encoding layers transform the time-series data into spike sequences, allowing the SNNs to process them and make accurate predictions, shown in Figure 3.

Diagram illustrating a new framework for SNN-based time-series prediction. The image shows the process starting with time series input, which is encoded into spikes by a novel spike encoder. These spikes are then fed into different SNN models: (a) Spike-TCN, (b) Spike-RNN, and (c) Spike-Transformer. Finally, the learned features are input into a projection layer for prediction.
Figure 3. A new framework for SNN-based time-series prediction: Time series data is encoded into spikes using a novel spike encoder (middle, bottom). The spikes are then processed by SNN models (Spike-TCN, Spike-RNN, and Spike-Transformer) for learning (top). Finally, the learned features are fed into the projection layer for prediction (bottom-right).
Tests show that this SNN approach is highly effective for time-series prediction, often matching or outperforming traditional methods while significantly reducing energy consumption. SNNs successfully capture temporal dependencies and model time-series dynamics, offering an energy-efficient approach that closely aligns with how the brain processes information. We plan to keep exploring brain-inspired ways to further improve SNNs.

Refining SNN sequence prediction

While SNNs can help models predict future events, research has shown that their reliance on spike-based communication makes it challenging to directly apply many techniques from artificial neural networks. For example, SNNs struggle to effectively process rhythmic and periodic patterns found in natural language processing and time-series analysis. In response, researchers developed a new approach for SNNs called CPG-PE, which combines two techniques:

  1. Central pattern generators (CPGs): Neural circuits in the brainstem and spinal cord that autonomously generate rhythmic patterns, controlling functions like movement, breathing, and chewing
  2. Positional encoding (PE): A process that helps artificial neural networks discern the order and relative positions of elements within a sequence
By integrating these two techniques, CPG-PE helps SNNs discern the position and timing of signals, improving their ability to process time-based information. This process is shown in Figure 4.
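In the spirit of Figure 4, a minimal sketch: periodic (CPG-like) spike channels are concatenated to the input spike matrix, and a linear layer projects the (D + 2N) channels back to D. The sizes, the use of thresholded sinusoids as the rhythmic channels, and the random projection are all assumptions for illustration, not the CPG-PE implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

T, D, N = 16, 4, 2                               # time steps, channels, CPG pairs (illustrative)
X = rng.integers(0, 2, (T, D)).astype(float)     # binary input spike matrix, shape (T, D)

# CPG-inspired positional encoding: 2N periodic channels (sin/cos pairs at
# different frequencies), thresholded into rhythmic spike trains.
t = np.arange(T)[:, None]
freqs = 2 * np.pi / (4.0 * (np.arange(N) + 1))[None, :]
pe = np.concatenate([np.sin(t * freqs), np.cos(t * freqs)], axis=1)   # (T, 2N)
pe_spikes = (pe > 0).astype(float)

# Concatenate PE channels, then map (D + 2N) back to D as in Figure 4.
W = rng.normal(size=(D + 2 * N, D))              # hypothetical linear transformation
X_prime = np.concatenate([X, pe_spikes], axis=1) @ W   # (T, D)

# A spiking neuron layer would then threshold X' into the output spike matrix.
X_output = (X_prime > 0).astype(float)
print(X_output.shape)                            # (16, 4)
```

Because the added channels repeat with fixed periods, identical input spikes at different positions in the sequence produce different combined representations, which is how the network gains a sense of timing.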

Diagram illustrating the application of CPG-PE in a SNN. It shows three main components: an input spike matrix labeled “X”, a transformation process involving positional encoding and linear transformation to produce “X’”, and the output from a spiking neuron layer labeled “X_output”. The input matrix “X” has multiple rows corresponding to different channels or neurons, each containing spikes over time steps. The transformation process maps the dimensionality from (D + 2N) to D. The spiking neuron layer takes the transformed input “X’” and produces the output spike matrix “X_output”.
Figure 4: Application of CPG-PE in an SNN. X, X′, and X_output are spike matrices.
We evaluated CPG-PE using four real-world datasets: two covering traffic patterns, and one each for electricity consumption and solar energy. Results demonstrate that SNNs using this method significantly outperform those without positional encoding (PE), shown in Table 1. Moreover, CPG-PE can be easily integrated into any SNN designed for sequence processing, making it adaptable to a wide range of neuromorphic chips and SNN hardware.

Table showing experimental results of time-series forecasting on two datasets, Metr-la and Pems-bay, with prediction lengths of 6, 24, 48, and 96. The table compares the performance of various models, including different configurations of SNN, RNN, and Transformers. Performance metrics such as RSE and R^2 are reported. The best SNN results are highlighted in bold, and up-arrows indicate higher scores, representing better performance.
Table 1: Evaluation results of time-series forecasting on two benchmarks with prediction lengths 6, 24, 48, 96. “Metr-la” and “Pems-bay” are traffic-pattern datasets. The best SNN results are in bold. The up-arrows indicate a higher score, representing better performance.

Ongoing AI research for greater capability, efficiency, and sustainability

The innovations highlighted in this blog demonstrate the potential to create AI that is not only more capable but also more efficient. Looking ahead, we’re excited to deepen our collaborations and continue applying insights from neuroscience to AI research, continuing our commitment to exploring ways to develop more sustainable technology.
 
  • Like
  • Fire
  • Love
Reactions: 31 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Here's another example of Microsoft researching SNNs. Research paper published Feb 2024.

This one looks like it would be a perfect fit with the incorporation of TENNs.

Screenshot 2024-08-31 at 11.55.45 am.png





 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 33 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Published Sat, Aug 31, 2024

Extract
Screenshot 2024-08-31 at 12.33.04 pm.png
 
  • Like
  • Fire
  • Love
Reactions: 45 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
KAIST strikes again! 😫

The "heterovalent ion doping" method??

giphy.gif




Korean Researchers Discover Method to Enhance Next-Gen. Neuromorphic Computer Performance

  • Editor Jasmine Choi
  • 2024.06.21 16:52

Neuromorphic computing technology, which mimics the human brain to implement artificial intelligence (AI) operations. (Photo by Getty Images Bank)

A team of South Korean researchers has developed a technology that improves the reliability, and thus the commercial viability, of next-generation neuromorphic computing devices by addressing their irregular characteristics.
Professor Choi Sin-hyeon from the Korea Advanced Institute of Science and Technology (KAIST) and his team, in collaboration with researchers from Hanyang University, announced on June 21 that they have developed a heterovalent ion doping method that improves the reliability and performance of next-generation memory devices.

Neuromorphic computing is a technology that implements AI operations by emulating the human brain. It uses memristors as basic units, which are advantageous for their low power consumption, high integration, and efficiency. Memristors, a portmanteau of memory and resistor, are memory devices that retain their previous states. However, due to their unstable characteristics, memristors often have low reliability.
The research team developed a heterovalent ion doping method to improve the uniformity and performance of these devices. Heterovalent ions are ions with a different valency from the atoms originally present, valency being a measure of an atom's capacity to form chemical bonds.
The team demonstrated the performance of heterovalent ion doping through atomic-level simulation analysis. The doped heterovalent ions attracted nearby oxygen vacancies, stabilizing device operation. Additionally, the space near these ions expanded, allowing faster device operation. According to the team's analysis, the performance of memristors doped with heterovalent ions improved in both crystalline and amorphous environments.

Professor Choi Sin-hyeon stated, "The heterovalent ion doping method can enhance the reliability and performance of neuromorphic devices," and added, "It can contribute to the commercialization of next-generation neuromorphic computing based on memristors."
The results of this study were published in the international academic journal “Science Advances” on June 7.

 
  • Haha
  • Love
  • Sad
Reactions: 6 users
  • Love
Reactions: 1 users

HopalongPetrovski

I'm Spartacus!
Ah, of course. The old atomic-level simulation analysis trick. 🤣
By Jove, why didn't we think of that???
Time to send in a gunboat!!! What!!!
victormeldrew0410_468x493.jpg
 
  • Haha
Reactions: 5 users

CHIPS

Regular


Neuromorphic Computing

The use of Neural Networks is leading to exciting developments in artificial intelligence, with the ability to increase machine learning accuracy in areas like speech recognition and image classification.

However, reaching this high degree of precision using traditional computing architectures takes a significant power toll. Since much of this power consumption is related to data movement between a system’s computing elements and memory modules, the industry is investigating new technologies that can reduce this data movement. The solution lies in integrating dense, low-power NVM closer to the computing elements (also called in-memory computing), and Weebit ReRAM is an ideal candidate.
In addition, the next wave of AI and machine learning architectures will take a new approach, analog (or neuromorphic) computing, whereby the computation is done within the storage element in an analog fashion. These architectures are designed to accurately emulate the brain’s operation and can therefore achieve orders of magnitude better power efficiency.
The Weebit ReRAM cell functions similarly to a synapse in the brain, making it a promising solution for neuromorphic computing.
Many institutes are now studying this domain, which has the potential to become a huge market in the future.
We are collaborating with research partners – both academia and industry – to explore the possibilities of using ReRAM for neuromorphic computing. This includes ongoing projects with the Non-Volatile Memory Research Group of the Indian Institute of Technology Delhi (IITD), the Institute of Nanoscience and Nanotechnology (INN) NCSR ‘Demokritos’, the Politecnico di Milano and the Technion – Israel Institute of Technology, as well as CEA-Leti. Together with Leti we were the first in the industry to demonstrate ReRAM-based spiking neural networks (SNN).
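The analog in-memory idea described above can be sketched as a vector-matrix multiply performed directly by a crossbar of resistive cells; the conductance and voltage values below are purely illustrative, not Weebit figures:

```python
import numpy as np

# Hypothetical ReRAM crossbar: each cell's conductance G[i][j] stores a weight.
# Applying input voltages V to the rows yields column currents I = V @ G
# (Ohm's law plus Kirchhoff's current law), i.e. a multiply-accumulate
# performed "in memory", with no data movement to a separate compute unit.
G = np.array([[0.2, 0.5],
              [0.4, 0.1],
              [0.3, 0.3]])       # cell conductances, arbitrary units (illustrative)
V = np.array([1.0, 0.0, 1.0])    # input voltages applied to the three rows

I = V @ G                        # column currents are the analog dot products
print(I)                         # [0.5 0.8]
```

In a synaptic reading, each conductance plays the role of a synaptic weight and the summed column current is the neuron's weighted input, which is why a ReRAM cell is said to function like a synapse.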
 
  • Like
  • Thinking
  • Wow
Reactions: 10 users
KAIST strikes again! 😫

The "heterovalent ion doping" method??

View attachment 68757



Korean Researchers Discover Method to Enhance Next-Gen. Neuromorphic Computer Performance​

  • Editor Jasmine Choi
  • 2024.06.21 16:52

Print URL Copy Fonts Size Down Fonts Size Up

facebook(으)로 기사보내기
twitter(으)로 기사보내기 URL Copy(으)로 기사보내기 링크드인(으)로 기사보내기 Send to Email Share Scrap
Neuromorphic computing technology, which mimics the human brain to implement artificial intelligence (AI) operations. (Photo by Getty Images Bank)



Neuromorphic computing technology, which mimics the human brain to implement artificial intelligence (AI) operations. (Photo by Getty Images Bank)

A team of South Korean researchers has developed a technology that enhances the reliability and commercialization of next-generation neuromorphic computing devices by addressing their irregular characteristics.
Professor Choi Sin-hyeon from the Korea Advanced Institute of Science and Technology (KAIST) and his team, in collaboration with researchers from Hanyang University, announced on June 21 that they have developed a heterovalent ion doping method that improves the reliability and performance of next-generation memory devices.

Neuromorphic computing is a technology that implements AI operations by emulating the human brain. It uses memristors as basic units, which are advantageous for their low power consumption, high integration, and efficiency. Memristors, a portmanteau of memory and resistor, are memory devices that retain all previous states. However, due to their unstable characteristics, memristors often have low reliability.
The research team developed a “Heterovalent ion doping method” to improve the uniformity and performance of these devices. Heterovalent ions are ions that have a different valency from the atoms that originally existed, with valency being a measure of bonding.
The team demonstrated the effect of heterovalent ion doping through atomic-level simulation analysis. The doped heterovalent ions attracted nearby oxygen vacancies, stabilising device operation. In addition, the space near these ions was expanded, allowing for faster device operation. According to the team's analysis, the performance of memristors doped with heterovalent ions improved in both crystalline and amorphous environments.
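To see why "uniformity" matters here: memristor-based neuromorphic chips store weights as analogue conductances, so device-to-device spread when programming cells directly corrupts the stored weights. A tiny illustrative model of that effect, with entirely made-up sigma values standing in for the article's claim that doping tightens the distribution (these are not measurements from the KAIST/Hanyang study):

```python
import random

# Illustrative-only model of memristor programming variability.
# Each "cell" is programmed toward a target conductance, but lands at a
# value with Gaussian spread sigma. Smaller sigma = more uniform devices.
random.seed(42)

def program_cells(target_g, sigma, n=1000):
    """Program n cells toward target_g with device-to-device spread sigma."""
    return [random.gauss(target_g, sigma) for _ in range(n)]

def spread(samples):
    """Standard deviation of the resulting conductances."""
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return var ** 0.5

undoped = program_cells(1.0, sigma=0.20)  # assumed "unstable" devices
doped = program_cells(1.0, sigma=0.05)    # assumed tighter, doped devices

print(spread(undoped), spread(doped))
```

The doped population clusters much closer to the target, which is what makes the stored weights (and hence the network's accuracy) reproducible.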

Professor Choi Sin-hyeon stated, "The heterovalent ion doping method can enhance the reliability and performance of neuromorphic devices,” and added, “It can contribute to the commercialization of next-generation neuromorphic computing based on memristors."
The results of this study were published in the international academic journal “Science Advances” on June 7.

Maybe they should just try digital..

Has anyone grabbed the patent for that? 🤔..
 
  • Haha
  • Thinking
Reactions: 2 users
Published Sat, Aug 31, 2024.

Extract
View attachment 68756

…sounds like catch-up; we at BRN are familiar with setting these trends, which are only now slowly being understood by others in the industry…!

GO Chippa !
 
  • Like
  • Fire
Reactions: 7 users

"This year, we are deploying a limited number of NEO units in selected homes for research and development purposes"

Ahead of Tesla in that respect, which is currently only using Optimus in its factories, as far as I'm aware..



Thought this was some kind of skit, just to make me depressed, before finding the announcement..
 
Last edited:
  • Fire
  • Wow
  • Like
Reactions: 3 users

IloveLamp

Top 20
Last edited:
  • Like
  • Fire
  • Love
Reactions: 41 users
Very good news
 
  • Like
Reactions: 2 users

Guzzi62

Regular
Ask yourself, why would an employee from Qualcomm repost this.......

Happy Fathers Day 😊

View attachment 68775 View attachment 68776
But but, I posted the Lamborghini website to some friends.

Even if BRN ends up at $100 a share, I would never buy one, ever!

I can't take those private linkedin posts as dot joining to be honest, they MIGHT BE, but it's far from given.

Here is the Lambo web page so you have something nice to look at on a Sunday Father's Day, LOL.

 
  • Like
  • Haha
Reactions: 5 users
But but, I posted the Lamborghini website to some friends.

Even if BRN ends up at $100 a share, I would never buy one, ever!

I can't take those private linkedin posts as dot joining to be honest, they MIGHT BE, but it's far from given.

Here is the Lambo web page so you have something nice to look at on a Sunday Father's Day, LOL.

Then why are you here, mate? To cause trouble, no doubt.
On your bike.
 
  • Like
Reactions: 9 users

Kachoo

Regular
Ask yourself, why would an employee from Qualcomm repost this.......

Happy Fathers Day 😊

View attachment 68775 View attachment 68776
I mean, it's good, but I would say they are likely a fan and an investor. If Qualcomm hasn't made any official statement, and they are working with BRN and want to keep it hush, then he would likely be fired for letting info out. So I'm reluctant to believe this is more than just a fan. Still good if it is.

Just how I'll process these leads from now on. I mean, TATA promotes us, but there is no money from it yet.
 
  • Like
Reactions: 8 users
Top Bottom