TechGirl
Founding Member
Just browsing around as I do & came across the Machine Learning Department at CMU.
No mention of us, but their recent news is all very relevant to us.
New Research Investigates How the Brain Processes Language
Aaron Aupperlee | Tuesday, November 29, 2022
New research from a team in the Machine Learning Department shows which regions of the brain process the meaning of combined words, and how the brain maintains and updates the meaning of words.
Humans accomplish a phenomenal number of tasks by combining pieces of information. We perceive objects by combining edges, categorize scenes by combining objects, interpret events by combining actions, and understand sentences by combining words. But researchers don't yet have a clear understanding of how the brain forms and maintains the meaning of the whole — such as a sentence — from its parts. School of Computer Science (SCS) researchers in the Machine Learning Department (MLD) have shed new light on the brain processes that support the emergent meaning of combined words.
Mariya Toneva, a former MLD Ph.D. student who is now faculty at the Max Planck Institute for Software Systems, worked with Leila Wehbe, an assistant professor in MLD, and Tom Mitchell, the Founders University Professor in SCS, to study which regions of the brain process the meaning of combined words and how the brain maintains and updates the meaning of words. This work could contribute to a more complete understanding of how the brain processes, maintains and updates the meaning of words, and could redirect research focus to areas of the brain suitable for future wearable neurotechnology, such as devices that decode what a person is trying to say directly from brain activity. These devices could help people with diseases such as Parkinson's or multiple sclerosis that limit muscle control.
Toneva, Mitchell and Wehbe used neural networks to build computational models that could predict the areas of the brain that process the new meaning of words when they are combined. They tested these models by recording the brain activity of eight people as they read a chapter of "Harry Potter and the Sorcerer's Stone." The results suggest that some regions of the brain process both the meaning of individual words and the meaning of combined words, while others process only the meanings of individual words. Crucially, the authors also found that one of the neural activity recording tools they used, magnetoencephalography (MEG), did not capture a signal that reflected the meaning of combined words. Since future wearable neurotechnology devices might use recording tools similar to MEG, one potential limitation is their inability to detect the meaning of combined words, which could affect their capacity to help users produce language.
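The general idea behind this kind of "encoding model" analysis — predicting brain recordings from stimulus features, then asking whether features of combined words explain activity beyond individual-word features — can be sketched roughly as follows. This is a toy illustration on synthetic data, not the authors' actual pipeline: the variable names, dimensions, use of plain least squares, and in-sample R² comparison are all simplifying assumptions.

```python
# Toy sketch of an encoding-model comparison (synthetic data, NOT the
# study's real pipeline): some simulated voxels respond only to
# individual-word features, others also to combined-word (context)
# features, and we ask which voxels gain variance explained when the
# combined-word features are added to the regression.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)

n_timepoints, n_voxels = 200, 50
d_word, d_context = 16, 16

# Synthetic stimulus features: individual-word embeddings, plus
# "context" embeddings standing in for combined-word meaning.
X_word = rng.standard_normal((n_timepoints, d_word))
X_context = rng.standard_normal((n_timepoints, d_context))

# Synthetic brain data: the first half of the voxels respond only to
# individual words; the second half also respond to context features.
W_word = rng.standard_normal((d_word, n_voxels))
W_ctx = rng.standard_normal((d_context, n_voxels))
W_ctx[:, : n_voxels // 2] = 0.0  # word-only voxels ignore context
noise = 0.1 * rng.standard_normal((n_timepoints, n_voxels))
Y = X_word @ W_word + X_context @ W_ctx + noise

def r2_per_voxel(X, Y):
    """Fit least squares per voxel and return R^2 of the fit."""
    B, *_ = lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    return 1.0 - resid.var(axis=0) / Y.var(axis=0)

r2_word = r2_per_voxel(X_word, Y)
r2_both = r2_per_voxel(np.hstack([X_word, X_context]), Y)
gain = r2_both - r2_word  # extra variance from combined-word features

print("mean gain, word-only voxels:   %.4f" % gain[: n_voxels // 2].mean())
print("mean gain, composition voxels: %.4f" % gain[n_voxels // 2 :].mean())
```

In this simulation, only the voxels built to respond to context features show a meaningful gain when combined-word features are added — the same logic the researchers use, in far more careful form, to argue that some brain regions represent composed meaning while others track individual words.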
The team's work builds on past research from Wehbe and Mitchell that used functional magnetic resonance imaging to identify the parts of the brain engaged as people read a chapter of the same Potter book. The result was the first integrated computational model of reading, identifying which parts of the brain are responsible for such subprocesses as parsing sentences, determining the meaning of words and understanding relationships between characters.
For more on the most recent findings, read the paper "Combining Computational Controls With Natural Text Reveals Aspects of Meaning Composition" in Nature Computational Science.
For More Information
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu