In a major advance for language neuroscience, researchers at The University of Texas at Austin have developed an AI-based decoder that can translate brain activity into continuous text, allowing thoughts to be read non-invasively for the first time [1]. The decoder utilizes functional magnetic resonance imaging (fMRI) scan data and can reconstruct speech with high accuracy while people listen to or imagine a story. This breakthrough offers potential applications in restoring speech for individuals with conditions such as stroke or motor neurone disease, without the need for surgical implants, and opens up new possibilities for investigating dreams, background brain activity, and developing brain-computer interfaces.
Previous fMRI-based approaches were limited by the inherent sluggishness of the signal: fMRI tracks changes in blood flow, which rise and fall over roughly ten seconds, so a single scan reflects several seconds of speech rather than individual words. The researchers overcame this limitation by leveraging large language models: rather than attempting to read activity word by word, the decoder targets the semantic meaning of speech. While the system is impressive at capturing the gist and meaning of the original words, it sometimes struggles with aspects of language such as pronouns.
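To make this concrete, below is a minimal Python sketch of the general approach, not the authors' implementation: a language model proposes candidate next words, an encoding model predicts the fMRI response each candidate would evoke, and the candidate whose prediction best matches the observed scan is kept. The names `propose_continuations` and `predict_bold`, the toy embeddings, and the random "scans" are all illustrative stand-ins.

```python
# Sketch of the decoding idea in [1]: propose candidates with a language
# model, predict the fMRI (BOLD) response each would evoke, keep the best
# match. All components here are toy stand-ins, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "dog", "ran", "home", "she", "said", "quickly"]
EMBED = {w: rng.normal(size=16) for w in VOCAB}  # toy word embeddings

def propose_continuations(prefix, k=5):
    """Stand-in for a large language model: return k plausible next words."""
    return rng.choice(VOCAB, size=k, replace=False).tolist()

def predict_bold(words):
    """Stand-in encoding model: map the semantic content of recent words to
    a predicted BOLD response vector. Because BOLD integrates over several
    seconds, the prediction pools a window of words rather than one."""
    recent = words[-5:]  # one scan reflects several words of speech
    return np.mean([EMBED[w] for w in recent], axis=0)

def decode_step(prefix, observed_bold):
    """Keep the candidate whose predicted response best matches the scan."""
    candidates = propose_continuations(prefix)
    scores = [
        -np.linalg.norm(predict_bold(prefix + [w]) - observed_bold)
        for w in candidates
    ]
    return candidates[int(np.argmax(scores))]

# Usage: decode one word per simulated scan.
decoded = ["the"]
for _ in range(6):
    scan = rng.normal(size=16)  # placeholder for a real fMRI frame
    decoded.append(decode_step(decoded, scan))
print(" ".join(decoded))
```

The key design point this illustrates is that the language model supplies word-level detail the scan cannot, while the scan only has to arbitrate between semantically different candidates.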
Basis of Language: Brain Regions
Language is a remarkable cognitive ability that distinguishes humans from other species. It enables us to communicate, express our thoughts and emotions, and engage in complex social interactions.
The two key brain regions implicated in language are Broca's area and Wernicke's area, both situated in the left hemisphere for most right-handed individuals [2]. Broca's area, located in the frontal lobe, plays a crucial role in language production and articulation. On the other hand, Wernicke's area, situated in the temporal lobe, is involved in language comprehension.
However, it is important to note that language processing is not confined to these two regions alone. The language network is extensive and interconnected, encompassing regions such as the angular gyrus, the superior temporal gyrus, and the inferior parietal lobule. These areas work in harmony to facilitate various aspects of language, including syntax, semantics, phonology, and pragmatics.
Measuring Language Processing in Humans
To investigate language processing in the brain, researchers employ a variety of methods, including fMRI, electroencephalography (EEG), magnetoencephalography (MEG), and lesion studies. fMRI examines brain activity indirectly by measuring changes in blood flow, providing insight into which regions are active during specific language tasks. EEG and MEG record the electrical and magnetic signals generated by neural activity, offering millisecond-scale temporal resolution but coarser spatial resolution. Lesion studies examine individuals with brain damage to identify the areas critical for language function [3].
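To see why fMRI trades temporal resolution for spatial resolution, the short sketch below convolves two word-onset events, one second apart, with a canonical double-gamma haemodynamic response function; the HRF parameters are a standard textbook approximation used purely for illustration, not a fitted model.

```python
# Why fMRI is temporally coarse: BOLD is the slow haemodynamic response to
# neural events, so words spoken a second apart blur into one response.
import numpy as np
from scipy.stats import gamma

dt = 0.1                               # seconds per sample
t = np.arange(0, 30, dt)

def canonical_hrf(t):
    """Canonical double-gamma haemodynamic response (peak ~5 s)."""
    peak = gamma.pdf(t, 6)             # positive lobe
    undershoot = gamma.pdf(t, 16) / 6  # post-stimulus undershoot
    h = peak - undershoot
    return h / h.max()

events = np.zeros_like(t)
events[[round(2 / dt), round(3 / dt)]] = 1  # two "words", 1 s apart

bold = np.convolve(events, canonical_hrf(t))[: len(t)]
print(f"BOLD peaks at t = {t[bold.argmax()]:.1f} s for events at 2 s and 3 s")
```

The two events produce a single smeared peak several seconds later, which is exactly the lag the decoder described above has to work around; EEG and MEG, by contrast, register each event at millisecond precision.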
How We Acquire Language
Language acquisition is a remarkable feat, beginning early in infancy and progressing throughout childhood. Infants start by perceiving and distinguishing speech sounds, eventually learning to produce them. One influential theory of language acquisition is the "Critical Period Hypothesis," suggesting that there is a window of opportunity during which language acquisition is most efficient [4]. This critical period is believed to occur during early childhood, highlighting the importance of exposure to language during this developmental stage.
As children grow, they acquire vocabulary, grammar, and social language skills. The process involves both innate predispositions and environmental influences. Parents and caregivers play a crucial role in supporting language development through interactions, exposure to diverse language experiences, and modelling language structures.
Sign Language: A Unique Language Modality
Sign language, used primarily by the Deaf community, provides a fascinating contrast to spoken languages. It is a visual-gestural language that uses hand shapes, movements, facial expressions, and body postures to convey meaning. The neural mechanisms underlying sign language are similar to those of spoken language, with comparable brain regions involved in comprehension and production. Studies using fMRI have shown activation in Broca's and Wernicke's areas during sign language processing, reinforcing the idea that the brain is flexible in accommodating different modalities of language [5].
While the underlying neural mechanisms are similar, learning a sign language differs from learning a spoken language in several ways. First, sign language relies heavily on visual perception, spatial processing, and motor control: learners must acquire the ability to decode and produce complex visual gestures accurately. Second, the grammar and syntax of sign languages differ significantly from those of spoken languages; sign languages have their own linguistic structures and word orders, demanding separate learning processes.
---
Daniel Glassbrook, PhD
Daniel is a sports scientist and researcher, currently working as the first-team sports scientist for the Newcastle Falcons Rugby Club and as a postdoctoral researcher in sports-related concussion at Durham University.
References
1. Tang, J., LeBel, A., Jain, S., & Huth, A. G. (2023). Semantic reconstruction of continuous language from non-invasive brain recordings. Nature Neuroscience, 1-9.
2. Petrides, M. (2013). Neuroanatomy of language regions of the human brain. Academic Press.
3. Vaidya, A. R., Pujara, M. S., Petrides, M., Murray, E. A., & Fellows, L. K. (2019). Lesion studies in contemporary neuroscience. Trends in Cognitive Sciences, 23(8), 653-671.
4. Abello-Contesse, C. (2009). Age and the critical period hypothesis. ELT Journal, 63(2), 170-172.
5. MacSweeney, M., Woll, B., Campbell, R., McGuire, P. K., David, A. S., Williams, S. C., ... & Brammer, M. J. (2002). Neural systems underlying British Sign Language and audio-visual English processing in native users. Brain, 125(7), 1583-1593.