Facebook parent Meta announced it is launching a long-term research project to build next-generation AI that learns and processes speech and text the same way the human brain does. Meta described the effort as a step toward creating human-level AI.
Meta is partnering with NeuroSpin, a neuroimaging research center that images the human brain, and with Inria, the French national research institute for digital science, to study how the human brain processes speech and text and then compare that with how AI language models process the same input.
NeuroSpin is a research center specifically focused on brain imaging. Its researchers include physicists, mathematicians, neuroscientists, and doctors who work together to build tools for studying the human brain in different ways.
NeuroSpin explains what it does:
“Focused on neuroimaging, the research conducted ranges from technological and methodological developments (data acquisition and processing) to preclinical and clinical neuroscience, including cognitive neuroscience.”
“Today, we’re announcing a long-term AI research initiative to better understand how the human brain processes speech and text. In collaboration with neuroimaging center Neurospin (CEA) and Inria, we’re comparing how AI language models and the brain respond to the same spoken or written sentences.
We’ll use insights from this work to guide the development of AI that processes speech and text as efficiently as people.”
A shortcoming of current AI language models is that they need enormous numbers of examples in order to learn, whereas human brains can learn from only a few.
Current research into brain-like AI language models has found:
“Language models that most resemble brain activity are those that best predict the next word from context (e.g. once upon a …time).
While the brain anticipates words and ideas far ahead in time, most language models are trained to only predict the very next word. Unlocking this long-range forecasting capability could help improve modern AI language models.”
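The next-word objective described in that quote can be illustrated with a toy example. The sketch below (not Meta's actual model, just a hypothetical illustration) trains a simple bigram predictor that picks the most frequent continuation of a word from a tiny corpus. Real language models use neural networks trained on vast corpora, but the training signal, predicting the next word from context, is the same idea.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it and how often."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    following = counts.get(word.lower())
    if not following:
        return None
    return following.most_common(1)[0][0]

corpus = [
    "once upon a time there was a castle",
    "once upon a time there lived a king",
]
model = train_bigram(corpus)
print(predict_next(model, "a"))  # → time
```

As the quote notes, a model like this (and most modern language models) only looks one word ahead, while the brain appears to anticipate words and ideas much further into the future.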
The announcement cited current research into modeling AI on human brain activity, which used MRIs and other imaging tools to observe brain activity while people performed various language-related tasks.
The research paper cited is from 2021 and it is titled, Language processing in brains and deep neural networks: computational convergence and its limits (PDF).
A summary of the findings is discussed in the opening paragraphs of the research paper:
“The results show that (1) the position of the layer in the network and (2) the ability of the network to accurately predict words from context are the main factors responsible for the emergence of brain-like representations in artificial neural networks.
Together, these results show how perceptual, lexical and compositional representations precisely unfold within each cortical region and contribute to uncovering the governing principles of language processing in brains and algorithms.”
The significance of the above research is that studying how the brain processes data can yield insights for building similar processes into an algorithm.
The Meta research teams are using thousands of scans of human brain activity to see which regions of the brain are activated during specific tasks.
This research was said to show the “computational organization of the human brain” which yielded insights useful toward Meta’s goal of developing “human-level AI.”
The benefits aren’t limited to generating human-level AI; the research also helps neuroscientists better understand the human brain.
Read the Official Meta Announcement
Building AI That Processes Language as People Do
Read a More In-Depth Description of Meta’s Human-Level AI Research
Studying the brain to build AI that processes language as people do