December 22, 2024

Facebook’s parent company, Meta, has begun investing in and developing tools to allow computers to “hear” what another person is hearing by reading their brainwaves, a monumental step forward for neuroscience’s ability to interpret thoughts.

While Meta’s research is in the early stages, the company is funding research into artificial intelligence to help people with brain injuries communicate by recording their brain activity without the highly invasive procedure of implanting electrodes in their brains. The company announced in late August that it had compiled data from several subjects listening to audio and compared the audio with the people’s brain activity. It used that information to teach artificial intelligence to determine which brain activities correlate with specific words.

“The results of our research are encouraging because they show that self-supervised trained AI can successfully decode perceived speech from noninvasive recordings of brain activity, despite the noise and variability inherent in those data,” wrote Meta in a blog post.

The study looked at 169 adult participants drawn from multiple public datasets. Each person listened to stories or sentences read aloud while scientists recorded their brain activity. The data from those scans were then fed into an AI model in hopes that it could find patterns, effectively “hearing” what the participant was listening to. What made the task difficult was that the brain activity was captured with noninvasive methods, which left the signal very “noisy.” Capturing accurate recordings of human brainwaves without attaching electrodes requires far more expensive equipment, which makes the approach harder to use. A number of biological factors, such as the skull and skin, can also distort the signal.
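To make the comparison concrete, the sketch below shows the general kind of training the article describes: teaching a model to match windows of noisy, noninvasively recorded brain activity to the audio clips the listener heard, so that matched pairs end up closer together than mismatched ones. This is a hypothetical illustration in PyTorch; the network sizes, sensor counts, and the contrastive objective are illustrative assumptions, not Meta’s published code.

```python
# Hypothetical sketch of matching brain recordings to heard audio with a
# contrastive objective. All shapes, module names, and hyperparameters are
# illustrative assumptions, not Meta's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BrainEncoder(nn.Module):
    """Maps a window of multi-channel brain recordings to an embedding."""

    def __init__(self, n_channels=208, dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 128, kernel_size=7, padding=3),
            nn.GELU(),
            nn.Conv1d(128, 128, kernel_size=7, padding=3),
            nn.GELU(),
        )
        self.proj = nn.Linear(128, dim)

    def forward(self, x):              # x: (batch, channels, time)
        h = self.conv(x).mean(dim=-1)  # pool over time -> (batch, 128)
        return F.normalize(self.proj(h), dim=-1)


class AudioEncoder(nn.Module):
    """Stand-in for a speech model that embeds the audio the person heard."""

    def __init__(self, n_features=80, dim=256):
        super().__init__()
        self.proj = nn.Linear(n_features, dim)

    def forward(self, a):              # a: (batch, frames, features)
        return F.normalize(self.proj(a.mean(dim=1)), dim=-1)


def contrastive_loss(brain_emb, audio_emb, temperature=0.07):
    """Each brain window should score highest against its own audio clip."""
    logits = brain_emb @ audio_emb.t() / temperature      # (batch, batch)
    targets = torch.arange(len(logits), device=logits.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    brain_enc, audio_enc = BrainEncoder(), AudioEncoder()
    opt = torch.optim.Adam(
        list(brain_enc.parameters()) + list(audio_enc.parameters()), lr=1e-4
    )

    # Random stand-ins for real (brain recording, heard audio) pairs.
    brain = torch.randn(16, 208, 360)   # 16 windows, 208 sensors, 360 samples
    audio = torch.randn(16, 100, 80)    # 16 clips, 100 frames, 80 mel features

    loss = contrastive_loss(brain_enc(brain), audio_enc(audio))
    loss.backward()
    opt.step()
    print(f"toy contrastive loss: {loss.item():.3f}")
```

The appeal of this kind of setup is that it does not need word-level labels: the model only has to learn which recording goes with which clip, which is one way to cope with the noise in noninvasive data the article mentions.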

There are also limits to the ability to determine if certain data points correlate with specific words. “Even if we had a very clear signal, without machine learning, it would be very difficult to say, ‘OK, this brain activity means this word, or this phoneme, or an intent to act, or whatever,’” Jean Remi King, a researcher at Facebook Artificial Intelligence Research Lab, told Time.

The results of Meta’s research are notable on their own but will require further research and development before they can be replicated and turned into anything with commercial application. “What patients need down the line is a device that works at bedside and works for language production,” King said. “In our case, we only study speech perception. So I think one possible next step is to try to decode what people attend to in terms of speech — to try to see whether they can track what different people are telling them.”

While the practice of decoding what others have already heard may not seem practical at first, the AI researcher is convinced that it offers insights into what brains typically transmit during listening or speech. “I take this [study] more as a proof of principle that there may be pretty rich representations in these signals — more than perhaps we would have thought,” King said.
