In what seems like something out of a sci-fi movie, scientists have plucked the famous Pink Floyd song “Another Brick in the Wall” from individuals’ brains.
Using computer models, electrodes and brain scans, researchers have previously decoded words from people’s brain activity (SN: 11/15/22; SN: 5/1/23).
A new study, published August 15 in PLOS Biology, adds music to the mix, demonstrating that songs can also be decoded from brain activity and revealing that different brain areas pick up on a wide range of acoustic elements. The finding could eventually help improve devices that allow communication from people with paralysis or other conditions that limit one’s ability to speak.
Neuroscientist Ludovic Bellier of the University of California, Berkeley and his team analyzed brain activity recorded by electrodes implanted in the brains of people with epilepsy, who listened to the 1979 rock song while being monitored for the condition.
People’s nerve cells, particularly those in auditory areas, responded to hearing the song, and the electrodes detected not only neural signals associated with words but also rhythm, harmony and other musical aspects, the team found. The researchers then used those recordings to train a computer model that reconstructs sound from brain activity, and found that the model could recreate sounds resembling the song.
“It’s a real tour de force,” says Robert Zatorre, a neuroscientist at McGill University in Montreal who was not involved in the study. “Because you’re recording the activity of neurons directly from the brain, you get very direct information about exactly what the patterns of activity are.”
The study also reveals how different brain areas respond to music. In one region of the superior temporal gyrus (STG), located toward the bottom center of each side of the brain, activity intensified at the onset of specific sounds, such as a guitar note. A second area in the STG ramped up its activity and sustained it while vocals were sung.
The STG on the right side of the brain, but not the left, appears crucial for decoding music: When the researchers removed information from that area in their computer models, the accuracy of the song reconstruction dropped.
“Music is a core part of human experience,” says Bellier, who has been playing instruments since he was 6 years old. “Understanding how the brain processes music can really tell us about human nature. You can go to a country and not understand the language, but be able to enjoy the music.”
Further investigation of music perception will be challenging, because the brain regions that process music are hard to access without invasive methods. Zatorre also wonders whether the computer model generalizes beyond a single song. “Does [it] work on other kinds of sounds, like a dog barking or phone ringing?” he asks.
Bellier says the long-term goal is to decode and generate natural sounds as well as music. In the nearer term, incorporating the musical aspects of speech, such as pitch, timbre and volume, into brain-computer devices could help people with brain lesions communicate better.