A patient with amyotrophic lateral sclerosis uses a brain-computer interface (BCI) developed by researchers at the University of California, Davis. The patient speaks with family in real time through a computer and can also sing simple melodies. / Courtesy of UC Davis

The day is not far off when people who lost the ability to speak due to paralysis will regain their voices. American scientists have developed a technology that conveys thoughts in real time through artificial intelligence (AI).

Researchers at the University of California, Davis (UC Davis) announced on the 12th that they have developed a brain-computer interface (BCI) that decodes brain signals and translates them into speech in real time. The findings were published in the international journal Nature the same day.

BCI technology captures and analyzes neural signals from the brain, enabling interaction with external devices such as computers. In other words, it translates thoughts into speech, text, or machine actions. Previous studies have successfully converted the brain signals of paralyzed patients into speech, allowing them to communicate with others.

The issue is speed. Until now, the delay in converting thoughts into speech made natural conversation difficult. The UC Davis researchers cut that delay to 0.025 seconds, a speed comparable to actual human conversation.

The research team implanted four microelectrode arrays into the region of the brain that controls speech, recording the activity of hundreds of neurons in real time. First, they showed sentences to the participants and asked them to attempt to repeat them while collecting the resulting brain signals. The AI learned which brain signals corresponded to which sentences and could then convert those signals into speech immediately.
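The procedure above can be illustrated with a toy sketch. Everything here is an illustrative assumption, not the researchers' actual model: the real system uses AI trained on recordings from hundreds of neurons, while this sketch just matches a short feature vector against previously learned "templates," one 25-millisecond window at a time.

```python
# Conceptual sketch of streaming brain-to-speech decoding.
# All names, numbers, and the matching method are illustrative
# assumptions, not the system described in the article.
import math

FRAME_MS = 25  # the reported ~0.025-second per-frame latency

# Hypothetical templates: average neural feature vectors recorded
# earlier while the participant attempted each sound.
templates = {
    "AH": [0.9, 0.1, 0.2],
    "EE": [0.1, 0.8, 0.3],
    "OO": [0.2, 0.2, 0.9],
}

def decode_frame(features):
    """Map one 25 ms window of neural features to the closest known sound."""
    return min(templates, key=lambda s: math.dist(features, templates[s]))

# Streaming decode: each incoming window is converted immediately,
# rather than waiting for a whole sentence to finish.
incoming = [[0.85, 0.15, 0.25], [0.15, 0.75, 0.35], [0.2, 0.25, 0.85]]
decoded = [decode_frame(f) for f in incoming]
print(decoded)  # → ['AH', 'EE', 'OO']
```

The point of the per-window loop is the latency claim: because each 25 ms frame is decoded as it arrives, output keeps pace with the speaker instead of trailing a sentence behind.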

The research team also tested the system on a patient with amyotrophic lateral sclerosis (ALS), a rare disease that slowly paralyzes the body's muscles, also known as Lou Gehrig's disease after the Major League Baseball player. The patient was unable to use facial muscles to speak, but with the newly developed AI system, they were able to communicate with family in real time, ask questions with intonation, and even sing simple melodies.

The researchers said the patient could produce not only pre-stored words but also new words spontaneously. Listeners understood the synthesized speech produced by the BCI system with approximately 60% accuracy; without the BCI, intelligibility was only 4%, an improvement of about 15-fold.
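The 15-fold figure follows directly from the two reported intelligibility rates:

```python
# Sanity check on the reported figures: ~60% of words understood
# with the BCI versus ~4% without it.
with_bci = 60    # percent understood with the BCI
without_bci = 4  # percent understood without it
print(with_bci / without_bci)  # → 15.0
```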

However, the technology is still in its early stages. Only one patient has shown successful results, and it remains to be confirmed whether the same effect holds for people who have difficulty speaking for other reasons, such as stroke. The research team plans to continue verification through a clinical trial named 'BrainGate2.'

Sergey Stavisky, a professor at UC Davis, said, "We will help more people communicate in their own voices," adding, "It is expected to greatly assist social participation and improve quality of life."

References

Nature (2025), DOI: https://doi.org/10.1038/s41586-025-09127-3