Al Keneda lost his vocal cords — and his voice — to cancer 16 years ago.
He remembers the panic he felt when he awoke in the ICU unable to speak, and the challenges of learning to communicate without his voice.
In the beginning, he communicated with a white board and a pen. Then, with something called an electrolarynx — a cylindrical metal device about 3 or 4 inches long that he holds to his throat when he speaks.
"I used this for three years before I got my voice prosthesis," he says in the robotic voice produced by the electrolarynx. "It was a barrel of laughs."
Recently, at a scientific conference in Washington, D.C., German researchers showed off a technology that might work even better for Keneda once it's perfected. The system, called EMG-based Silent Speech Recognition (EMG stands for electromyography), relies on a computer to construct words by reading the electrical activity of the muscles in the face.
Keneda hasn't lost his sense of humor, but living without a real voice wasn't always funny. Like when he once called a stranger about buying a truck.
"When he answered and he heard me with my robotic voice, he just hung up," Keneda recalls. "He didn't believe it was a serious call. So I called him back, and he hung up with a threat —'Don't call me back,' you know. I'll never know if I would have bought that truck and he'll never know if he would of sold it," he says.
Today, Keneda speaks a lot better. Instead of the electrolarynx, he now has a voice prosthesis — a small valve surgically inserted into an opening in his throat. When he speaks, he holds his thumb across the opening and uses his breath to force air upward through his throat and mouth to form speech.
But even this prosthesis isn't perfect. His voice is raspy, and he speaks a bit more slowly than he used to because he has to take a breath between phrases. And certain sounds still elude him, like words that begin with the letter 'H'.
"Everything else you can pretty well do, but the letter 'H' is a bother," he says.
The new Silent Speech Recognition program, which could help people like Keneda, was demonstrated for me by German researcher Michael Wand.
"Let us have dinner at the pub," Wand says. About a second later, through a set of speakers attached to Wand's laptop, a computer-generated voice echoes back, "Let us have dinner at the pub."
"A good time would be 7 o'clock," Wand says.
"A good time would be 7 o'clock," replies the computer.
The program is the result of Wand's Ph.D. thesis research at the Karlsruhe Institute of Technology. Although Wand is speaking aloud so that I can hear him, the computer isn't listening to the sound of his voice. Instead, it translates the muscle movements in his face into speech, via a half-dozen or so wires connecting electrodes on his face to the laptop.
The idea is that a person can silently form the words they want to say — not actually speak them — and the computer interprets the muscle movements and turns them into sound. One might say that Wand has taught a computer to read lips.
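To make that idea concrete, here is a minimal, hypothetical sketch in Python of how such a pipeline might be structured: windows of multi-channel EMG data are reduced to simple features and matched against stored word templates. The feature choices, the toy nearest-template classifier and all names (frame_features, NearestTemplateRecognizer, the seven-word vocabulary) are illustrative assumptions, not the researchers' actual system.

```python
# Hypothetical toy sketch of EMG-based silent speech recognition.
# Not the Karlsruhe system; purely for illustration.
import numpy as np

FRAME_LEN = 200  # samples per EMG analysis window (assumed)
VOCAB = ["let", "us", "have", "dinner", "at", "the", "pub"]  # toy vocabulary

def frame_features(emg_window: np.ndarray) -> np.ndarray:
    """Reduce one multi-channel EMG window to simple per-channel features:
    mean absolute value and zero-crossing rate (common EMG descriptors)."""
    mav = np.mean(np.abs(emg_window), axis=0)
    zcr = np.mean(np.abs(np.diff(np.sign(emg_window), axis=0)) > 0, axis=0)
    return np.concatenate([mav, zcr])

class NearestTemplateRecognizer:
    """Toy recognizer: stores one averaged feature template per word and labels
    new windows by nearest template (a stand-in for a real statistical model)."""
    def __init__(self):
        self.templates = {}

    def fit(self, examples: dict):
        # examples: word -> list of EMG windows recorded while mouthing that word
        for word, windows in examples.items():
            feats = np.stack([frame_features(w) for w in windows])
            self.templates[word] = feats.mean(axis=0)

    def predict(self, emg_window: np.ndarray) -> str:
        feats = frame_features(emg_window)
        return min(self.templates,
                   key=lambda w: np.linalg.norm(feats - self.templates[w]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake training data: 8 EMG channels per word, with a word-specific offset.
    train = {w: [rng.normal(i, 1.0, size=(FRAME_LEN, 8)) for _ in range(5)]
             for i, w in enumerate(VOCAB)}
    recognizer = NearestTemplateRecognizer()
    recognizer.fit(train)

    # A "silently mouthed" test window for the word "pub".
    test_window = rng.normal(VOCAB.index("pub"), 1.0, size=(FRAME_LEN, 8))
    print(recognizer.predict(test_window))  # expected: "pub"
    # A real system would pass recognized words to a text-to-speech engine.
```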
But there is still a lot to do before this technology is ready for widespread use — starting with getting rid of the wires. What girl would say yes to a pub dinner with a guy who has electrodes all over his face?
Wand agrees and says that one day the electrodes and the software will be completely integrated into a smartphone that could be held discreetly to the side of the face and produce fluent speech.
The system currently recognizes about 2,000 words and operates with 90 percent accuracy, Wand says. Good enough for everyday speech, he says — well, at least most of the time. At the beginning of our interview, it didn't fare quite so well.
"My name is Michael Wand," tries Wand. "I would be glad to," replies the computer. Wand tries again, clearly a little embarrassed. "The recognizer interprets my muscle movements," Wand says. "The recognizer interprets my muscle," responds the computer. "Therefore I can talk to you by simply mouthing words," says Wand. A second later the computer responds, "Sure you can get my data."
We both start laughing.
"I have no idea what's happening. Maybe it's getting a little bit late. Electrodes falling off ..." Wand says.
Wand tells me that sometimes smiling and other nonverbal facial movements can confuse the system; it's just one of the kinks they are working out to make this a viable product people will actually buy. "We certainly hope to market the system," he says, "but it will look completely different then, I assure you."
Michael Benninger chairs the Cleveland Clinic's Head and Neck Institute. He says that people who have lost their voices have a number of different choices, but the technology each person selects is highly individualized.
"It's always good to have options. Anything we can do to improve our patients' ability to communicate is a good thing," says Benninger. "This may be a good device — particularly for those people who have articulation problems or for those who prefer an electrolarynx."
And for the rest of us, it may just be a way to have a private conversation in public. Just make sure no one is reading your lips.