Out of Silence
Stroke victim Erik Ramsey is locked in a world without motion or speech. BU scientists are finding a way to hear what he's thinking.
By S.I. Rosenbaum | The Boston Globe | July 27, 2008
Frank Guenther leans toward the computer. On the screen is a blurry image of Erik Ramsey, motionless in his wheelchair 1,000 miles away in Georgia.
"Okay, Erik," Guenther's graduate student, Jonathan Brumberg, says into the microphone. "We'll say, 'Three, two, one,' then 'go,' and that's when you try to make your sound. OK?"
Three. Two. One. Go.
Ramsey is 25 years old and hasn't spoken in almost a decade - not since a stroke sealed him up, awake and intelligent, inside his own skull. He can feel, but he can't move any part of his body except his eyes - and even moving those exhausts him.
But there's an electrode in his brain, and when his neurons fire, a computer records the pattern. For two years, Guenther and his team at Boston University have been decrypting those patterns, working to turn them into speech.
To do so, Guenther relied on an unorthodox insight: that the signals coming from the brain encode actual sound, something like the way an FM radio signal, a telephone line, or even an MP3 file does. His work with Ramsey shows that he may be right.
But this is more than just research. It's a rescue mission - a daring attempt to reach a man isolated in a way few can imagine and pull him back into the world.
How does the brain turn thought into speech?
You think of a word; milliseconds later, hundreds of muscles contract in complex choreography, pushing air through the larynx and past the tongue and creating a sound.
But what happens between thought and sound?
No one really knows.
"The nervous system is a very complicated place," said John Donoghue, a neuroscientist at Brown University. "It remains extremely controversial as to what is going on."
Speech in particular is uncharted territory, he said.
"We know a lot about hand movement," he said, because we can see it modeled in monkeys. "But there is no model for speech."
Guenther, director of the cognitive and neural systems lab at BU, began mapping the brain's language centers 15 years ago. As he learned more about speech, he started to pay attention to something called "formant frequencies" - the acoustic changes that make an "oh" sound different from an "ah" sound.
Guenther theorized that these frequencies are the key to how the brain encodes speech. Just before we speak, he believes, our brain forms a map of the sound we're about to produce.
He never thought about a practical application for his research - not until a scientist in Georgia asked for his help with an experimental project.
The project was Ramsey.
Until Ramsey was 16, he was an easy-going youth with a Georgia drawl who got in trouble for drawing in class and loved skateboards and heavy metal music.
One November night in 1999, Ramsey was riding home from a movie on a dark highway outside Atlanta. His friend's car collided with a minivan, and Ramsey was badly hurt.
In the emergency room, he lay screaming with pain. But when he emerged from 15 hours of surgery, he couldn't speak or move.
It took doctors days to realize what had happened: a trauma-induced stroke in Ramsey's brain stem had severed the link between brain and body.
The condition has a stark, evocative name: locked-in syndrome. It's not clear how many other people are locked in, but the condition is rare, mostly caused by brain stem strokes and nerve disorders such as Lou Gehrig's disease.
For a while after the stroke, Ramsey could move his eyes enough to pick out letters on an alphabet board his father, Eddie, devised. But after a bout of pneumonia, he weakened and lost that, too.
Eddie Ramsey began searching for another option. He found one in Dr. Philip R. Kennedy, founder of Neural Signals Inc. of Atlanta.
In 1986, Kennedy was the first to plant permanent electrodes in the brains of rats. By 1996, one of his test subjects, a locked-in man, was able to type his name on a computer by thinking about moving his hand.
But Kennedy wanted to go further: to reach into the brain's electro-chemical chatter and decode speech directly.
Ramsey volunteered to be his next test subject.
On Dec. 22, 2004, surgeons placed an electrode in the speech center of Ramsey's brain. Like a wiretap on a phone line, the electrode picks up brain signals and sends them to a nearby computer.
But the data the computer received were a mess - a cacophony of neurons all firing at once, incomprehensible as static. No one knew what it might mean. No one knew how to turn the data back into a voice. Kennedy made his data public, and scientists across the country tried to find patterns in the signal.
One of those scientists was Guenther.
When Guenther looked at the data, he thought the signals might encode the formant frequencies he had studied - much like the electric signal on a telephone wire, encoding raw sound. With Brumberg, Guenther set to work decrypting the signal into sound, starting with vowels: ah, oh, ooh, eee.
They sampled Eddie Ramsey's Georgia-inflected speech to make sure they were aiming for the right frequencies. "They sound a bit odd to us Bostonians," Guenther joked.
In February they flew to Atlanta to test their approach with Ramsey in real time.
If it worked, it would add support for Guenther's theory. And if that turns out to be correct, it would be "a fairly big event" in neuroscience, said Donoghue.
Guenther had studied Ramsey's brain for more than a year. But even then, he felt awkward around the unresponsive young man in the wheelchair.
"There's a little bit of trepidation meeting someone like that," he said. "You don't know how to react."
But then Guenther noticed Ramsey turning his eyes toward Guenther's team members whenever they were near, looking at them closely.
"It gave us a sign," Guenther said. "He knew who we were, and he was excited about what we were doing."
When they hooked Ramsey up to the computer, the first run-through was a failure - Ramsey was unable to make the target sound. Guenther was tense, he said later, though he tried not to show it.
"It was like watching the Red Sox when they were on the verge of being eliminated from the playoffs by the Yankees," he said.
Then, on the second try, Ramsey got one vowel right, then another.
It was the first time in nine years that he had made a voluntary sound in real time.
The scientists in the room began to cheer.
On a balmy July day, in Guenther's Kenmore Square lab, he and Brumberg are fighting with the computer.
It freezes again. Brumberg sighs and reboots.
Finally everything's up and running. Brumberg types in commands to a computer in Kennedy's Atlanta lab, where Ramsey is waiting. On the screen, a live video shows Ramsey with his eyes closed, his father at his side.
"Are you ready?" Brumberg asks him. "OK. Say 'ooh.' "
In the image on the screen, Ramsey doesn't move, but a flat computer tone fills the air. It starts out like a hum, then it deepens.
It's hard work. When Ramsey gets too tired, they take a break and let him listen to music: Metallica, Ozzy Osbourne. Inside his head, he sings along.
On days when they're not at Kennedy's lab, the Ramseys drive to a park about a mile from their house. Eddie Ramsey pushes his son's chair along the paved trails, the two of them feeling the muggy air against their faces.
"He's changed 100 percent - especially since Boston University and them have gotten involved," Eddie Ramsey said. "He's seen a lot more results from it, a lot better results. It's encouraged him a lot. ... I think it gives him another reason to live."
By now, the computer can recognize the vowel sound Ramsey is thinking of with about 75 percent accuracy - and that number is climbing. Guenther and his team are going to try consonants next. They think whole words will take about two years of work.
What will Eddie want to ask his son, if he ever can really converse again?
"I think he'll have enough to say," he said. "I won't have to ask him any questions."