A team of scientists at the University of Washington (UW) has developed a method that could one day give paralyzed people a form of mind-reading communication.
Results of the recent experiments suggest that, using brain implants and special software, computers could decode brain signals to determine, accurately and in real time, what images a person is seeing.
Seven patients with severe epilepsy took part in an experiment led by neuroscientist Rajesh Rao and neurosurgeon Jeff Ojemann, alongside a team of scientists from UW. The patients had electrodes temporarily implanted in their temporal lobes so that doctors could locate the focal point of their seizures.
The program sampled and digitized the incoming brain signals 1,000 times per second to determine which combination of electrode locations and signals correlated best with what the patients were seeing. It turned out that different neurons fired when people looked at faces than when they looked at houses, the researchers reported.
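To make the idea concrete, here is a minimal sketch, not the researchers' actual code, of how a decoder like the one described might work: signals sampled at 1,000 Hz are reduced to a per-trial feature (mean signal power at one electrode), and a simple nearest-centroid rule assigns each new trial to "face" or "house". The simulated data, feature choice, and classifier are all illustrative assumptions.

```python
# Illustrative sketch (not the study's actual pipeline): decode a binary
# stimulus class (face vs. house) from simulated electrode signals
# sampled at 1,000 Hz. All signal parameters here are hypothetical.
import random

SAMPLE_RATE = 1000  # samples per second, as in the study
random.seed(0)

def simulate_trial(stimulus, n_samples=SAMPLE_RATE):
    """One second of a simulated electrode signal; 'face' trials get a
    slightly larger response amplitude at this electrode (an assumption)."""
    gain = 1.5 if stimulus == "face" else 1.0
    return [gain * random.gauss(0, 1) for _ in range(n_samples)]

def mean_power(signal):
    """Per-trial feature: mean squared amplitude of the signal."""
    return sum(x * x for x in signal) / len(signal)

# Training: average the feature within each class to get one centroid each.
train = [("face", simulate_trial("face")) for _ in range(50)] + \
        [("house", simulate_trial("house")) for _ in range(50)]
centroids = {}
for label in ("face", "house"):
    feats = [mean_power(sig) for lab, sig in train if lab == label]
    centroids[label] = sum(feats) / len(feats)

def decode(signal):
    """Assign the class whose centroid is closest to the trial's feature."""
    f = mean_power(signal)
    return min(centroids, key=lambda lab: abs(f - centroids[lab]))

# Decode held-out trials one at a time, mimicking real-time operation.
correct = sum(decode(simulate_trial(lab)) == lab
              for lab in ["face", "house"] * 25)
print(f"accuracy: {correct}/50")
```

In the real experiment the decoder worked with many electrodes and richer features; the point of the sketch is only the overall loop: digitize, extract a feature per trial, compare against learned class responses.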
"We got different responses from different (electrode) locations; some were sensitive to faces and some were sensitive to houses," Rao said.
Further study, beyond the paper published on January 21 in the journal PLOS Computational Biology, is still needed to see whether the system can learn a more diverse set of images and tell them apart, for instance distinguishing a human face from the face of a dog.
If the expected improvements materialize, the technology could help break down the boundary between machines and the human mind.