Whereas most software used in health care these days is hand-coded by humans, Google's new system learns to make sense of the data on its own. "They understand what problems are worth solving," said Vik Bajaj, a former executive at Verily, Google's health-focused sister company. "They've now done enough small experiments to know exactly what the fruitful directions are."
What makes a neural network distinctive is that, unlike conventionally programmed software, it has the ability to learn: it analyzes information by mining patterns from examples instead of being hard-coded to follow a specific set of rules. For a paper published in May in Nature, Google fed health record data into a neural network and found that it improved the accuracy of projected outcomes. Those included the length of a patient's hospital visit, when they'd be ready to be discharged, the likelihood that they'd have to be readmitted soon, and even when they would die. Yes, you read that right.
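To make "learning from examples" concrete, here is a minimal sketch, not Google's actual model: a single logistic-regression neuron trained by gradient descent on made-up, hypothetical patient features (the feature names, toy numbers, and `train`/`predict` helpers are all illustrative assumptions, not anything from the study).

```python
import math

def train(examples, labels, epochs=2000, lr=0.1):
    """Adjust weights so predictions match the labeled examples.

    No rules about patients are hard-coded; the weights are
    discovered purely from the example data below.
    """
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            err = p - y                      # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability estimate for a new, unseen example."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy, fabricated data: [normalized age, normalized prior admissions]
# paired with whether that patient was readmitted (1) or not (0).
X = [[0.2, 0.1], [0.9, 0.8], [0.3, 0.2], [0.8, 0.9]]
y = [0, 1, 0, 1]

w, b = train(X, y)
print(predict(w, b, [0.85, 0.9]))  # high risk: resembles readmitted cases
print(predict(w, b, [0.20, 0.15]))  # low risk: resembles the others
```

A real clinical model involves many layers and billions of data points, but the core idea is the same: the program infers its own decision rule from labeled examples rather than being told one.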
According to Nigam Shah, an associate professor at Stanford University, about 80 percent of the time spent on existing predictive models goes toward getting the relevant information into a digestible form. With Google's program, "you can throw in the kitchen sink and not have to worry about it," Shah said, according to Bloomberg.
The study used data from two hospitals, the University of California, San Francisco and the University of Chicago, to create its predictive model. It built a separate model for each hospital, drawing on a combined 46 billion data points, anonymized to protect patients and the hospitals from potential violations of the Health Insurance Portability and Accountability Act (HIPAA).
The study highlighted the case of a woman with late-stage breast cancer. The hospital's computers estimated that she had a 9.3 percent chance of dying during her visit. Google analyzed 175,639 data points from the woman's records and found that her risk of dying was actually 19.9 percent. She died a few days later.
Medical professionals were most impressed by the model's ability to draw on data that was previously hard to access, like notes scribbled in a PDF and old charts, material that would take a human hours to comb through for pertinent information. Google's model also surfaced the records on which it based its conclusions.
Google's product is expected to have huge implications in the healthcare world. It'll help doctors diagnose diseases and prioritize patient care, among numerous other benefits. Already, other AI systems have proven more capable of diagnosing lung cancer and heart disease than human doctors.
But Google isn't stopping there. According to EHR (Electronic Health Records) Intelligence, Google recently posted four internal job openings for a project called "Medical Digital Assist," with the aim of finding developers to build a "next gen clinical visit experience."
That project will use Google's existing voice recognition technology to listen to patients and take notes in place of a doctor. Simultaneously, it will process that information, picking out key pieces of data to paint a picture of the patient's ailments.
How long before your doctor starts to look like the holographic chief medical officer from Star Trek: Voyager, whose only quarters were chips in the computer's memory? Nobody can say for sure, but Google's AI chief Jeff Dean told Bloomberg that the company's next step is to get clinics to adopt its predictive system. As Google partners with more institutions, the model will only improve as it gains access to their records, and a better model could become more ubiquitous at institutions whose workers are sworn to save lives.