Artificial intelligence originally aspired to replace doctors. Researchers imagined robots that could ask you questions, run the answers through an algorithm that learned with experience and tell you whether you had the flu or a cold. Those promises largely went unfulfilled, however, because early AI algorithms were too rudimentary to perform those functions.
Particularly tricky was the variability between people, which caused basic machine learning algorithms to miss the patterns. Eventually, though, a subset of AI called deep learning became sensitive enough to recognize speech from voice data. Although deep learning algorithms required loads of training data, they could eventually learn to recognize words regardless of accents and other differences in speech patterns.
After recognizing speech, technologists applied deep learning to recognize objects in image data — which remains its primary application today. For instance, driverless cars largely depend on deep learning to identify and navigate around people to safely get their occupants home. In the health field, a pack of companies — including San Francisco-based startup Enlitic — are applying deep learning to recognize suspicious masses on radiological scans that are likely cancerous. Many of these image recognition tools are already used in hospitals.
But identifying diseases by processing pictures of them taps only a fraction of deep learning's power. Examining only a mass's shape and density ignores the mountains of data hiding at the molecular level. This has not gone unnoticed.
Now, more companies are whispering about promises of deep learning algorithms that can find suspicious patterns in the body's biochemistry. For instance, another Bay Area startup, IPMD, Inc., is developing algorithms that can process thousands of protein concentrations from a single blood sample to find patterns that are linked to certain kinds of cancer. These have the potential to be enormously more powerful than algorithms that process image data, as different types of cells secrete thousands of molecules that provide clues to their makeup. Deep learning algorithms that can unlock this information will diagnose an array of diseases far more accurately.
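To make "finding patterns in protein concentrations" concrete, here is a toy sketch. Nothing about IPMD's actual system is described in this article, so everything below is invented for illustration: synthetic "blood samples" in which a hidden subset of proteins is elevated in positive cases, and a single logistic unit (the simplest building block of a neural network) that learns which concentrations matter without being told.

```python
import math
import random

random.seed(0)

N_PROTEINS = 20  # real panels measure thousands; kept tiny for illustration

# Synthetic "blood samples": in positive samples, a hidden subset of
# proteins is elevated. This stands in for the real biochemical signal.
signal = random.sample(range(N_PROTEINS), 5)

def make_sample(positive):
    x = [random.gauss(1.0, 0.3) for _ in range(N_PROTEINS)]
    if positive:
        for i in signal:
            x[i] += 0.8  # the hidden pattern the model must discover
    return x, 1.0 if positive else 0.0

data = [make_sample(i % 2 == 0) for i in range(400)]

# A single logistic unit -- the simplest possible "pattern finder" over
# concentrations. A real deep network stacks many layers of these.
w = [0.0] * N_PROTEINS
b = 0.0
lr = 0.1

def predict(x):
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    z = max(-30.0, min(30.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(200):  # plain stochastic gradient descent
    for x, y in data:
        err = predict(x) - y
        for i in range(N_PROTEINS):
            w[i] -= lr * err * x[i]
        b -= lr * err

accuracy = sum((predict(x) > 0.5) == (y == 1.0) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is that no one tells the model which proteins form the signal; it assigns large weights to the informative concentrations on its own, which is exactly what makes molecular data so attractive as an input.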
The precision of deep learning algorithms is becoming even greater as the digital health revolution unlocks new sources of data. For instance, the mandate to digitize medical records now provides easier access for AI to crunch demographic data and health histories, along with direct testing data.
Artificially intelligent heart disease diagnostics have benefited extraordinarily from new sources of data. As far back as 1997, technologists developed neural networks that could determine whether a patient had suffered a heart attack by analyzing digital ECG data alone. More recently, deep neural networks have been developed to diagnose an even more elusive malady, heart disease itself, with 94 percent accuracy.
The algorithms combined data that included the patient’s profession, fat and cholesterol levels, family history and genetic data. The researchers found that the accuracy increased as more types of data were included. Now, a Bay Area startup called Cardiogram is using deep learning to detect arrhythmia by tracking ECG data over long periods of time. Tracking long-term ECG data only became possible with the advent of fitness trackers like the Apple Watch.
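The finding that accuracy rises as more types of data are included can be illustrated with a small synthetic experiment. The study's actual data and model are not described here, so the feature groups below (standing in for cholesterol levels, family history and genetic markers) and the "risk" rule that generates labels are entirely made up; the same simple logistic model is fit three times, on one, two, then all three groups.

```python
import math
import random

random.seed(1)

# Three synthetic feature groups standing in for cholesterol levels,
# family history and genetic markers. The invented risk depends on all three.
def make_patient():
    chol = random.gauss(0, 1)
    family = random.gauss(0, 1)
    genes = random.gauss(0, 1)
    risk = chol + family + genes + random.gauss(0, 0.5)
    return [chol, family, genes], 1.0 if risk > 0 else 0.0

data = [make_patient() for _ in range(500)]

def train_and_score(n_features):
    """Fit a logistic unit using only the first n_features groups."""
    w, b, lr = [0.0] * n_features, 0.0, 0.1

    def p(x):
        z = b + sum(wi * xi for wi, xi in zip(w, x[:n_features]))
        return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

    for _ in range(100):  # stochastic gradient descent
        for x, y in data:
            err = p(x) - y
            for i in range(n_features):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return sum((p(x) > 0.5) == (y == 1.0) for x, y in data) / len(data)

scores = [train_and_score(k) for k in (1, 2, 3)]
for k, s in zip((1, 2, 3), scores):
    print(f"{k} feature group(s): accuracy {s:.2f}")
```

Because the label depends on all three groups, a model shown only one of them hits a hard ceiling; each added data type supplies signal the others cannot, which mirrors the researchers' observation.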
But because the patterns these deep learning diagnostics detect are so subtle, clinicians will not be able to validate them, which will make them hard for the industry to accept. Unlike systems that merely highlight suspect masses on radiological scans, where a clinician can verify the results, a deep learning algorithm that balances thousands of protein concentrations to diagnose cancer finds patterns that cannot be articulated to a human being. As with riding in a driverless car, handing over our lives to the intelligence of a machine is unnerving.
Also likely to object is the government. The FDA has already issued guidance for machine-learning-based systems that parse image-based radiological data. The guidance document details the requirements for 510(k) submissions (the abbreviated FDA approval path based on preexisting devices) for systems that only highlight suspicious masses on scans. The submission requires lengthy disclosure of exactly how the algorithm's fine points work; for instance, the guidance suggests adding an explanation of the geometric features identified and used to classify suspicious shapes.
This will not be possible for many deep learning systems. In fact, the brilliance of deep learning is that it does not require the designer to pre-define the features it is looking for — it discovers them on its own. The result is that the features deep learning uses to identify diseases will be almost entirely unknown to the humans operating it. The FDA has yet to even comment on this wave of diagnostics.
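The contrast with hand-defined geometric features shows up even in a toy network. The sketch below has no relation to any FDA-cleared system; it trains a small neural network on XOR, the classic task whose solution requires intermediate features that no designer specified in advance. The features the network invents live in its hidden-layer weights, which are rows of raw numbers with no human-readable names.

```python
import math
import random

random.seed(2)

# XOR: output 1 only when the two inputs differ. No single weighted sum of
# the inputs solves it; the hidden layer must invent intermediate features.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

H = 8  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

def forward(x):
    h = [math.tanh(b1[j] + w1[j][0] * x[0] + w1[j][1] * x[1]) for j in range(H)]
    z = b2 + sum(w2[j] * h[j] for j in range(H))
    return h, 1.0 / (1.0 + math.exp(-z))

def total_loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

initial_loss = total_loss()
for _ in range(3000):  # backpropagation, one sample at a time
    for x, y in data:
        h, p = forward(x)
        dz = 2 * (p - y) * p * (1 - p)         # gradient through the sigmoid
        for j in range(H):
            dh = dz * w2[j] * (1 - h[j] ** 2)  # and through each tanh unit
            w2[j] -= lr * dz * h[j]
            b1[j] -= lr * dh
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
        b2 -= lr * dz

final_loss = total_loss()
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
# The learned "features" are the rows of w1: unlabeled numbers, not the kind
# of named geometric property a 510(k) submission could describe.
```

A regulator could ask what each row of `w1` means and get no answer, which is the disclosure problem in miniature.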
Hopefully, though, the power of these algorithms to screen efficiently for cancer and other diseases will outweigh the regulatory burden. Eventually, as with driverless cars, the benefits of handing over our lives to machines will be so great that we will learn to trust them — hopefully not to our peril.