Your smartphone could soon be the first step in diagnosing skin cancer

If caught early, skin cancer isn’t particularly deadly. But unfortunately for many, signs and symptoms go unnoticed until the disease has already done irreversible damage. Research findings published in Nature today hint at a future where anyone, anywhere, might be able to perform a basic skin cancer screening on a smartphone.

Using machine learning, a Stanford team that included Udacity’s Sebastian Thrun was able to match the accuracy of dermatologists at identifying skin cancer. The classifier the group built is in no way a panacea offering a precise and irrefutable cancer diagnosis. But even by just matching fallible human accuracy, the model could pave the way for a less costly, highly scalable way to get more people to take life-saving preliminary screenings.

The team started with an existing convolutional neural network architecture, previously trained on 1.28 million images from the ImageNet dataset. They then applied transfer learning, working with collaborators to assemble a database of 129,450 clinical images covering 2,000 different diseases. The training dataset was drawn from over 18 online repositories, along with clinical images from Stanford University Medical Center.
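To make that recipe concrete, here is a minimal sketch of the general transfer-learning pattern in PyTorch. It is not the Stanford group’s actual code: the architecture (a torchvision ResNet standing in for whatever network the team used), the dataset path and the training settings are all placeholder assumptions.

```python
# Minimal transfer-learning sketch (illustrative only, not the study's code).
# Assumes a hypothetical folder of labeled skin-lesion images at ./lesions/train.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 2000  # hypothetical: one label per disease class

# Start from a network pretrained on ImageNet's 1.28 million images.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Swap the ImageNet classification head for one sized to our disease labels.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Standard preprocessing matching what the pretrained network expects.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("./lesions/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Fine-tune end-to-end: every layer updates, not just the new head.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The key move is the one described above: reuse weights learned on ImageNet, replace the final layer, and fine-tune the whole network on the new medical labels.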

“Our system requires no hand-crafted features; it is trained end-to-end directly from image labels and raw pixels, with a single network for both photographic and dermoscopic images,” the group noted.

A set of high-quality, biopsy-confirmed images was then used for validation. When both were asked to classify the same set of 180 images, the algorithm essentially tied the performance of the clinicians.
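Comparisons like this usually come down to sensitivity and specificity. The hedged sketch below, using made-up labels and scores rather than data from the study, shows how such a head-to-head is typically computed: the model traces out a whole ROC curve, while each clinician contributes a single sensitivity/specificity point to compare against it.

```python
# Illustrative evaluation sketch; the labels and scores below are random
# placeholders, not data from the study.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical ground truth for 180 biopsy-confirmed images (1 = malignant).
y_true = np.random.randint(0, 2, size=180)
# Hypothetical model-predicted malignancy probabilities.
y_score = np.random.rand(180)

# Area under the ROC curve summarizes the classifier across all thresholds.
auc = roc_auc_score(y_true, y_score)
print(f"AUC: {auc:.3f}")

# Each clinician is a single (sensitivity, specificity) point; the model
# "matches" clinicians when their points fall on or below its curve.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, th in list(zip(fpr, tpr, thresholds))[:5]:
    print(f"threshold={th:.2f}  sensitivity={t:.2f}  specificity={1 - f:.2f}")
```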

This work parallels efforts by Google’s DeepMind and Microsoft to use machine learning to classify conditions that can lead to blindness. The first line of defense when it comes to health is often a simple visual scan, something data-fueled deep learning has proven adept at. This year’s Data Science Bowl features a $1 million purse for engineers who can classify images of potentially cancerous lesions in the lungs.

But for machine learning to truly change the way we think about healthcare, algorithms need to escape the lab and find new homes on everyday electronic devices — this is easier said than done.

Most machine learning tasks still demand computational power that simply doesn’t exist yet on mobile devices. And perhaps more importantly, prior efforts have been thwarted by the highly variable image data that smartphone cameras produce in the real world. The open question is whether greater amounts of data can overcome factors like lighting, angle and zoom.
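One common answer is aggressive data augmentation: synthetically varying lighting, angle and zoom at training time so the model learns to shrug off the same variation at inference time. A minimal sketch, with arbitrary parameter ranges chosen purely for illustration:

```python
# Illustrative augmentation pipeline simulating smartphone-camera variation;
# the parameter ranges are arbitrary, not from any published pipeline.
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.4),  # lighting
    transforms.RandomRotation(degrees=30),                 # angle
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),   # zoom
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# At training time, apply to any PIL image: tensor = augment(img)
```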

If we can pull it off, such technology has the potential to save lives and cut healthcare costs. It’s not hard to imagine making a small in-app purchase for a preliminary diagnosis in lieu of a doctor’s visit that would cost 10 times as much and require time off work.