A type of artificial intelligence known as deep learning successfully detected molecular and genetic alterations based only on tumor images across 14 cancer types, including those of the head and neck, in an international study supported in part by the National Institute of Dental and Craniofacial Research (NIDCR). These findings, NIDCR said, raise the possibility that clinicians could adapt deep learning to deliver personalized cancer care more rapidly and cheaply.
Traditionally, many cancers are diagnosed by surgically removing a tissue sample from the area in question and examining thin sections of it on slides under a microscope. With this method, pathologists can recognize cancer based on the size, shape, and structure of the tissue and cells. Recent advances in molecular and genetic testing enable clinicians to tailor treatment to the unique profile of a patient’s tumor. However, NIDCR said, these advanced tests can be costly and take days or even weeks to process, limiting their availability to many patients.
“We asked if it’s possible to molecularly subtype a patient’s cancer based only on slide images of tumors,” said Alexander Pearson, MD, PhD, assistant professor of medicine at the University of Chicago and co-lead of the study.
The researchers’ rationale is based on evidence that cancerous genetic alterations cause changes in tumor cell behavior, which in turn affects cell shape, size, and structure. If so, the researchers hypothesized, these features might be apparent in slide images and detectable by a computer. So, the researchers began developing a computer algorithm capable of detecting such changes using publicly available tumor images and corresponding genetic and molecular information.
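The core idea can be illustrated with a toy supervised-learning setup. The study itself used deep neural networks trained directly on slide images; the sketch below is a deliberately simplified stand-in (a from-scratch logistic regression) in which two hypothetical image-derived morphology features, such as mean nucleus size and cell density, are mapped to a binary "alteration present" label. The feature names and synthetic data are illustrative assumptions, not the researchers' actual inputs.

```python
# Illustrative sketch only: the real study trained deep networks on slide
# images. Here, a toy logistic regression maps two hypothetical
# morphology features (assumed for illustration) to a binary label
# indicating whether a genetic alteration is present.
import math
import random

def train_logistic(samples, labels, lr=0.5, epochs=200):
    """Fit weights and bias by stochastic gradient descent on logistic loss."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of label 1
            err = p - y                       # gradient of the per-sample loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return 1 ("alteration present") if predicted probability >= 0.5."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Synthetic, well-separated feature data standing in for image-derived
# measurements: "altered" tumors (label 1) cluster near (2, 2),
# "unaltered" tumors (label 0) near (-2, -2).
random.seed(0)
samples = ([[random.gauss(2, 0.5), random.gauss(2, 0.5)] for _ in range(50)]
           + [[random.gauss(-2, 0.5), random.gauss(-2, 0.5)] for _ in range(50)])
labels = [1] * 50 + [0] * 50

w, b = train_logistic(samples, labels)
accuracy = sum(predict(w, b, x) == y for x, y in zip(samples, labels)) / len(labels)
```

In the actual study, the "features" are not hand-crafted; a deep network learns them directly from pixel data, which is what allows subtle, non-obvious morphological signatures of mutations to be picked up.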
Once the researchers were satisfied with the program’s predictive powers, they tested whether it could detect molecular alterations directly from tissue images of more than 5,000 patients across 14 cancer types, including head and neck cancer. These anonymized patient images and data came from The Cancer Genome Atlas (TCGA), a National Cancer Institute database containing molecular characterizations of 20,000 patient samples spanning 33 cancer types.
“We had the algorithm focus exclusively on alterations that are clinically actionable, meaning there’s scientific evidence to support their use to inform patient care,” said Pearson.
For many of the alterations used in the study, targeted drugs are already approved by the Food and Drug Administration or are being tested in clinical trials. According to the researchers, the deep learning program successfully predicted a range of genetic and molecular changes across all 14 cancer types tested.
For example, the algorithm detected with high accuracy a mutated form of the TP53 gene, which is thought to be a main driver of head and neck cancer. It also accurately predicted the presence of standard molecular markers such as hormone receptors in breast cancer. Hormone receptor status is an important factor in guiding treatment options for patients with breast cancer, NIDCR said.
“We demonstrated the feasibility of using deep learning to infer genetic and molecular alterations, including driver mutations responsible for carcinogenesis, from routine tissue slide images,” said Pearson.
According to the researchers, the deep learning program could be optimized to run on mobile devices, making it easier for clinicians to adopt one day. Pearson stressed, however, that the program isn’t yet ready for clinical use. The researchers are working to improve its accuracy, in part by re-training it on a larger number of patient samples and validating it against non-TCGA datasets.
Nevertheless, “the findings open up a path toward more rapid and less costly cancer diagnoses,” said Pearson. “It’s our hope that computational tools like ours could help clinicians develop earlier and more widely accessible personalized treatment plans for patients.”
The study, “Pan-Cancer Image-Based Detection of Clinically Actionable Genetic Alterations,” was published in Nature Cancer.